Dataset schema (reconstructed from the dataset-viewer header; "⌀" in the original marked nullable columns):

| column | type | length / list size | nullable |
|---|---|---|---|
| id | string | 9-10 | no |
| submitter | string | 1-64 | yes |
| authors | string | 4-20.7k | no |
| title | string | 4-246 | no |
| comments | string | 1-523 | yes |
| journal-ref | string | 4-404 | yes |
| doi | string | 11-153 | yes |
| report-no | string | 2-254 | yes |
| categories | string | 5-98 | no |
| license | string (9 classes) | n/a | no |
| orig_abstract | string | 14-3.35k | no |
| versions | list | 1-60 | no |
| update_date | string | 10 | no |
| authors_parsed | list | 1-1.35k | no |
| abstract | string | 11-3.34k | no |
**2207.08000**
- submitter: Ruizhi Shao
- authors: Ruizhi Shao, Zerong Zheng, Hongwen Zhang, Jingxiang Sun, Yebin Liu
- title: DiffuStereo: High Quality Human Reconstruction via Diffusion-based Stereo Using Sparse Cameras
- comments: Accepted by ECCV2022
- journal-ref: null
- doi: null
- report-no: null
- categories: cs.CV
- license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- orig_abstract: We propose DiffuStereo, a novel system using only sparse cameras (8 in this work) for high-quality 3D human reconstruction. At its core is a novel diffusion-based stereo module, which introduces diffusion models, a type of powerful generative models, into the iterative stereo matching network. To this end, we design a new diffusion kernel and additional stereo constraints to facilitate stereo matching and depth estimation in the network. We further present a multi-level stereo network architecture to handle high-resolution (up to 4k) inputs without requiring unaffordable memory footprint. Given a set of sparse-view color images of a human, the proposed multi-level diffusion-based stereo network can produce highly accurate depth maps, which are then converted into a high-quality 3D human model through an efficient multi-view fusion strategy. Overall, our method enables automatic reconstruction of human models with quality on par to high-end dense-view camera rigs, and this is achieved using a much more light-weight hardware setup. Experiments show that our method outperforms state-of-the-art methods by a large margin both qualitatively and quantitatively.
- versions: v1 (Sat, 16 Jul 2022 19:08:18 GMT); v2 (Wed, 20 Jul 2022 08:12:00 GMT)
- update_date: 2022-07-21
- authors_parsed: [["Shao", "Ruizhi", ""], ["Zheng", "Zerong", ""], ["Zhang", "Hongwen", ""], ["Sun", "Jingxiang", ""], ["Liu", "Yebin", ""]]
- abstract: identical to orig_abstract
**2403.04135**
- submitter: Yui Uehara
- authors: Yui Uehara
- title: Unsupervised Learning of Harmonic Analysis Based on Neural HSMM with Code Quality Templates
- comments: 20 pages, 5 figures, the original edition of this paper will be published in the ICNMC2024 Proceedings and this arXiv publication is a copy
- journal-ref: null
- doi: null
- report-no: null
- categories: cs.AI
- license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- orig_abstract: This paper presents a method of unsupervised learning of harmonic analysis based on a hidden semi-Markov model (HSMM). We introduce the chord quality templates, which specify the probability of pitch class emissions given a root note and a chord quality. Other probability distributions that comprise the HSMM are automatically learned via unsupervised learning, which has been a challenge in existing research. The results of the harmonic analysis of the proposed model were evaluated using existing labeled data. While our proposed method has yet to perform as well as existing models that used supervised learning and complex rule design, it has the advantage of not requiring expensive labeled data or rule elaboration. Furthermore, we also show how to recognize the tonic without prior knowledge, based on the transition probabilities of the Markov model.
- versions: v1 (Thu, 7 Mar 2024 01:29:48 GMT)
- update_date: 2024-03-08
- authors_parsed: [["Uehara", "Yui", ""]]
- abstract: identical to orig_abstract
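The chord quality templates described in the record above (2403.04135) lend themselves to a small illustration. The sketch below is not the paper's model and its weights are invented; it only shows, under assumed values, how a template could map a root note and a chord quality to pitch-class emission probabilities. All names and numbers here are hypothetical:

```python
# Semitone offsets from the root for a few chord qualities (illustrative only).
QUALITY_TEMPLATES = {
    "maj": (0, 4, 7),
    "min": (0, 3, 7),
    "dom7": (0, 4, 7, 10),
}

def emission_probs(root, quality, chord_weight=6.0, other_weight=1.0):
    """Return a 12-vector of pitch-class emission probabilities:
    chord tones get `chord_weight`, every other pitch class gets
    `other_weight`, and the vector is normalized to sum to 1."""
    tones = {(root + offset) % 12 for offset in QUALITY_TEMPLATES[quality]}
    weights = [chord_weight if pc in tones else other_weight
               for pc in range(12)]
    total = sum(weights)
    return [w / total for w in weights]

probs = emission_probs(root=7, quality="maj")  # G major: G, B, D
print(round(probs[7], 3), round(probs[8], 3))  # chord tone vs. non-chord tone
```

In the paper the remaining HSMM distributions are learned without supervision; this sketch hard-codes everything purely to make the template idea concrete.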
**2401.15544**
- submitter: Ramy Harik
- authors: Ramy Harik, Fadi El Kalach, Jad Samaha, Devon Clark, Drew Sander, Philip Samaha, Liam Burns, Ibrahim Yousif, Victor Gadow, Theodros Tarekegne, Nitol Saha
- title: Analog and Multi-modal Manufacturing Datasets Acquired on the Future Factories Platform
- comments: 11 pages, datasets for Future Factories
- journal-ref: null
- doi: null
- report-no: null
- categories: cs.LG
- license: http://creativecommons.org/licenses/by-sa/4.0/
- orig_abstract: Two industry-grade datasets are presented in this paper that were collected at the Future Factories Lab at the University of South Carolina on December 11th and 12th of 2023. These datasets are generated by a manufacturing assembly line that utilizes industrial standards with respect to actuators, control mechanisms, and transducers. The two datasets were both generated simultaneously by operating the assembly line for 30 consecutive hours (with minor filtering) and collecting data from sensors equipped throughout the system. During operation, defects were also introduced into the assembly operation by manually removing parts needed for the final assembly. The datasets generated include a time series analog dataset and the other is a time series multi-modal dataset which includes images of the system alongside the analog data. These datasets were generated with the objective of providing tools to further the research towards enhancing intelligence in manufacturing. Real manufacturing datasets can be scarce let alone datasets with anomalies or defects. As such these datasets hope to address this gap and provide researchers with a foundation to build and train Artificial Intelligence models applicable for the manufacturing industry. Finally, these datasets are the first iteration of published data from the future Factories lab and can be further adjusted to fit more researchers needs moving forward.
- versions: v1 (Sun, 28 Jan 2024 02:26:58 GMT)
- update_date: 2024-01-30
- authors_parsed: [["Harik", "Ramy", ""], ["Kalach", "Fadi El", ""], ["Samaha", "Jad", ""], ["Clark", "Devon", ""], ["Sander", "Drew", ""], ["Samaha", "Philip", ""], ["Burns", "Liam", ""], ["Yousif", "Ibrahim", ""], ["Gadow", "Victor", ""], ["Tarekegne", "Theodros", ""], ["Saha", "Nitol", ""]]
- abstract: identical to orig_abstract
**2107.07728**
- submitter: Pascal Pfeiffer
- authors: Christof Henkel, Pascal Pfeiffer and Philipp Singer
- title: Recognizing bird species in diverse soundscapes under weak supervision
- comments: All authors contributed equally, 8 pages, 4 figures, submitted to CEUR-WS
- journal-ref: null
- doi: null
- report-no: null
- categories: cs.SD cs.LG eess.AS
- license: http://creativecommons.org/licenses/by/4.0/
- orig_abstract: We present a robust classification approach for avian vocalization in complex and diverse soundscapes, achieving second place in the BirdCLEF2021 challenge. We illustrate how to make full use of pre-trained convolutional neural networks, by using an efficient modeling and training routine supplemented by novel augmentation methods. Thereby, we improve the generalization of weakly labeled crowd-sourced data to productive data collected by autonomous recording units. As such, we illustrate how to progress towards an accurate automated assessment of avian population which would enable global biodiversity monitoring at scale, impossible by manual annotation.
- versions: v1 (Fri, 16 Jul 2021 06:54:38 GMT)
- update_date: 2021-07-19
- authors_parsed: [["Henkel", "Christof", ""], ["Pfeiffer", "Pascal", ""], ["Singer", "Philipp", ""]]
- abstract: identical to orig_abstract
**0711.3176**
- submitter: Seyed Abolfazl Motahari
- authors: Abolfazl S. Motahari and Amir K. Khandani
- title: To Decode the Interference or To Consider it as Noise
- comments: submitted to IEEE Transactions on Information Theory
- journal-ref: null
- doi: null
- report-no: null
- categories: cs.IT math.IT
- license: null
- orig_abstract: We address single-user data transmission over a channel where the received signal incurs interference from a finite number of users (interfering users) that use single codebooks for transmitting their own messages. The receiver, however, is allowed to decode interfering users' messages. This means the signal transmitted from any interfering user is either decoded or considered as noise at the receiver side. We propose the following method to obtain an achievable rate for this channel. Assuming its own data is decoded successfully, the receiver partitions the set of interfering users into two disjoint subsets, namely the set of decodable users and the set of non-decodable users. Then the transmitter's rate is chosen such that the intended signal can be jointly decoded with the set of decodable users. To show the strength of this method, we prove that for the additive Gaussian channel with Gaussian interfering users, the Gaussian distribution is optimal and the achievable rate is the capacity of this channel. To obtain the maximum achievable rate, one needs to find the maximum decodable subset of interfering users. Due to the large number of possible choices, having efficient algorithms that find the set of decodable users with maximum cardinality is desired. To this end, we propose an algorithm that enables the receiver to accomplish this task in polynomial time.
- versions: v1 (Tue, 20 Nov 2007 17:34:30 GMT)
- update_date: 2007-11-21
- authors_parsed: [["Motahari", "Abolfazl S.", ""], ["Khandani", "Amir K.", ""]]
- abstract: identical to orig_abstract
**2107.13118**
- submitter: Qiaoyong Zhong
- authors: Jinlei Hou, Yingying Zhang, Qiaoyong Zhong, Di Xie, Shiliang Pu, Hong Zhou
- title: Divide-and-Assemble: Learning Block-wise Memory for Unsupervised Anomaly Detection
- comments: accepted by ICCV 2021
- journal-ref: null
- doi: null
- report-no: null
- categories: cs.CV
- license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- orig_abstract: Reconstruction-based methods play an important role in unsupervised anomaly detection in images. Ideally, we expect a perfect reconstruction for normal samples and poor reconstruction for abnormal samples. Since the generalizability of deep neural networks is difficult to control, existing models such as autoencoder do not work well. In this work, we interpret the reconstruction of an image as a divide-and-assemble procedure. Surprisingly, by varying the granularity of division on feature maps, we are able to modulate the reconstruction capability of the model for both normal and abnormal samples. That is, finer granularity leads to better reconstruction, while coarser granularity leads to poorer reconstruction. With proper granularity, the gap between the reconstruction error of normal and abnormal samples can be maximized. The divide-and-assemble framework is implemented by embedding a novel multi-scale block-wise memory module into an autoencoder network. Besides, we introduce adversarial learning and explore the semantic latent representation of the discriminator, which improves the detection of subtle anomaly. We achieve state-of-the-art performance on the challenging MVTec AD dataset. Remarkably, we improve the vanilla autoencoder model by 10.1% in terms of the AUROC score.
- versions: v1 (Wed, 28 Jul 2021 01:14:32 GMT)
- update_date: 2021-07-29
- authors_parsed: [["Hou", "Jinlei", ""], ["Zhang", "Yingying", ""], ["Zhong", "Qiaoyong", ""], ["Xie", "Di", ""], ["Pu", "Shiliang", ""], ["Zhou", "Hong", ""]]
- abstract: identical to orig_abstract
**2404.01615**
- submitter: Melanie McGrath
- authors: Melanie J. McGrath (CSIRO), Andreas Duenser (CSIRO), Justine Lacey (CSIRO), Cecile Paris (CSIRO)
- title: Collaborative human-AI trust (CHAI-T): A process framework for active management of trust in human-AI collaboration
- comments: 36 pages, 2 figures
- journal-ref: null
- doi: null
- report-no: null
- categories: cs.HC
- license: http://creativecommons.org/licenses/by/4.0/
- orig_abstract: Collaborative human-AI (HAI) teaming combines the unique skills and capabilities of humans and machines in sustained teaming interactions leveraging the strengths of each. In tasks involving regular exposure to novelty and uncertainty, collaboration between adaptive, creative humans and powerful, precise artificial intelligence (AI) promises new solutions and efficiencies. User trust is essential to creating and maintaining these collaborative relationships. Established models of trust in traditional forms of AI typically recognize the contribution of three primary categories of trust antecedents: characteristics of the human user, characteristics of the technology, and environmental factors. The emergence of HAI teams, however, requires an understanding of human trust that accounts for the specificity of task contexts and goals, integrates processes of interaction, and captures how trust evolves in a teaming environment over time. Drawing on both the psychological and computer science literature, the process framework of trust in collaborative HAI teams (CHAI-T) presented in this paper adopts the tripartite structure of antecedents established by earlier models, while incorporating team processes and performance phases to capture the dynamism inherent to trust in teaming contexts. These features enable active management of trust in collaborative AI systems, with practical implications for the design and deployment of collaborative HAI teams.
- versions: v1 (Tue, 2 Apr 2024 03:39:06 GMT)
- update_date: 2024-04-03
- authors_parsed: [["McGrath", "Melanie J.", "", "CSIRO"], ["Duenser", "Andreas", "", "CSIRO"], ["Lacey", "Justine", "", "CSIRO"], ["Paris", "Cecile", "", "CSIRO"]]
- abstract: identical to orig_abstract
**2206.12766**
- submitter: Md Jobair Hossain Faruk
- authors: Md Jobair Hossain Faruk, Hossain Shahriar, Maria Valero, Sweta Sneha, Sheikh I. Ahamed Mohammad Rahman
- title: Towards Blockchain-Based Secure Data Management for Remote Patient Monitoring
- comments: null
- journal-ref: 2021 IEEE International Conference on Digital Health (ICDH)
- doi: 10.1109/ICDH52753.2021.00054
- report-no: null
- categories: cs.DB cs.CR
- license: http://creativecommons.org/licenses/by/4.0/
- orig_abstract: Traditional data collection, storage and processing of Electronic Health Records (EHR) utilize centralized techniques that pose several risks of single point of failure and lean the systems to a number of internal and external data breaches that compromise their reliability and availability. Blockchain is an emerging distributed technology that can solve these issues due to its immutability and architectural nature that prevent records manipulation or alterations. In this paper, we discuss the progress and opportunities of remote patient monitoring using futuristic blockchain technologies and its two primary frameworks: Ethereum and Hyperledger Fabric. We also discuss the possible blockchain use cases in software engineering for systematic, disciplined, and quantifiable application development. The study extends by introducing a system architecture for EHR data management using Ethereum as a model. We discuss the challenges and limitations along with the initial evaluation results of the proposed system and draw future research directions in this promising area.
- versions: v1 (Sun, 26 Jun 2022 02:20:38 GMT)
- update_date: 2022-06-28
- authors_parsed: [["Faruk", "Md Jobair Hossain", ""], ["Shahriar", "Hossain", ""], ["Valero", "Maria", ""], ["Sneha", "Sweta", ""], ["Rahman", "Sheikh I. Ahamed Mohammad", ""]]
- abstract: identical to orig_abstract
**2006.16411**
- submitter: Ali Hadian
- authors: Ali Hadian, Ankit Kumar, Thomas Heinis
- title: Hands-off Model Integration in Spatial Index Structures
- comments: Proceedings of the 2nd International Workshop on Applied AI for Database Systems and Applications (2020)
- journal-ref: null
- doi: null
- report-no: null
- categories: cs.DB cs.LG
- license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- orig_abstract: Spatial indexes are crucial for the analysis of the increasing amounts of spatial data, for example generated through IoT applications. The plethora of indexes that has been developed in recent decades has primarily been optimised for disk. With increasing amounts of memory even on commodity machines, however, moving them to main memory is an option. Doing so opens up the opportunity to use additional optimizations that are only amenable to main memory. In this paper we thus explore the opportunity to use light-weight machine learning models to accelerate queries on spatial indexes. We do so by exploring the potential of using interpolation and similar techniques on the R-tree, arguably the most broadly used spatial index. As we show in our experimental analysis, the query execution time can be reduced by up to 60% while simultaneously shrinking the index's memory footprint by over 90%
- versions: v1 (Mon, 29 Jun 2020 22:05:28 GMT); v2 (Sun, 9 Aug 2020 19:43:38 GMT)
- update_date: 2020-08-11
- authors_parsed: [["Hadian", "Ali", ""], ["Kumar", "Ankit", ""], ["Heinis", "Thomas", ""]]
- abstract: identical to orig_abstract
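The record above (2006.16411) describes accelerating an R-tree with light-weight models such as interpolation. As an illustrative sketch only, not the authors' implementation, the core idea of interpolation-based lookup over sorted keys can be shown in a few lines; the function name and error bound are hypothetical:

```python
import bisect

def interpolation_lookup(keys, target, err=8):
    """Estimate the position of `target` in sorted `keys` with a linear
    interpolation model, then correct within a small window of size `err`.
    Returns the index of `target`, or -1 if it is absent."""
    lo, hi = keys[0], keys[-1]
    if not lo <= target <= hi:
        return -1
    # Linear model: predicted position is proportional to the key value.
    pos = int((target - lo) / (hi - lo) * (len(keys) - 1)) if hi > lo else 0
    # Local correction: binary-search only a small window around the prediction.
    left = max(0, pos - err)
    right = min(len(keys), pos + err + 1)
    i = bisect.bisect_left(keys, target, left, right)
    if i < len(keys) and keys[i] == target:
        return i
    # Fall back to a full binary search if the window missed.
    i = bisect.bisect_left(keys, target)
    return i if i < len(keys) and keys[i] == target else -1

keys = list(range(0, 1000, 10))          # uniformly spaced keys
print(interpolation_lookup(keys, 370))   # prints 37
```

On near-uniform key distributions the model prediction is almost exact, so most lookups touch only a handful of entries; this is the flavor of saving the paper reports for R-tree queries, though the actual index works on multidimensional spatial data.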
**1811.02068**
- submitter: Yuqi Zhou
- authors: Yuqi Zhou, Jorge Cisneros-Saldana, Le Xie
- title: False Analog Data Injection Attack Towards Topology Errors: Formulation and Feasibility Analysis
- comments: 5 pages, 7 figures, Proc. of 2018 IEEE Power and Energy Society General Meeting
- journal-ref: null
- doi: 10.1109/PESGM.2018.8586585
- report-no: null
- categories: cs.SY math.NA math.OC
- license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- orig_abstract: In this paper, we propose a class of false analog data injection attack that can misguide the system as if topology errors had occurred. By utilizing the measurement redundancy with respect to the state variables, the adversary who knows the system configuration is shown to be capable of computing the corresponding measurement value with the intentionally misguided topology. The attack is designed such that the state as well as residue distribution after state estimation will converge to those in the system with a topology error. It is shown that the attack can be launched even if the attacker is constrained to some specific meters. The attack is detrimental to the system since manipulation of analog data will lead to a forged digital topology status, and the state after the error is identified and modified will be significantly biased with the intended wrong topology. The feasibility of the proposed attack is demonstrated with an IEEE 14-bus system.
- versions: v1 (Mon, 5 Nov 2018 22:33:46 GMT); v2 (Thu, 8 Nov 2018 00:05:45 GMT)
- update_date: 2019-07-11
- authors_parsed: [["Zhou", "Yuqi", ""], ["Cisneros-Saldana", "Jorge", ""], ["Xie", "Le", ""]]
- abstract: identical to orig_abstract
**2402.05680**
- submitter: Antti Kuusisto
- authors: Reijo Jaakkola, Tomi Janhunen, Antti Kuusisto, Masood Feyzbakhsh Rankooh, Miikka Vilander
- title: Interpretable classifiers for tabular data via discretization and feature selection
- comments: Changes in relation to version 1: more thorough and detailed experiments, general corrections and refinements
- journal-ref: null
- doi: null
- report-no: null
- categories: cs.LG cs.AI cs.LO
- license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- orig_abstract: We introduce a method for computing immediately human interpretable yet accurate classifiers from tabular data. The classifiers obtained are short Boolean formulas, computed via first discretizing the original data and then using feature selection coupled with a very fast algorithm for producing the best possible Boolean classifier for the setting. We demonstrate the approach via 13 experiments, obtaining results with accuracies comparable to ones obtained via random forests, XGBoost, and existing results for the same datasets in the literature. In most cases, the accuracy of our method is in fact similar to that of the reference methods, even though the main objective of our study is the immediate interpretability of our classifiers. We also prove a new result on the probability that the classifier we obtain from real-life data corresponds to the ideally best classifier with respect to the background distribution the data comes from.
- versions: v1 (Thu, 8 Feb 2024 13:58:16 GMT); v2 (Thu, 30 May 2024 14:12:54 GMT)
- update_date: 2024-05-31
- authors_parsed: [["Jaakkola", "Reijo", ""], ["Janhunen", "Tomi", ""], ["Kuusisto", "Antti", ""], ["Rankooh", "Masood Feyzbakhsh", ""], ["Vilander", "Miikka", ""]]
- abstract: identical to orig_abstract
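The discretize-then-search recipe in the record above (2402.05680) can be made concrete with a toy sketch. This is not the authors' algorithm (their method couples feature selection with a fast exact search over short formulas); it only shows the flavor of binarizing numeric features and exhaustively scoring very short Boolean conjunctions. All names and the toy data are hypothetical:

```python
from itertools import combinations

def discretize(xs, threshold):
    """Binarize a numeric feature: 1 if value >= threshold, else 0."""
    return [int(x >= threshold) for x in xs]

def best_conjunction(features, labels):
    """Exhaustively score conjunctions of up to two literals (a binary
    feature or its negation) and return the most accurate formula."""
    n = len(labels)
    # A literal is (feature name, required value): polarity 1 or 0.
    literals = [(f, s) for f in features for s in (1, 0)]
    best, best_acc = None, -1.0
    for k in (1, 2):
        for combo in combinations(literals, k):
            preds = [
                int(all(features[f][i] == s for f, s in combo))
                for i in range(n)
            ]
            acc = sum(p == y for p, y in zip(preds, labels)) / n
            if acc > best_acc:
                best, best_acc = combo, acc
    return best, best_acc

# Toy data: the label is 1 exactly when both binary features are 1.
features = {
    "x1": [1, 1, 0, 0, 1, 0],
    "x2": [1, 0, 1, 0, 1, 1],
}
labels = [1, 0, 0, 0, 1, 0]
formula, acc = best_conjunction(features, labels)
print(formula, acc)  # finds the formula x1 AND x2, accuracy 1.0
```

The brute-force search here is exponential in formula length, which is exactly why the paper pairs it with aggressive feature selection and a specialized fast algorithm; the sketch is only meant to show why the resulting classifiers are immediately readable.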
**1408.4443**
- submitter: Daphney-Stavroula Zois
- authors: Daphney-Stavroula Zois, Urbashi Mitra
- title: Controlled Sensing: A Myopic Fisher Information Sensor Selection Algorithm
- comments: 6 pages, 3 figures, accepted in Globecom 2014
- journal-ref: null
- doi: null
- report-no: null
- categories: cs.SY
- license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- orig_abstract: This paper considers the problem of state tracking with observation control for a particular class of dynamical systems. The system state evolution is described by a discrete-time, finite-state Markov chain, while the measurement process is characterized by a controlled multi-variate Gaussian observation model. The computational complexity of the optimal control strategy proposed in our prior work proves to be prohibitive. A suboptimal, lower complexity algorithm based on the Fisher information measure is proposed. Toward this end, the preceding measure is generalized to account for multi-valued discrete parameters and control inputs. A closed-form formula for our system model is also derived. Numerical simulations are provided for a physical activity tracking application showing the near-optimal performance of the proposed algorithm.
- versions: v1 (Tue, 19 Aug 2014 19:50:26 GMT)
- update_date: 2014-08-20
- authors_parsed: [["Zois", "Daphney-Stavroula", ""], ["Mitra", "Urbashi", ""]]
- abstract: identical to orig_abstract
2008.11906
|
Oliver Biggar
|
Oliver Biggar (1), Mohammad Zamani (1), Iman Shames (2) ((1) Defence
Science and Technology Group, Australia, (2) The Australian National
University, Australia)
|
A principled analysis of Behavior Trees and their generalisations
|
13 pages, 11 figures. The content of the previous version is now
split between this and arXiv:2104.07919, which have both been significantly
updated
| null | null | null |
cs.AI cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
As complex autonomous robotic systems become more widespread, the need for
transparent and reusable Artificial Intelligence (AI) designs becomes more
apparent. In this paper we analyse how the principles behind Behavior Trees
(BTs), an increasingly popular tree-structured control architecture, are
applicable to these goals. Using structured programming as a guide, we analyse
the BT principles of reactiveness and modularity in a formal framework of
action selection. Proceeding from these principles, we review a number of
challenging use cases of BTs in the literature, and show that reasoning via
these principles leads to compatible solutions. Extending these arguments, we
introduce a new class of control architectures we call generalised BTs or
$k$-BTs and show how they can extend the applicability of BTs to some of the
aforementioned challenging BT use cases while preserving the BT principles.
|
[
{
"created": "Thu, 27 Aug 2020 04:09:31 GMT",
"version": "v1"
},
{
"created": "Tue, 25 May 2021 05:42:14 GMT",
"version": "v2"
}
] |
2021-05-26
|
[
[
"Biggar",
"Oliver",
""
],
[
"Zamani",
"Mohammad",
""
],
[
"Shames",
"Iman",
""
]
] |
As complex autonomous robotic systems become more widespread, the need for transparent and reusable Artificial Intelligence (AI) designs becomes more apparent. In this paper we analyse how the principles behind Behavior Trees (BTs), an increasingly popular tree-structured control architecture, are applicable to these goals. Using structured programming as a guide, we analyse the BT principles of reactiveness and modularity in a formal framework of action selection. Proceeding from these principles, we review a number of challenging use cases of BTs in the literature, and show that reasoning via these principles leads to compatible solutions. Extending these arguments, we introduce a new class of control architectures we call generalised BTs or $k$-BTs and show how they can extend the applicability of BTs to some of the aforementioned challenging BT use cases while preserving the BT principles.
|
2108.09836
|
Jan Kaiser
|
Jan Kaiser and Supriyo Datta
|
Probabilistic computing with p-bits
| null |
Appl. Phys. Lett. 119, 150503 (2021)
|
10.1063/5.0067927
| null |
cs.ET cond-mat.dis-nn quant-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Digital computers store information in the form of bits that can take on one
of two values 0 and 1, while quantum computers are based on qubits that are
described by a complex wavefunction, whose squared magnitude gives the
probability of measuring either 0 or 1. Here, we make the case for a
probabilistic computer based on p-bits, which take on values 0 and 1 with
controlled probabilities and can be implemented with specialized compact
energy-efficient hardware. We propose a generic architecture for such
p-computers and emulate systems with thousands of p-bits to show that they can
significantly accelerate randomized algorithms used in a wide variety of
applications including but not limited to Bayesian networks, optimization,
Ising models, and quantum Monte Carlo.
|
[
{
"created": "Sun, 22 Aug 2021 20:50:01 GMT",
"version": "v1"
},
{
"created": "Tue, 12 Oct 2021 21:15:49 GMT",
"version": "v2"
}
] |
2021-10-14
|
[
[
"Kaiser",
"Jan",
""
],
[
"Datta",
"Supriyo",
""
]
] |
Digital computers store information in the form of bits that can take on one of two values 0 and 1, while quantum computers are based on qubits that are described by a complex wavefunction, whose squared magnitude gives the probability of measuring either 0 or 1. Here, we make the case for a probabilistic computer based on p-bits, which take on values 0 and 1 with controlled probabilities and can be implemented with specialized compact energy-efficient hardware. We propose a generic architecture for such p-computers and emulate systems with thousands of p-bits to show that they can significantly accelerate randomized algorithms used in a wide variety of applications including but not limited to Bayesian networks, optimization, Ising models, and quantum Monte Carlo.
|
1901.04670
|
Xuefeng Peng
|
Xuefeng Peng, Yi Ding, David Wihl, Omer Gottesman, Matthieu
Komorowski, Li-wei H. Lehman, Andrew Ross, Aldo Faisal, Finale Doshi-Velez
|
Improving Sepsis Treatment Strategies by Combining Deep and Kernel-Based
Reinforcement Learning
|
AMIA 2018 Annual Symposium
| null | null | null |
cs.LG cs.AI stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Sepsis is the leading cause of mortality in the ICU. It is challenging to
manage because individual patients respond differently to treatment. Thus,
tailoring treatment to the individual patient is essential for the best
outcomes. In this paper, we take steps toward this goal by applying a
mixture-of-experts framework to personalize sepsis treatment. The mixture model
selectively alternates between neighbor-based (kernel) and deep reinforcement
learning (DRL) experts depending on the patient's current history. On a large
retrospective cohort, this mixture-based approach outperforms physician,
kernel-only, and DRL-only experts.
|
[
{
"created": "Tue, 15 Jan 2019 05:40:27 GMT",
"version": "v1"
}
] |
2019-01-16
|
[
[
"Peng",
"Xuefeng",
""
],
[
"Ding",
"Yi",
""
],
[
"Wihl",
"David",
""
],
[
"Gottesman",
"Omer",
""
],
[
"Komorowski",
"Matthieu",
""
],
[
"Lehman",
"Li-wei H.",
""
],
[
"Ross",
"Andrew",
""
],
[
"Faisal",
"Aldo",
""
],
[
"Doshi-Velez",
"Finale",
""
]
] |
Sepsis is the leading cause of mortality in the ICU. It is challenging to manage because individual patients respond differently to treatment. Thus, tailoring treatment to the individual patient is essential for the best outcomes. In this paper, we take steps toward this goal by applying a mixture-of-experts framework to personalize sepsis treatment. The mixture model selectively alternates between neighbor-based (kernel) and deep reinforcement learning (DRL) experts depending on the patient's current history. On a large retrospective cohort, this mixture-based approach outperforms physician, kernel-only, and DRL-only experts.
|
1704.04293
|
Mohammad Amin
|
Mohammad Amin and Marta Molinas
|
Model Predictive Control of Voltage Source Converter in a HVDC System
| null | null | null | null |
cs.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Model Predictive Control (MPC) is a class of advanced control techniques
widely applied in industry. A major advantage of MPC is its straightforward
procedure, which can be applied to both linear and nonlinear systems. This
paper proposes the use of MPC for a voltage source converter (VSC) in a high
voltage direct current (HVDC) system. An MPC controller is modeled based on the
state-space model of a single VSC-HVDC station, including the dynamics of the
main ac grid. A full-scale nonlinear switching model of a point-to-point
connected VSC-based HVDC system is developed in Matlab/Simulink in association
with the SimPower system to demonstrate the application of the proposed
controller.
|
[
{
"created": "Thu, 13 Apr 2017 22:35:54 GMT",
"version": "v1"
}
] |
2017-04-17
|
[
[
"Amin",
"Mohammad",
""
],
[
"Molinas",
"Marta",
""
]
] |
Model Predictive Control (MPC) is a class of advanced control techniques widely applied in industry. A major advantage of MPC is its straightforward procedure, which can be applied to both linear and nonlinear systems. This paper proposes the use of MPC for a voltage source converter (VSC) in a high voltage direct current (HVDC) system. An MPC controller is modeled based on the state-space model of a single VSC-HVDC station, including the dynamics of the main ac grid. A full-scale nonlinear switching model of a point-to-point connected VSC-based HVDC system is developed in Matlab/Simulink in association with the SimPower system to demonstrate the application of the proposed controller.
|
1909.05964
|
Ashish Tiwari
|
Sumit Gulwani and Kunal Pathak and Arjun Radhakrishna and Ashish
Tiwari and Abhishek Udupa
|
Quantitative Programming by Examples
| null | null | null | null |
cs.PL cs.AI cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Programming-by-Example (PBE) systems synthesize an intended program in some
(relatively constrained) domain-specific language from a small number of
input-output examples provided by the user. In this paper, we motivate and
define the problem of quantitative PBE (qPBE) that relates to synthesizing an
intended program over an underlying (real world) programming language that also
minimizes a given quantitative cost function. We present a modular approach for
solving qPBE that consists of three phases: intent disambiguation, global
search, and local search. On two concrete objectives, namely program
performance and size, our qPBE procedure achieves $1.53 X$ and $1.26 X$
improvement respectively over the baseline FlashFill PBE system, averaged over
$701$ benchmarks. Our detailed experiments validate the design of our procedure
and show the value of combining global and local search for qPBE.
|
[
{
"created": "Thu, 12 Sep 2019 21:55:00 GMT",
"version": "v1"
}
] |
2019-09-16
|
[
[
"Gulwani",
"Sumit",
""
],
[
"Pathak",
"Kunal",
""
],
[
"Radhakrishna",
"Arjun",
""
],
[
"Tiwari",
"Ashish",
""
],
[
"Udupa",
"Abhishek",
""
]
] |
Programming-by-Example (PBE) systems synthesize an intended program in some (relatively constrained) domain-specific language from a small number of input-output examples provided by the user. In this paper, we motivate and define the problem of quantitative PBE (qPBE) that relates to synthesizing an intended program over an underlying (real world) programming language that also minimizes a given quantitative cost function. We present a modular approach for solving qPBE that consists of three phases: intent disambiguation, global search, and local search. On two concrete objectives, namely program performance and size, our qPBE procedure achieves $1.53 X$ and $1.26 X$ improvement respectively over the baseline FlashFill PBE system, averaged over $701$ benchmarks. Our detailed experiments validate the design of our procedure and show the value of combining global and local search for qPBE.
|
2401.08868
|
Manuel Tran
|
Manuel Tran and Amal Lahiani and Yashin Dicente Cid and Melanie
Boxberg and Peter Lienemann and Christian Matek and Sophia J. Wagner and
Fabian J. Theis and Eldad Klaiman and Tingying Peng
|
B-Cos Aligned Transformers Learn Human-Interpretable Features
|
Accepted at MICCAI 2023 (oral). Camera-ready available at
https://doi.org/10.1007/978-3-031-43993-3_50
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Vision Transformers (ViTs) and Swin Transformers (Swin) are currently
state-of-the-art in computational pathology. However, domain experts are still
reluctant to use these models due to their lack of interpretability. This is
not surprising, as critical decisions need to be transparent and
understandable. The most common approach to understanding transformers is to
visualize their attention. However, attention maps of ViTs are often
fragmented, leading to unsatisfactory explanations. Here, we introduce a novel
architecture called the B-cos Vision Transformer (BvT) that is designed to be
more interpretable. It replaces all linear transformations with the B-cos
transform to promote weight-input alignment. In a blinded study, medical
experts clearly ranked BvTs above ViTs, suggesting that our network is better
at capturing biomedically relevant structures. This is also true for the B-cos
Swin Transformer (Bwin). Compared to the Swin Transformer, it even improves the
F1-score by up to 4.7% on two public datasets.
|
[
{
"created": "Tue, 16 Jan 2024 22:46:29 GMT",
"version": "v1"
},
{
"created": "Thu, 18 Jan 2024 07:14:00 GMT",
"version": "v2"
}
] |
2024-01-19
|
[
[
"Tran",
"Manuel",
""
],
[
"Lahiani",
"Amal",
""
],
[
"Cid",
"Yashin Dicente",
""
],
[
"Boxberg",
"Melanie",
""
],
[
"Lienemann",
"Peter",
""
],
[
"Matek",
"Christian",
""
],
[
"Wagner",
"Sophia J.",
""
],
[
"Theis",
"Fabian J.",
""
],
[
"Klaiman",
"Eldad",
""
],
[
"Peng",
"Tingying",
""
]
] |
Vision Transformers (ViTs) and Swin Transformers (Swin) are currently state-of-the-art in computational pathology. However, domain experts are still reluctant to use these models due to their lack of interpretability. This is not surprising, as critical decisions need to be transparent and understandable. The most common approach to understanding transformers is to visualize their attention. However, attention maps of ViTs are often fragmented, leading to unsatisfactory explanations. Here, we introduce a novel architecture called the B-cos Vision Transformer (BvT) that is designed to be more interpretable. It replaces all linear transformations with the B-cos transform to promote weight-input alignment. In a blinded study, medical experts clearly ranked BvTs above ViTs, suggesting that our network is better at capturing biomedically relevant structures. This is also true for the B-cos Swin Transformer (Bwin). Compared to the Swin Transformer, it even improves the F1-score by up to 4.7% on two public datasets.
|
1012.0634
|
Radwa El Shawi
|
Radwa El Shawi, Joachim Gudmundsson, and Christos Levcopoulos
|
Quickest Path Queries on Transportation Network
|
16 pages, 7 figures
| null | null | null |
cs.CG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper considers the problem of finding a quickest path between two
points in the Euclidean plane in the presence of a transportation network. A
transportation network consists of a planar network where each road (edge) has
an individual speed. A traveller may enter and exit the network at any point on
the roads. Along any road the traveller moves with a fixed speed depending on
the road, and outside the network the traveller moves at unit speed in any
direction. We give an exact algorithm for the basic version of the problem:
given a transportation network of total complexity n in the Euclidean plane, a
source point s, and a destination point t, compute the quickest path between s
and t. We also show how the transportation network can be preprocessed in time
O(n^2 log n) into a data structure of size O(n^2) such that (1 +
\epsilon)-approximate cheapest path cost queries between any two points in the
plane can be answered in time O(1/\epsilon^4 log n).
|
[
{
"created": "Fri, 3 Dec 2010 04:03:46 GMT",
"version": "v1"
},
{
"created": "Tue, 11 Jan 2011 10:40:59 GMT",
"version": "v2"
},
{
"created": "Sun, 29 May 2011 15:13:53 GMT",
"version": "v3"
}
] |
2015-03-17
|
[
[
"Shawi",
"Radwa El",
""
],
[
"Gudmundsson",
"Joachim",
""
],
[
"Levcopoulos",
"Christos",
""
]
] |
This paper considers the problem of finding a quickest path between two points in the Euclidean plane in the presence of a transportation network. A transportation network consists of a planar network where each road (edge) has an individual speed. A traveller may enter and exit the network at any point on the roads. Along any road the traveller moves with a fixed speed depending on the road, and outside the network the traveller moves at unit speed in any direction. We give an exact algorithm for the basic version of the problem: given a transportation network of total complexity n in the Euclidean plane, a source point s, and a destination point t, compute the quickest path between s and t. We also show how the transportation network can be preprocessed in time O(n^2 log n) into a data structure of size O(n^2) such that (1 + \epsilon)-approximate cheapest path cost queries between any two points in the plane can be answered in time O(1/\epsilon^4 log n).
|
1203.4063
|
Jesper Nederlof
|
Petteri Kaski, Mikko Koivisto, Jesper Nederlof
|
Homomorphic Hashing for Sparse Coefficient Extraction
| null | null | null | null |
cs.CC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We study classes of Dynamic Programming (DP) algorithms which, due to their
algebraic definitions, are closely related to coefficient extraction methods.
DP algorithms can easily be modified to exploit sparseness in the DP table
through memoization. Coefficient extraction techniques, on the other hand, are
both space-efficient and parallelisable, but no tools have been available to
exploit sparseness.
We investigate the systematic use of homomorphic hash functions to combine
the best of these methods and obtain improved space-efficient algorithms for
problems including LINEAR SAT, SET PARTITION, and SUBSET SUM. Our algorithms
run in time proportional to the number of nonzero entries of the last segment
of the DP table, which presents a strict improvement over sparse DP. The last
property also gives an improved algorithm for CNF SAT with sparse projections.
|
[
{
"created": "Mon, 19 Mar 2012 09:30:28 GMT",
"version": "v1"
}
] |
2012-03-20
|
[
[
"Kaski",
"Petteri",
""
],
[
"Koivisto",
"Mikko",
""
],
[
"Nederlof",
"Jesper",
""
]
] |
We study classes of Dynamic Programming (DP) algorithms which, due to their algebraic definitions, are closely related to coefficient extraction methods. DP algorithms can easily be modified to exploit sparseness in the DP table through memoization. Coefficient extraction techniques, on the other hand, are both space-efficient and parallelisable, but no tools have been available to exploit sparseness. We investigate the systematic use of homomorphic hash functions to combine the best of these methods and obtain improved space-efficient algorithms for problems including LINEAR SAT, SET PARTITION, and SUBSET SUM. Our algorithms run in time proportional to the number of nonzero entries of the last segment of the DP table, which presents a strict improvement over sparse DP. The last property also gives an improved algorithm for CNF SAT with sparse projections.
|
2105.05560
|
Yuanjie Li
|
Yuanjie Li, Hewu Li, Lixin Liu, Wei Liu, Jiayi Liu, Jianping Wu, Qian
Wu, Jun Liu, Zeqi Lai, Guojie Fan
|
Fractal Rosette: A Stable Space-Ground Network Structure in
Mega-Constellation
| null | null | null | null |
cs.NI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
We present F-Rosette, a stable space-ground network structure for low-earth
orbit (LEO) satellite mega-constellations at scale. Due to the dynamic
many-to-many space-ground mapping in high mobility, existing LEO
mega-constellations with IP protocol stack suffer from frequent user IP address
changes (every 133~510s per user) and network routing re-convergence (<20%
network usability). To provably stabilize the space-ground network under high
mobility and many-to-many dynamics, F-Rosette adopts a recursive structure over
the Rosette constellation, derives a hierarchical and time-invariant network
address space from a new geographical coordinate system, and ensures efficient
and stable routing via geographical-to-topological routing embedding without
re-convergence. Our hardware-in-the-loop, trace-driven emulations validate
F-Rosette's stability, near-optimal routing (<1.4% additional delays), and
marginal overhead (<1% CPU, <2MB memory) for resource-constrained satellites.
|
[
{
"created": "Wed, 12 May 2021 10:18:57 GMT",
"version": "v1"
}
] |
2021-05-13
|
[
[
"Li",
"Yuanjie",
""
],
[
"Li",
"Hewu",
""
],
[
"Liu",
"Lixin",
""
],
[
"Liu",
"Wei",
""
],
[
"Liu",
"Jiayi",
""
],
[
"Wu",
"Jianping",
""
],
[
"Wu",
"Qian",
""
],
[
"Liu",
"Jun",
""
],
[
"Lai",
"Zeqi",
""
],
[
"Fan",
"Guojie",
""
]
] |
We present F-Rosette, a stable space-ground network structure for low-earth orbit (LEO) satellite mega-constellations at scale. Due to the dynamic many-to-many space-ground mapping in high mobility, existing LEO mega-constellations with IP protocol stack suffer from frequent user IP address changes (every 133~510s per user) and network routing re-convergence (<20% network usability). To provably stabilize the space-ground network under high mobility and many-to-many dynamics, F-Rosette adopts a recursive structure over the Rosette constellation, derives a hierarchical and time-invariant network address space from a new geographical coordinate system, and ensures efficient and stable routing via geographical-to-topological routing embedding without re-convergence. Our hardware-in-the-loop, trace-driven emulations validate F-Rosette's stability, near-optimal routing (<1.4% additional delays), and marginal overhead (<1% CPU, <2MB memory) for resource-constrained satellites.
|
2107.12190
|
Lalli Myllyaho
|
Lalli Myllyaho, Mikko Raatikainen, Tomi M\"annist\"o, Tommi Mikkonen
and Jukka K. Nurminen
|
Systematic Literature Review of Validation Methods for AI Systems
|
25 pages, 6 figures, 12 tables. The manuscript has been accepted to
the Journal of Systems and Software
| null |
10.1016/j.jss.2021.111050
| null |
cs.SE
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Context: Artificial intelligence (AI) has made its way into everyday
activities, particularly through new techniques such as machine learning (ML).
These techniques are implementable with little domain knowledge. This, combined
with the difficulty of testing AI systems with traditional methods, has made
system trustworthiness a pressing issue.
Objective: This paper studies the methods used to validate practical AI
systems reported in the literature. Our goal is to classify and describe the
methods that are used in realistic settings to ensure the dependability of AI
systems.
Method: A systematic literature review resulted in 90 papers. Systems
presented in the papers were analysed based on their domain, task, complexity,
and applied validation methods.
Results: The validation methods were synthesized into a taxonomy consisting
of trial, simulation, model-centred validation, and expert opinion. Failure
monitors, safety channels, redundancy, voting, and input and output
restrictions are methods used to continuously validate the systems after
deployment.
Conclusions: Our results clarify existing strategies applied to validation.
They form a basis for the synthesization, assessment, and refinement of AI
system validation in research and guidelines for validating individual systems
in practice. While various validation strategies have all been relatively
widely applied, only a few studies report on continuous validation.
Keywords: artificial intelligence, machine learning, validation, testing,
V&V, systematic literature review.
|
[
{
"created": "Mon, 26 Jul 2021 12:54:32 GMT",
"version": "v1"
}
] |
2021-09-17
|
[
[
"Myllyaho",
"Lalli",
""
],
[
"Raatikainen",
"Mikko",
""
],
[
"Männistö",
"Tomi",
""
],
[
"Mikkonen",
"Tommi",
""
],
[
"Nurminen",
"Jukka K.",
""
]
] |
Context: Artificial intelligence (AI) has made its way into everyday activities, particularly through new techniques such as machine learning (ML). These techniques are implementable with little domain knowledge. This, combined with the difficulty of testing AI systems with traditional methods, has made system trustworthiness a pressing issue. Objective: This paper studies the methods used to validate practical AI systems reported in the literature. Our goal is to classify and describe the methods that are used in realistic settings to ensure the dependability of AI systems. Method: A systematic literature review resulted in 90 papers. Systems presented in the papers were analysed based on their domain, task, complexity, and applied validation methods. Results: The validation methods were synthesized into a taxonomy consisting of trial, simulation, model-centred validation, and expert opinion. Failure monitors, safety channels, redundancy, voting, and input and output restrictions are methods used to continuously validate the systems after deployment. Conclusions: Our results clarify existing strategies applied to validation. They form a basis for the synthesization, assessment, and refinement of AI system validation in research and guidelines for validating individual systems in practice. While various validation strategies have all been relatively widely applied, only a few studies report on continuous validation. Keywords: artificial intelligence, machine learning, validation, testing, V&V, systematic literature review.
|
2209.03473
|
Alexandre Duval
|
Alexandre Duval, Fragkiskos Malliaros
|
Higher-order Clustering and Pooling for Graph Neural Networks
|
CIKM 2022
| null |
10.1145/3511808.3557353
| null |
cs.LG cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Graph Neural Networks achieve state-of-the-art performance on a plethora of
graph classification tasks, especially due to pooling operators, which
aggregate learned node embeddings hierarchically into a final graph
representation. However, they are not only questioned by recent work showing
on-par performance with random pooling, but they also completely ignore
higher-order connectivity patterns. To tackle this issue, we propose HoscPool, a
clustering-based graph pooling operator that captures higher-order information
hierarchically, leading to richer graph representations. In fact, we learn a
probabilistic cluster assignment matrix end-to-end by minimising relaxed
formulations of motif spectral clustering in our objective function, and we
then extend it to a pooling operator. We evaluate HoscPool on graph
classification tasks and its clustering component on graphs with ground-truth
community structure, achieving best performance. Lastly, we provide a deep
empirical analysis of pooling operators' inner functioning.
|
[
{
"created": "Fri, 2 Sep 2022 09:17:10 GMT",
"version": "v1"
}
] |
2022-09-09
|
[
[
"Duval",
"Alexandre",
""
],
[
"Malliaros",
"Fragkiskos",
""
]
] |
Graph Neural Networks achieve state-of-the-art performance on a plethora of graph classification tasks, especially due to pooling operators, which aggregate learned node embeddings hierarchically into a final graph representation. However, they are not only questioned by recent work showing on-par performance with random pooling, but they also completely ignore higher-order connectivity patterns. To tackle this issue, we propose HoscPool, a clustering-based graph pooling operator that captures higher-order information hierarchically, leading to richer graph representations. In fact, we learn a probabilistic cluster assignment matrix end-to-end by minimising relaxed formulations of motif spectral clustering in our objective function, and we then extend it to a pooling operator. We evaluate HoscPool on graph classification tasks and its clustering component on graphs with ground-truth community structure, achieving best performance. Lastly, we provide a deep empirical analysis of pooling operators' inner functioning.
|
1212.3638
|
Derrick Wing Kwan Ng Dr.
|
Derrick Wing Kwan Ng, Ernest S. Lo, and Robert Schober
|
Energy-Efficient Resource Allocation in Multiuser OFDM Systems with
Wireless Information and Power Transfer
|
6 pages. The paper has been accepted for publication at the IEEE
Wireless Communications and Networking Conference (WCNC) 2013, Shanghai,
China, Apr. 2013
| null |
10.1109/WCNC.2013.6555184
| null |
cs.IT math.IT
|
http://creativecommons.org/licenses/by-nc-sa/3.0/
|
In this paper, we study the resource allocation algorithm design for
multiuser orthogonal frequency division multiplexing (OFDM) downlink systems
with simultaneous wireless information and power transfer. The algorithm design
is formulated as a non-convex optimization problem for maximizing the energy
efficiency of data transmission (bit/Joule delivered to the users). In
particular, the problem formulation takes into account the minimum required
system data rate, heterogeneous minimum required power transfers to the users,
and the circuit power consumption. Subsequently, by exploiting the method of
time-sharing and the properties of nonlinear fractional programming, the
considered non-convex optimization problem is solved using an efficient
iterative resource allocation algorithm. For each iteration, the optimal power
allocation and user selection solution are derived based on Lagrange dual
decomposition. Simulation results illustrate that the proposed iterative
resource allocation algorithm achieves the maximum energy efficiency of the
system and reveal how energy efficiency, system capacity, and wireless power
transfer benefit from the presence of multiple users in the system.
|
[
{
"created": "Fri, 14 Dec 2012 23:42:59 GMT",
"version": "v1"
},
{
"created": "Mon, 31 Dec 2012 21:01:01 GMT",
"version": "v2"
}
] |
2016-11-17
|
[
[
"Ng",
"Derrick Wing Kwan",
""
],
[
"Lo",
"Ernest S.",
""
],
[
"Schober",
"Robert",
""
]
] |
In this paper, we study the resource allocation algorithm design for multiuser orthogonal frequency division multiplexing (OFDM) downlink systems with simultaneous wireless information and power transfer. The algorithm design is formulated as a non-convex optimization problem for maximizing the energy efficiency of data transmission (bit/Joule delivered to the users). In particular, the problem formulation takes into account the minimum required system data rate, heterogeneous minimum required power transfers to the users, and the circuit power consumption. Subsequently, by exploiting the method of time-sharing and the properties of nonlinear fractional programming, the considered non-convex optimization problem is solved using an efficient iterative resource allocation algorithm. For each iteration, the optimal power allocation and user selection solution are derived based on Lagrange dual decomposition. Simulation results illustrate that the proposed iterative resource allocation algorithm achieves the maximum energy efficiency of the system and reveal how energy efficiency, system capacity, and wireless power transfer benefit from the presence of multiple users in the system.
|
1909.12938
|
Akhil Gupta
|
Akhil Gupta
|
Time Series Modeling for Dream Team in Fantasy Premier League
|
International Conference on Sports Engineering (ICSE'17)
| null | null | null |
cs.CY cs.LG
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
The performance of football players in the English Premier League varies
largely from season to season and across teams. A method capable of forecasting
and analyzing these players' future on-field antics would assist management to
a great extent. In a simulated environment like the Fantasy Premier League,
enthusiasts from all over the world participate and manage a catalogue of
players for the entire season. Due to the dynamic nature of the points system,
there is no known approach for the formulation of a dream team. This study aims
to tackle this problem by using a hybrid of Autoregressive Integrated Moving
Average (ARIMA) and Recurrent Neural Networks (RNNs) for time series prediction
of player points, followed by maximization of total points using Linear
Programming (LPP). Given the player points for the past three seasons,
predictions were made for the current season by modeling separately with ARIMA
and RNN and then creating an ensemble of the two. Prior to that, data
preprocessing techniques were deployed to enhance the efficacy of the model.
Constraints on the types of players, such as goalkeepers, defenders,
midfielders, and forwards, along with the total budget, were optimized using
the LPP approach. The proposed team was validated against performance in the
upcoming season, where the players performed as expected, strengthening the
feasibility of the solution. Likewise, the proposed approach can be extended to
the English Premier League by official managers on the field.
|
[
{
"created": "Thu, 19 Sep 2019 20:20:04 GMT",
"version": "v1"
}
] |
2019-10-01
|
[
[
"Gupta",
"Akhil",
""
]
] |
The performance of football players in the English Premier League varies largely from season to season and across teams. It is evident that a method capable of forecasting and analyzing the future on-field antics of these players would assist the management to a great extent. In a simulated environment like the Fantasy Premier League, enthusiasts from all over the world participate and manage a catalogue of players for the entire season. Due to the dynamic nature of the points system, there is no known approach for the formulation of a dream team. This study aims to tackle this problem by using a hybrid of Autoregressive Integrated Moving Average (ARIMA) and Recurrent Neural Networks (RNNs) for time series prediction of player points and subsequent maximization of total points using Linear Programming (LPP). Given the player points for the past three seasons, predictions have been made for the current season by modeling separately with ARIMA and RNN, and then creating an ensemble of the two. Prior to that, proper data preprocessing techniques were deployed to enhance the efficacy of the prepared model. Constraints on the type of players, such as goalkeepers, defenders, midfielders and forwards, along with the total budget were effectively optimized using the LPP approach. The proposed team was validated against performance in the upcoming season, where the players performed as expected, strengthening the feasibility of the solution. Likewise, the proposed approach can be extended to the English Premier League by official managers on-field.
|
2207.08857
|
Md Nafee Al Islam
|
Md Nafee Al Islam, Yihong Ma, Pedro Alarcon Granadeno, Nitesh Chawla,
Jane Cleland-Huang
|
RESAM: Requirements Elicitation and Specification for Deep-Learning
Anomaly Models with Applications to UAV Flight Controllers
| null | null | null | null |
cs.SE cs.AI cs.MA
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
CyberPhysical systems (CPS) must be closely monitored to identify and
potentially mitigate emergent problems that arise during their routine
operations. However, the multivariate time-series data which they typically
produce can be complex to understand and analyze. While formal product
documentation often provides example data plots with diagnostic suggestions,
the sheer diversity of attributes, critical thresholds, and data interactions
can be overwhelming to non-experts who subsequently seek help from discussion
forums to interpret their data logs. Deep learning models, such as Long
Short-Term Memory (LSTM) networks, can be used to automate these tasks and to
provide clear explanations of diverse anomalies detected in real-time
multivariate data-streams. In this paper, we present RESAM, a requirements
process that integrates knowledge from domain experts, discussion forums, and
formal product documentation, to discover and specify requirements and design
definitions in the form of time-series attributes that contribute to the
construction of effective deep learning anomaly detectors. We present a
case-study based on a flight control system for small Uncrewed Aerial Systems
and demonstrate that its use guides the construction of effective anomaly
detection models whilst also providing underlying support for explainability.
RESAM is relevant to domains in which open or closed online forums provide
discussion support for log analysis.
|
[
{
"created": "Mon, 18 Jul 2022 18:09:59 GMT",
"version": "v1"
}
] |
2022-07-20
|
[
[
"Islam",
"Md Nafee Al",
""
],
[
"Ma",
"Yihong",
""
],
[
"Granadeno",
"Pedro Alarcon",
""
],
[
"Chawla",
"Nitesh",
""
],
[
"Cleland-Huang",
"Jane",
""
]
] |
CyberPhysical systems (CPS) must be closely monitored to identify and potentially mitigate emergent problems that arise during their routine operations. However, the multivariate time-series data which they typically produce can be complex to understand and analyze. While formal product documentation often provides example data plots with diagnostic suggestions, the sheer diversity of attributes, critical thresholds, and data interactions can be overwhelming to non-experts who subsequently seek help from discussion forums to interpret their data logs. Deep learning models, such as Long Short-Term Memory (LSTM) networks, can be used to automate these tasks and to provide clear explanations of diverse anomalies detected in real-time multivariate data-streams. In this paper, we present RESAM, a requirements process that integrates knowledge from domain experts, discussion forums, and formal product documentation, to discover and specify requirements and design definitions in the form of time-series attributes that contribute to the construction of effective deep learning anomaly detectors. We present a case-study based on a flight control system for small Uncrewed Aerial Systems and demonstrate that its use guides the construction of effective anomaly detection models whilst also providing underlying support for explainability. RESAM is relevant to domains in which open or closed online forums provide discussion support for log analysis.
|
2401.13980
|
Qianqian Yang
|
Weixuan Chen, Shuo Shao, Qianqian Yang, Zhaoyang Zhang, Ping Zhang
|
A Nearly Information Theoretically Secure Approach for Semantic
Communications over Wiretap Channel
|
13 pages, 16 figures
| null | null | null |
cs.IT eess.IV math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper addresses the challenge of achieving information-theoretic
security in semantic communication (SeCom) over a wiretap channel, where a
legitimate receiver coexists with an eavesdropper experiencing a poorer channel
condition. Despite previous efforts to secure SeCom against eavesdroppers,
achieving information-theoretic security in such schemes remains an open issue.
In this work, we propose a secure digital SeCom approach based on superposition
codes, aiming to attain nearly information-theoretic security. Our proposed
method involves associating semantic information with satellite constellation
points within a double-layered constellation map, where cloud center
constellation points are randomly selected. By carefully allocating power
between these two layers of constellation, we ensure that the symbol error
probability (SEP) of the eavesdropper decoding satellite constellation points
is nearly equivalent to random guessing, while maintaining a low SEP for the
legitimate receiver to successfully decode the semantic information. Simulation
results showcase that the Peak Signal-to-Noise Ratio (PSNR) and Mean Squared
Error (MSE) for the eavesdropper's reconstructed data, using our proposed
method, can range from decoding Gaussian-distributed random noise to
approaching the variance of the data. This validates the ability of our method
to achieve nearly information-theoretic security, demonstrating superior data
security compared to benchmark methods.
|
[
{
"created": "Thu, 25 Jan 2024 06:46:13 GMT",
"version": "v1"
}
] |
2024-01-26
|
[
[
"Chen",
"Weixuan",
""
],
[
"Shao",
"Shuo",
""
],
[
"Yang",
"Qianqian",
""
],
[
"Zhang",
"Zhaoyang",
""
],
[
"Zhang",
"Ping",
""
]
] |
This paper addresses the challenge of achieving information-theoretic security in semantic communication (SeCom) over a wiretap channel, where a legitimate receiver coexists with an eavesdropper experiencing a poorer channel condition. Despite previous efforts to secure SeCom against eavesdroppers, achieving information-theoretic security in such schemes remains an open issue. In this work, we propose a secure digital SeCom approach based on superposition codes, aiming to attain nearly information-theoretic security. Our proposed method involves associating semantic information with satellite constellation points within a double-layered constellation map, where cloud center constellation points are randomly selected. By carefully allocating power between these two layers of constellation, we ensure that the symbol error probability (SEP) of the eavesdropper decoding satellite constellation points is nearly equivalent to random guessing, while maintaining a low SEP for the legitimate receiver to successfully decode the semantic information. Simulation results showcase that the Peak Signal-to-Noise Ratio (PSNR) and Mean Squared Error (MSE) for the eavesdropper's reconstructed data, using our proposed method, can range from decoding Gaussian-distributed random noise to approaching the variance of the data. This validates the ability of our method to achieve nearly information-theoretic security, demonstrating superior data security compared to benchmark methods.
|
1512.07901
|
Marco Bressan
|
Marco Bressan, Enoch Peserico, Luca Pretto
|
Simple set cardinality estimation through random sampling
|
3 pages
| null | null | null |
cs.DM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a simple algorithm that estimates the cardinality $n$ of a set $V$
when allowed to sample elements of $V$ uniformly and independently at random.
Our algorithm, with probability $(1-\delta)$, returns a
$(1\pm\epsilon)$-approximation of $n$ drawing $O\big(\sqrt{n} \cdot
\epsilon^{-1}\sqrt{\log(\delta^{-1})}\big)$ samples (for
$\epsilon^{-1}\sqrt{\log(\delta^{-1})} = O(\sqrt{n})$).
|
[
{
"created": "Thu, 24 Dec 2015 20:42:10 GMT",
"version": "v1"
},
{
"created": "Mon, 18 Jan 2016 15:25:18 GMT",
"version": "v2"
},
{
"created": "Wed, 11 Apr 2018 21:09:20 GMT",
"version": "v3"
}
] |
2018-04-13
|
[
[
"Bressan",
"Marco",
""
],
[
"Peserico",
"Enoch",
""
],
[
"Pretto",
"Luca",
""
]
] |
We present a simple algorithm that estimates the cardinality $n$ of a set $V$ when allowed to sample elements of $V$ uniformly and independently at random. Our algorithm, with probability $(1-\delta)$, returns a $(1\pm\epsilon)$-approximation of $n$ drawing $O\big(\sqrt{n} \cdot \epsilon^{-1}\sqrt{\log(\delta^{-1})}\big)$ samples (for $\epsilon^{-1}\sqrt{\log(\delta^{-1})} = O(\sqrt{n})$).
|
1808.09796
|
Bingjie Xu
|
Bingjie Xu, Junnan Li, Yongkang Wong, Mohan S. Kankanhalli, and Qi
Zhao
|
Interact as You Intend: Intention-Driven Human-Object Interaction
Detection
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Recent advances in instance-level detection tasks lay a strong foundation
for genuine comprehension of visual scenes. However, the ability to fully
comprehend a social scene is still at a preliminary stage. In this work, we
focus on detecting human-object interactions (HOIs) in social scene images,
which is demanding in terms of research and increasingly useful for practical
applications. To undertake social tasks interacting with objects, humans direct
their attention and move their body based on their intention. Based on this
observation, we provide a unique computational perspective to explore human
intention in HOI detection. Specifically, the proposed human intention-driven
HOI detection (iHOI) framework models human pose with the relative distances
from body joints to the object instances. It also utilizes human gaze to guide
the attended contextual regions in a weakly-supervised setting. In addition, we
propose a hard negative sampling strategy to address the problem of
mis-grouping. We perform extensive experiments on two benchmark datasets,
namely V-COCO and HICO-DET. The efficacy of each proposed component has also
been validated.
|
[
{
"created": "Wed, 29 Aug 2018 13:25:50 GMT",
"version": "v1"
},
{
"created": "Sun, 22 Sep 2019 11:45:38 GMT",
"version": "v2"
}
] |
2019-09-24
|
[
[
"Xu",
"Bingjie",
""
],
[
"Li",
"Junnan",
""
],
[
"Wong",
"Yongkang",
""
],
[
"Kankanhalli",
"Mohan S.",
""
],
[
"Zhao",
"Qi",
""
]
] |
Recent advances in instance-level detection tasks lay a strong foundation for genuine comprehension of visual scenes. However, the ability to fully comprehend a social scene is still at a preliminary stage. In this work, we focus on detecting human-object interactions (HOIs) in social scene images, which is demanding in terms of research and increasingly useful for practical applications. To undertake social tasks interacting with objects, humans direct their attention and move their body based on their intention. Based on this observation, we provide a unique computational perspective to explore human intention in HOI detection. Specifically, the proposed human intention-driven HOI detection (iHOI) framework models human pose with the relative distances from body joints to the object instances. It also utilizes human gaze to guide the attended contextual regions in a weakly-supervised setting. In addition, we propose a hard negative sampling strategy to address the problem of mis-grouping. We perform extensive experiments on two benchmark datasets, namely V-COCO and HICO-DET. The efficacy of each proposed component has also been validated.
|
2101.03230
|
Junjie Zhong
|
Junjie Zhong and Hiromitsu Hattori
|
Generation of Traffic Flows in Multi-Agent Traffic Simulation with Agent
Behavior Model based on Deep Reinforcement Learning
|
Experiment data may be wrong due to the method "Repeated and Partial
Training". This method may not converge to the optimal policy
| null | null | null |
cs.MA cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In multi-agent-based traffic simulation, agents are typically made to move
by following existing instructions, mechanically and unnaturally imitating
human behavior. Human drivers perform acceleration or deceleration
irregularly all the time, which seems unnecessary in some conditions. To let
agents in traffic simulation behave more like humans and recognize other
agents' behavior in complex conditions, we propose a unified mechanism for
agents to learn to decide various accelerations using deep reinforcement
learning, based on a combination of regenerated visual images revealing
notable features and numerical vectors containing important data such as
instantaneous speed. By handling batches of sequential data, agents are
enabled to recognize surrounding agents' behavior and decide their own
acceleration. In addition, we can generate a traffic flow that behaves
diversely, simulating real traffic flow, by using an architecture of fully
decentralized training and fully centralized execution without violating
Markov assumptions.
|
[
{
"created": "Sat, 26 Dec 2020 15:13:06 GMT",
"version": "v1"
},
{
"created": "Mon, 25 Jan 2021 05:00:00 GMT",
"version": "v2"
}
] |
2021-01-26
|
[
[
"Zhong",
"Junjie",
""
],
[
"Hattori",
"Hiromitsu",
""
]
] |
In multi-agent-based traffic simulation, agents are typically made to move by following existing instructions, mechanically and unnaturally imitating human behavior. Human drivers perform acceleration or deceleration irregularly all the time, which seems unnecessary in some conditions. To let agents in traffic simulation behave more like humans and recognize other agents' behavior in complex conditions, we propose a unified mechanism for agents to learn to decide various accelerations using deep reinforcement learning, based on a combination of regenerated visual images revealing notable features and numerical vectors containing important data such as instantaneous speed. By handling batches of sequential data, agents are enabled to recognize surrounding agents' behavior and decide their own acceleration. In addition, we can generate a traffic flow that behaves diversely, simulating real traffic flow, by using an architecture of fully decentralized training and fully centralized execution without violating Markov assumptions.
|
1505.04785
|
Emanuel Diamant
|
Emanuel Diamant
|
Advances in Bioinformatics and Computational Biology: Don't take them
too seriously anyway
|
The paper was submitted to the BIOCOMP'15 conference (Las Vegas,
Nevada, USA, July 27-30, 2015) and was accepted as a poster presentation.
arXiv admin note: text overlap with arXiv:1505.04578
| null | null | null |
cs.CE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In the last few decades, we have witnessed a paradigm shift in the study of
nature - from a data-processing-based computational approach to an
information-processing-based cognitive approach. The process is restricted
and often misguided by the lack of a clear understanding of what information
is and how it should be treated in research applications (in general) and in
biological studies (in particular). This paper intends to provide some
remedies for this bizarre situation.
|
[
{
"created": "Mon, 18 May 2015 10:18:20 GMT",
"version": "v1"
}
] |
2015-05-20
|
[
[
"Diamant",
"Emanuel",
""
]
] |
In the last few decades, we have witnessed a paradigm shift in the study of nature - from a data-processing-based computational approach to an information-processing-based cognitive approach. The process is restricted and often misguided by the lack of a clear understanding of what information is and how it should be treated in research applications (in general) and in biological studies (in particular). This paper intends to provide some remedies for this bizarre situation.
|
1303.3469
|
Hassan Bashir A
|
Hassan A. Bashir and Richard S. Neville
|
Hybrid Evolutionary Computation for Continuous Optimization
|
Companion Publications for this Technical Memorandum, available at
IEEE Xplore, are: [1] H. A. Bashir and R. S. Neville, "Convergence
measurement in evolutionary computation using Price's theorem," IEEE (CEC),
2012. [2] H. A. Bashir and R. S. Neville, "A hybrid evolutionary computation
algorithm for global optimization," IEEE (CEC), 2012
| null | null |
Technical Memorandum 2011-v.01
|
cs.NE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Hybrid optimization algorithms have gained popularity as it has become
apparent there cannot be a universal optimization strategy which is globally
more beneficial than any other. Despite their popularity, hybridization
frameworks require more detailed categorization regarding: the nature of the
problem domain, the constituent algorithms, the coupling schema and the
intended area of application. This report proposes a hybrid algorithm for
solving small to large-scale continuous global optimization problems. It
comprises evolutionary computation (EC) algorithms and a sequential quadratic
programming (SQP) algorithm; combined in a collaborative portfolio. The SQP is
a gradient based local search method. To optimize the individual contributions
of the EC and SQP algorithms for the overall success of the proposed hybrid
system, improvements were made in key features of these algorithms. The report
proposes enhancements in: i) the evolutionary algorithm; ii) a new convergence
detection mechanism; and iii) the methods for evaluating the search directions
and step sizes for the SQP local search algorithm. The aim of the proposed
hybrid design was to ensure that the two algorithms complement each other by
exploring and exploiting the problem search space. Preliminary results show
that an adept hybridization of evolutionary algorithms with a suitable local
search method can yield a robust and efficient means of solving a wide range
of global optimization problems. Finally, a discussion of
the outcomes of the initial investigation and a review of the associated
challenges and inherent limitations of the proposed method is presented to
complete the investigation. The report highlights extensive research,
particularly, some potential case studies and application areas.
|
[
{
"created": "Thu, 14 Mar 2013 14:59:32 GMT",
"version": "v1"
}
] |
2013-03-15
|
[
[
"Bashir",
"Hassan A.",
""
],
[
"Neville",
"Richard S.",
""
]
] |
Hybrid optimization algorithms have gained popularity as it has become apparent there cannot be a universal optimization strategy which is globally more beneficial than any other. Despite their popularity, hybridization frameworks require more detailed categorization regarding: the nature of the problem domain, the constituent algorithms, the coupling schema and the intended area of application. This report proposes a hybrid algorithm for solving small to large-scale continuous global optimization problems. It comprises evolutionary computation (EC) algorithms and a sequential quadratic programming (SQP) algorithm; combined in a collaborative portfolio. The SQP is a gradient based local search method. To optimize the individual contributions of the EC and SQP algorithms for the overall success of the proposed hybrid system, improvements were made in key features of these algorithms. The report proposes enhancements in: i) the evolutionary algorithm; ii) a new convergence detection mechanism; and iii) the methods for evaluating the search directions and step sizes for the SQP local search algorithm. The aim of the proposed hybrid design was to ensure that the two algorithms complement each other by exploring and exploiting the problem search space. Preliminary results show that an adept hybridization of evolutionary algorithms with a suitable local search method can yield a robust and efficient means of solving a wide range of global optimization problems. Finally, a discussion of the outcomes of the initial investigation and a review of the associated challenges and inherent limitations of the proposed method is presented to complete the investigation. The report highlights extensive research, particularly, some potential case studies and application areas.
|
2012.12394
|
Stefano Giovanni Rizzo
|
Stefano Giovanni Rizzo, Linsey Pang, Yixian Chen, Sanjay Chawla
|
Probabilistic Outlier Detection and Generation
| null | null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A new method for outlier detection and generation is introduced by lifting
data into the space of probability distributions which are not analytically
expressible, but from which samples can be drawn using a neural generator.
Given a mixture of unknown latent inlier and outlier distributions, a
Wasserstein double autoencoder is used to both detect and generate inliers and
outliers. The proposed method, named WALDO (Wasserstein Autoencoder for
Learning the Distribution of Outliers), is evaluated on classical data sets
including MNIST, CIFAR10 and KDD99 for detection accuracy and robustness. We
give an example of outlier detection on a real retail sales data set and an
example of outlier generation for simulating intrusion attacks. However, we
foresee many application scenarios where WALDO can be used. To the best of our
knowledge this is the first work that studies both outlier detection and
generation together.
|
[
{
"created": "Tue, 22 Dec 2020 22:42:56 GMT",
"version": "v1"
}
] |
2020-12-24
|
[
[
"Rizzo",
"Stefano Giovanni",
""
],
[
"Pang",
"Linsey",
""
],
[
"Chen",
"Yixian",
""
],
[
"Chawla",
"Sanjay",
""
]
] |
A new method for outlier detection and generation is introduced by lifting data into the space of probability distributions which are not analytically expressible, but from which samples can be drawn using a neural generator. Given a mixture of unknown latent inlier and outlier distributions, a Wasserstein double autoencoder is used to both detect and generate inliers and outliers. The proposed method, named WALDO (Wasserstein Autoencoder for Learning the Distribution of Outliers), is evaluated on classical data sets including MNIST, CIFAR10 and KDD99 for detection accuracy and robustness. We give an example of outlier detection on a real retail sales data set and an example of outlier generation for simulating intrusion attacks. However, we foresee many application scenarios where WALDO can be used. To the best of our knowledge this is the first work that studies both outlier detection and generation together.
|
2407.20535
|
Cynthia Steinhardt
|
Cynthia R. Steinhardt, Menoua Keshishian, Nima Mesgarani, Kim
Stachenfeld
|
DeepSpeech models show Human-like Performance and Processing of Cochlear
Implant Inputs
|
NEURIPS preprint
| null | null | null |
cs.NE cs.SD eess.AS
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Cochlear implants (CIs) are arguably the most successful neural implant,
having restored hearing to over one million people worldwide. While CI research
has focused on modeling the cochlear activations in response to low-level
acoustic features, we hypothesize that the success of these implants is due in
large part to the role of the upstream network in extracting useful features
from a degraded signal and learned statistics of language to resolve the
signal. In this work, we use the deep neural network (DNN) DeepSpeech2 as a
paradigm to investigate how natural input and cochlear implant-based inputs are
processed over time. We generate naturalistic and cochlear implant-like inputs
from spoken sentences and test the similarity of model performance to human
performance on analogous phoneme recognition tests. Our model reproduces error
patterns in reaction time and phoneme confusion patterns under noise conditions
in normal hearing and CI participant studies. We then use interpretability
techniques to determine where and when confusions arise when processing
naturalistic and CI-like inputs. We find that dynamics over time in each layer
are affected by context as well as input type. Dynamics of all phonemes diverge
during confusion and comprehension within the same time window, which is
temporally shifted backward in each layer of the network. There is a modulation
of this signal during processing of CI which resembles changes in human EEG
signals in the auditory stream. This reduction likely relates to the reduction
of encoded phoneme identity. These findings suggest that we have a viable model
in which to explore the loss of speech-related information in time and that we
can use it to find population-level encoding signals to target when optimizing
cochlear implant inputs to improve encoding of essential speech-related
information and improve perception.
|
[
{
"created": "Tue, 30 Jul 2024 04:32:27 GMT",
"version": "v1"
}
] |
2024-07-31
|
[
[
"Steinhardt",
"Cynthia R.",
""
],
[
"Keshishian",
"Menoua",
""
],
[
"Mesgarani",
"Nima",
""
],
[
"Stachenfeld",
"Kim",
""
]
] |
Cochlear implants (CIs) are arguably the most successful neural implant, having restored hearing to over one million people worldwide. While CI research has focused on modeling the cochlear activations in response to low-level acoustic features, we hypothesize that the success of these implants is due in large part to the role of the upstream network in extracting useful features from a degraded signal and learned statistics of language to resolve the signal. In this work, we use the deep neural network (DNN) DeepSpeech2 as a paradigm to investigate how natural input and cochlear implant-based inputs are processed over time. We generate naturalistic and cochlear implant-like inputs from spoken sentences and test the similarity of model performance to human performance on analogous phoneme recognition tests. Our model reproduces error patterns in reaction time and phoneme confusion patterns under noise conditions in normal hearing and CI participant studies. We then use interpretability techniques to determine where and when confusions arise when processing naturalistic and CI-like inputs. We find that dynamics over time in each layer are affected by context as well as input type. Dynamics of all phonemes diverge during confusion and comprehension within the same time window, which is temporally shifted backward in each layer of the network. There is a modulation of this signal during processing of CI which resembles changes in human EEG signals in the auditory stream. This reduction likely relates to the reduction of encoded phoneme identity. These findings suggest that we have a viable model in which to explore the loss of speech-related information in time and that we can use it to find population-level encoding signals to target when optimizing cochlear implant inputs to improve encoding of essential speech-related information and improve perception.
|
1302.7080
|
Wesam Elshamy
|
Hassan M Emara, Wesam Elshamy, Ahmed Bahgat
|
Parameter Identification of Induction Motor Using Modified Particle
Swarm Optimization Algorithm
|
IEEE International Symposium on Industrial Electronics Jul 2008,
Cambridge, UK
| null |
10.1109/ISIE.2008.4677254
| null |
cs.NE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents a new technique for induction motor parameter
identification. The proposed technique is based on a simple startup test using
a standard V/F inverter. The recorded startup currents are compared to those
obtained by simulating an induction motor model. A modified PSO algorithm
is used to find the model parameters that minimize the sum squared
error between the measured and the simulated currents. The performance of the
modified PSO is compared with other optimization methods including line search,
conventional PSO and Genetic Algorithms. Simulation results demonstrate the
ability of the proposed technique to capture the true values of the machine
parameters and the superiority of the results obtained using the modified PSO
over other optimization techniques.
|
[
{
"created": "Thu, 28 Feb 2013 04:41:53 GMT",
"version": "v1"
}
] |
2016-11-17
|
[
[
"Emara",
"Hassan M",
""
],
[
"Elshamy",
"Wesam",
""
],
[
"Bahgat",
"Ahmed",
""
]
] |
This paper presents a new technique for induction motor parameter identification. The proposed technique is based on a simple startup test using a standard V/F inverter. The recorded startup currents are compared to those obtained by simulating an induction motor model. A modified PSO algorithm is used to find the model parameters that minimize the sum squared error between the measured and the simulated currents. The performance of the modified PSO is compared with other optimization methods including line search, conventional PSO and Genetic Algorithms. Simulation results demonstrate the ability of the proposed technique to capture the true values of the machine parameters and the superiority of the results obtained using the modified PSO over other optimization techniques.
|
2010.14916
|
Yoann Dieudonn\'e
|
S\'ebastien Bouchard, Yoann Dieudonn\'e, Arnaud Labourel, Andrzej Pelc
|
Almost-Optimal Deterministic Treasure Hunt in Arbitrary Graphs
| null | null | null | null |
cs.DS cs.CC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A mobile agent navigating along edges of a simple connected graph, either
finite or countably infinite, has to find an inert target (treasure) hidden in
one of the nodes. This task is known as treasure hunt. The agent has no a
priori knowledge of the graph, of the location of the treasure or of the
initial distance to it. The cost of a treasure hunt algorithm is the worst-case
number of edge traversals performed by the agent until finding the treasure.
Awerbuch, Betke, Rivest and Singh [3] considered graph exploration and treasure
hunt for finite graphs in a restricted model where the agent has a fuel tank
that can be replenished only at the starting node $s$. The size of the tank is
$B=2(1+\alpha)r$, for some positive real constant $\alpha$, where $r$, called
the radius of the graph, is the maximum distance from $s$ to any other node.
The tank of size $B$ allows the agent to make at most $\lfloor B\rfloor$ edge
traversals between two consecutive visits at node $s$.
Let $e(d)$ be the number of edges whose at least one extremity is at distance
less than $d$ from $s$. Awerbuch, Betke, Rivest and Singh [3] conjectured that
it is impossible to find a treasure hidden in a node at distance at most $d$ at
cost nearly linear in $e(d)$. We first design a deterministic treasure hunt
algorithm working in the model without any restrictions on the moves of the
agent at cost $\mathcal{O}(e(d) \log d)$, and then show how to modify this
algorithm to work in the model from [3] with the same complexity. Thus we
refute the above twenty-year-old conjecture. We observe that no treasure hunt
algorithm can beat cost $\Theta(e(d))$ for all graphs and thus our algorithms
are also almost optimal.
|
[
{
"created": "Wed, 28 Oct 2020 12:25:23 GMT",
"version": "v1"
},
{
"created": "Tue, 3 Nov 2020 12:17:11 GMT",
"version": "v2"
},
{
"created": "Wed, 4 Nov 2020 14:32:43 GMT",
"version": "v3"
},
{
"created": "Thu, 11 Feb 2021 12:43:49 GMT",
"version": "v4"
},
{
"created": "Sat, 13 Feb 2021 10:27:18 GMT",
"version": "v5"
}
] |
2021-02-16
|
[
[
"Bouchard",
"Sébastien",
""
],
[
"Dieudonné",
"Yoann",
""
],
[
"Labourel",
"Arnaud",
""
],
[
"Pelc",
"Andrzej",
""
]
] |
A mobile agent navigating along edges of a simple connected graph, either finite or countably infinite, has to find an inert target (treasure) hidden in one of the nodes. This task is known as treasure hunt. The agent has no a priori knowledge of the graph, of the location of the treasure or of the initial distance to it. The cost of a treasure hunt algorithm is the worst-case number of edge traversals performed by the agent until finding the treasure. Awerbuch, Betke, Rivest and Singh [3] considered graph exploration and treasure hunt for finite graphs in a restricted model where the agent has a fuel tank that can be replenished only at the starting node $s$. The size of the tank is $B=2(1+\alpha)r$, for some positive real constant $\alpha$, where $r$, called the radius of the graph, is the maximum distance from $s$ to any other node. The tank of size $B$ allows the agent to make at most $\lfloor B\rfloor$ edge traversals between two consecutive visits at node $s$. Let $e(d)$ be the number of edges whose at least one extremity is at distance less than $d$ from $s$. Awerbuch, Betke, Rivest and Singh [3] conjectured that it is impossible to find a treasure hidden in a node at distance at most $d$ at cost nearly linear in $e(d)$. We first design a deterministic treasure hunt algorithm working in the model without any restrictions on the moves of the agent at cost $\mathcal{O}(e(d) \log d)$, and then show how to modify this algorithm to work in the model from [3] with the same complexity. Thus we refute the above twenty-year-old conjecture. We observe that no treasure hunt algorithm can beat cost $\Theta(e(d))$ for all graphs and thus our algorithms are also almost optimal.
|
2201.02374
|
Haisen Zhao
|
Fanchao Zhong and Yonglai Xu and Haisen Zhao and Lin Lu
|
As-Continuous-As-Possible Extrusion Fabrication of Surface Models
|
16 pages, 23 figures
| null | null | null |
cs.GR
|
http://creativecommons.org/licenses/by/4.0/
|
We propose a novel computational framework for optimizing the toolpath
continuity in fabricating surface models on an extrusion-based 3D printer.
Toolpath continuity has been a critical issue for extrusion-based fabrications
that affects both quality and efficiency. Transfer moves cause non-smooth or
bumpy surfaces, an effect that worsens for materials with large inertia such as clay. For
surface models, the effects of continuity are even more severe, in terms of
surface quality and model stability. In this paper, we introduce an original
criterion "one-path-patch" (OPP), for representing a shell surface patch that
can be traversed in one path considering fabrication constraints. We study the
properties of an OPP and the merging operations for OPPs, and propose a
bottom-up OPP merging procedure for decomposing the given shell surface into a
minimal number of OPPs and generating the "as-continuous-as-possible" (ACAP)
toolpath. Furthermore, we customize the path planning algorithm with a curved
layer printing scheme, which reduces the staircase defect and improves the
toolpath continuity via possibly connecting multiple segments. We evaluate the
ACAP algorithm for both ceramic and thermoplastic materials, and results
demonstrate that it improves the fabrication of surface models in both surface
quality and efficiency.
|
[
{
"created": "Fri, 7 Jan 2022 09:18:59 GMT",
"version": "v1"
},
{
"created": "Mon, 10 Jan 2022 20:22:50 GMT",
"version": "v2"
},
{
"created": "Sat, 28 May 2022 08:39:43 GMT",
"version": "v3"
}
] |
2022-05-31
|
[
[
"Zhong",
"Fanchao",
""
],
[
"Xu",
"Yonglai",
""
],
[
"Zhao",
"Haisen",
""
],
[
"Lu",
"Lin",
""
]
] |
We propose a novel computational framework for optimizing the toolpath continuity in fabricating surface models on an extrusion-based 3D printer. Toolpath continuity has been a critical issue for extrusion-based fabrications that affects both quality and efficiency. Transfer moves cause non-smooth or bumpy surfaces, an effect that worsens for materials with large inertia such as clay. For surface models, the effects of continuity are even more severe, in terms of surface quality and model stability. In this paper, we introduce an original criterion "one-path-patch" (OPP), for representing a shell surface patch that can be traversed in one path considering fabrication constraints. We study the properties of an OPP and the merging operations for OPPs, and propose a bottom-up OPP merging procedure for decomposing the given shell surface into a minimal number of OPPs and generating the "as-continuous-as-possible" (ACAP) toolpath. Furthermore, we customize the path planning algorithm with a curved layer printing scheme, which reduces the staircase defect and improves the toolpath continuity via possibly connecting multiple segments. We evaluate the ACAP algorithm for both ceramic and thermoplastic materials, and results demonstrate that it improves the fabrication of surface models in both surface quality and efficiency.
|
2110.10548
|
Ningning Xie
|
Ningning Xie, Tamara Norman, Dominik Grewe, Dimitrios Vytiniotis
|
Synthesizing Optimal Parallelism Placement and Reduction Strategies on
Hierarchical Systems for Deep Learning
| null | null | null | null |
cs.PL cs.DC cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
We present a novel characterization of the mapping of multiple parallelism
forms (e.g. data and model parallelism) onto hierarchical accelerator systems
that is hierarchy-aware and greatly reduces the space of software-to-hardware
mapping. We experimentally verify the substantial effect of these mappings on
all-reduce performance (up to 448x). We offer a novel syntax-guided program
synthesis framework that is able to decompose reductions over one or more
parallelism axes to sequences of collectives in a hierarchy- and mapping-aware
way. For 69% of parallelism placements and user-requested reductions, our
framework synthesizes programs that outperform the default all-reduce
implementation when evaluated on different GPU hierarchies (max 2.04x, average
1.27x). We complement our synthesis tool with a simulator exceeding 90% top-10
accuracy, which therefore reduces the need for massive evaluations of synthesis
results to determine a small set of optimal programs and mappings.
|
[
{
"created": "Wed, 20 Oct 2021 13:05:49 GMT",
"version": "v1"
},
{
"created": "Tue, 16 Nov 2021 12:54:39 GMT",
"version": "v2"
}
] |
2021-11-17
|
[
[
"Xie",
"Ningning",
""
],
[
"Norman",
"Tamara",
""
],
[
"Grewe",
"Dominik",
""
],
[
"Vytiniotis",
"Dimitrios",
""
]
] |
We present a novel characterization of the mapping of multiple parallelism forms (e.g. data and model parallelism) onto hierarchical accelerator systems that is hierarchy-aware and greatly reduces the space of software-to-hardware mapping. We experimentally verify the substantial effect of these mappings on all-reduce performance (up to 448x). We offer a novel syntax-guided program synthesis framework that is able to decompose reductions over one or more parallelism axes to sequences of collectives in a hierarchy- and mapping-aware way. For 69% of parallelism placements and user-requested reductions, our framework synthesizes programs that outperform the default all-reduce implementation when evaluated on different GPU hierarchies (max 2.04x, average 1.27x). We complement our synthesis tool with a simulator exceeding 90% top-10 accuracy, which therefore reduces the need for massive evaluations of synthesis results to determine a small set of optimal programs and mappings.
|
2312.02207
|
Xiaojun Jia
|
Xiaojun Jia, Jindong Gu, Yihao Huang, Simeng Qin, Qing Guo, Yang Liu,
Xiaochun Cao
|
TranSegPGD: Improving Transferability of Adversarial Examples on
Semantic Segmentation
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The transferability of adversarial examples on image classification has been
systematically explored, enabling adversarial examples to be generated in black-box
mode. However, the transferability of adversarial examples on semantic
segmentation has been largely overlooked. In this paper, we propose an
effective two-stage adversarial attack strategy to improve the transferability
of adversarial examples on semantic segmentation, dubbed TranSegPGD.
Specifically, at the first stage, every pixel in an input image is divided into
different branches based on its adversarial property. Different branches are
assigned different weights for optimization to improve the adversarial
performance of all pixels. We assign high weights to the loss of the
hard-to-attack pixels to misclassify all pixels. At the second stage, the
pixels are divided into different branches based on their transferable property
which is dependent on Kullback-Leibler divergence. Different branches are
assigned different weights for optimization to improve the transferability of
the adversarial examples. We assign high weights to the loss of the
high-transferability pixels to improve the transferability of adversarial
examples. Extensive experiments with various segmentation models are conducted
on PASCAL VOC 2012 and Cityscapes datasets to demonstrate the effectiveness of
the proposed method. The proposed adversarial attack method can achieve
state-of-the-art performance.
|
[
{
"created": "Sun, 3 Dec 2023 00:48:33 GMT",
"version": "v1"
}
] |
2023-12-06
|
[
[
"Jia",
"Xiaojun",
""
],
[
"Gu",
"Jindong",
""
],
[
"Huang",
"Yihao",
""
],
[
"Qin",
"Simeng",
""
],
[
"Guo",
"Qing",
""
],
[
"Liu",
"Yang",
""
],
[
"Cao",
"Xiaochun",
""
]
] |
The transferability of adversarial examples on image classification has been systematically explored, enabling adversarial examples to be generated in black-box mode. However, the transferability of adversarial examples on semantic segmentation has been largely overlooked. In this paper, we propose an effective two-stage adversarial attack strategy to improve the transferability of adversarial examples on semantic segmentation, dubbed TranSegPGD. Specifically, at the first stage, every pixel in an input image is divided into different branches based on its adversarial property. Different branches are assigned different weights for optimization to improve the adversarial performance of all pixels. We assign high weights to the loss of the hard-to-attack pixels to misclassify all pixels. At the second stage, the pixels are divided into different branches based on their transferable property which is dependent on Kullback-Leibler divergence. Different branches are assigned different weights for optimization to improve the transferability of the adversarial examples. We assign high weights to the loss of the high-transferability pixels to improve the transferability of adversarial examples. Extensive experiments with various segmentation models are conducted on PASCAL VOC 2012 and Cityscapes datasets to demonstrate the effectiveness of the proposed method. The proposed adversarial attack method can achieve state-of-the-art performance.
|
2204.04421
|
Ronghao Dang
|
Ronghao Dang, Zhuofan Shi, Liuyi Wang, Zongtao He, Chengju Liu, Qijun
Chen
|
Unbiased Directed Object Attention Graph for Object Navigation
|
13 pages, accepted by ACM Mutimedia 2022
| null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Object navigation tasks require agents to locate specific objects in unknown
environments based on visual information. Previously, graph convolutions were
used to implicitly explore the relationships between objects. However, due to
differences in visibility among objects, it is easy to generate biases in
object attention. Thus, in this paper, we propose a directed object attention
(DOA) graph to guide the agent in explicitly learning the attention
relationships between objects, thereby reducing the object attention bias. In
particular, we use the DOA graph to perform unbiased adaptive object attention
(UAOA) on the object features and unbiased adaptive image attention (UAIA) on
the raw images, respectively. To distinguish features in different branches, a
concise adaptive branch energy distribution (ABED) method is proposed. We
assess our methods on the AI2-Thor dataset. Compared with the state-of-the-art
(SOTA) method, our method reports 7.4%, 8.1% and 17.6% increases in success rate
(SR), success weighted by path length (SPL) and success weighted by action
efficiency (SAE), respectively.
|
[
{
"created": "Sat, 9 Apr 2022 08:13:05 GMT",
"version": "v1"
},
{
"created": "Fri, 8 Jul 2022 01:41:38 GMT",
"version": "v2"
}
] |
2022-07-11
|
[
[
"Dang",
"Ronghao",
""
],
[
"Shi",
"Zhuofan",
""
],
[
"Wang",
"Liuyi",
""
],
[
"He",
"Zongtao",
""
],
[
"Liu",
"Chengju",
""
],
[
"Chen",
"Qijun",
""
]
] |
Object navigation tasks require agents to locate specific objects in unknown environments based on visual information. Previously, graph convolutions were used to implicitly explore the relationships between objects. However, due to differences in visibility among objects, it is easy to generate biases in object attention. Thus, in this paper, we propose a directed object attention (DOA) graph to guide the agent in explicitly learning the attention relationships between objects, thereby reducing the object attention bias. In particular, we use the DOA graph to perform unbiased adaptive object attention (UAOA) on the object features and unbiased adaptive image attention (UAIA) on the raw images, respectively. To distinguish features in different branches, a concise adaptive branch energy distribution (ABED) method is proposed. We assess our methods on the AI2-Thor dataset. Compared with the state-of-the-art (SOTA) method, our method reports 7.4%, 8.1% and 17.6% increases in success rate (SR), success weighted by path length (SPL) and success weighted by action efficiency (SAE), respectively.
|
2205.11908
|
Siddhartha Siddhartha
|
Siddhartha
|
An interpretation of the final fully connected layer
| null | null | null | null |
cs.LG cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In recent years neural networks have achieved state-of-the-art accuracy for
various tasks, but the interpretation of the generated outputs still remains
difficult. In this work we attempt to provide a method to understand the learnt
weights in the final fully connected layer in image classification models. We
motivate our method by drawing a connection between the policy gradient
objective in RL and the supervised learning objective. We suggest that the commonly
used cross entropy based supervised learning objective can be regarded as a
special case of the policy gradient objective. Using this insight we propose a
method to find the most discriminative and confusing parts of an image. Our
method does not make any prior assumptions about neural network architecture and
has low computational cost. We apply our method on publicly available
pre-trained models and report the generated results.
|
[
{
"created": "Tue, 24 May 2022 09:05:19 GMT",
"version": "v1"
}
] |
2022-05-25
|
[
[
"Siddhartha",
"",
""
]
] |
In recent years neural networks have achieved state-of-the-art accuracy for various tasks, but the interpretation of the generated outputs still remains difficult. In this work we attempt to provide a method to understand the learnt weights in the final fully connected layer in image classification models. We motivate our method by drawing a connection between the policy gradient objective in RL and the supervised learning objective. We suggest that the commonly used cross entropy based supervised learning objective can be regarded as a special case of the policy gradient objective. Using this insight we propose a method to find the most discriminative and confusing parts of an image. Our method does not make any prior assumptions about neural network architecture and has low computational cost. We apply our method on publicly available pre-trained models and report the generated results.
|
2311.11707
|
Dimitri Watel
|
Dominique Barth (DAVID, UVSQ), Thierry Mautor (DAVID, UVSQ), Dimitri
Watel (SAMOVAR, ENSIIE), Marc-Antoine Weisser (LISN, GALaC)
|
Configuring an heterogeneous smartgrid network: complexity and
approximations for tree topologies
|
Journal of Global Optimization, 2023
| null |
10.1007/s10898-023-01338-0
| null |
cs.CC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We address the problem of configuring a power distribution network with
reliability and resilience objectives by satisfying the demands of the
consumers and saturating each production source as little as possible. We
consider power distribution networks containing source nodes producing
electricity, nodes representing electricity consumers and switches between
them. Configuring this network consists in deciding the orientation of the
links between the nodes of the network. The electric flow is a direct
consequence of the chosen configuration and can be computed in polynomial time.
It is valid if it satisfies the demand of each consumer and capacity
constraints on the network. In such a case, we study the problem of determining
a feasible solution that balances the loads of the sources, that is their
production rates. We use three metrics to measure the quality of a solution:
minimizing the maximum load, maximizing the minimum load and minimizing the
difference of the maximum and the minimum loads. This defines optimization
problems called respectively min-M, max-m and min-R. In the case where the
graph of the network is a tree, it is known that the problem of building a
valid configuration is polynomial. We show the three optimization variants have
distinct properties regarding the theoretical complexity and the
approximability. Particularly, we show that min-M is polynomial, that max-m is
NP-Hard but belongs to the class FPTAS and that min-R is NP-Hard, cannot be
approximated to within any exponential relative ratio but, for any $\epsilon$ >
0, there exists an algorithm for which the value of the returned solution
equals the value of an optimal solution shifted by at most $\epsilon$.
|
[
{
"created": "Mon, 20 Nov 2023 12:22:26 GMT",
"version": "v1"
}
] |
2023-11-21
|
[
[
"Barth",
"Dominique",
"",
"DAVID, UVSQ"
],
[
"Mautor",
"Thierry",
"",
"DAVID, UVSQ"
],
[
"Watel",
"Dimitri",
"",
"SAMOVAR, ENSIIE"
],
[
"Weisser",
"Marc-Antoine",
"",
"LISN, GALaC"
]
] |
We address the problem of configuring a power distribution network with reliability and resilience objectives by satisfying the demands of the consumers and saturating each production source as little as possible. We consider power distribution networks containing source nodes producing electricity, nodes representing electricity consumers and switches between them. Configuring this network consists in deciding the orientation of the links between the nodes of the network. The electric flow is a direct consequence of the chosen configuration and can be computed in polynomial time. It is valid if it satisfies the demand of each consumer and capacity constraints on the network. In such a case, we study the problem of determining a feasible solution that balances the loads of the sources, that is their production rates. We use three metrics to measure the quality of a solution: minimizing the maximum load, maximizing the minimum load and minimizing the difference of the maximum and the minimum loads. This defines optimization problems called respectively min-M, max-m and min-R. In the case where the graph of the network is a tree, it is known that the problem of building a valid configuration is polynomial. We show the three optimization variants have distinct properties regarding the theoretical complexity and the approximability. Particularly, we show that min-M is polynomial, that max-m is NP-Hard but belongs to the class FPTAS and that min-R is NP-Hard, cannot be approximated to within any exponential relative ratio but, for any $\epsilon$ > 0, there exists an algorithm for which the value of the returned solution equals the value of an optimal solution shifted by at most $\epsilon$.
|
2301.10608
|
Arlindo Oliveira L
|
Tiago Oliveira, Tiago Marques, Arlindo L. Oliveira
|
Connecting metrics for shape-texture knowledge in computer vision
|
7 pages, 3 figures
| null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Modern artificial neural networks, including convolutional neural networks
and vision transformers, have mastered several computer vision tasks, including
object recognition. However, there are many significant differences between the
behavior and robustness of these systems and of the human visual system. Deep
neural networks remain brittle and susceptible to many changes in the image
that do not cause humans to misclassify images. Part of this different behavior
may be explained by the type of features humans and deep neural networks use in
vision tasks. Humans tend to classify objects according to their shape while
deep neural networks seem to rely mostly on texture. Exploring this question is
relevant, since it may lead to better performing neural network architectures
and to a better understanding of the workings of the vision system of primates.
In this work, we advance the state of the art in our understanding of this
phenomenon, by extending previous analyses to a much larger set of deep neural
network architectures. We found that the performance of models in image
classification tasks is highly correlated with their shape bias measured at the
output and penultimate layer. Furthermore, our results showed that the number
of neurons that represent shape and texture are strongly anti-correlated, thus
providing evidence that there is competition between these two types of
features. Finally, we observed that while in general there is a correlation
between performance and shape bias, there are significant variations between
architecture families.
|
[
{
"created": "Wed, 25 Jan 2023 14:37:42 GMT",
"version": "v1"
}
] |
2023-01-26
|
[
[
"Oliveira",
"Tiago",
""
],
[
"Marques",
"Tiago",
""
],
[
"Oliveira",
"Arlindo L.",
""
]
] |
Modern artificial neural networks, including convolutional neural networks and vision transformers, have mastered several computer vision tasks, including object recognition. However, there are many significant differences between the behavior and robustness of these systems and of the human visual system. Deep neural networks remain brittle and susceptible to many changes in the image that do not cause humans to misclassify images. Part of this different behavior may be explained by the type of features humans and deep neural networks use in vision tasks. Humans tend to classify objects according to their shape while deep neural networks seem to rely mostly on texture. Exploring this question is relevant, since it may lead to better performing neural network architectures and to a better understanding of the workings of the vision system of primates. In this work, we advance the state of the art in our understanding of this phenomenon, by extending previous analyses to a much larger set of deep neural network architectures. We found that the performance of models in image classification tasks is highly correlated with their shape bias measured at the output and penultimate layer. Furthermore, our results showed that the number of neurons that represent shape and texture are strongly anti-correlated, thus providing evidence that there is competition between these two types of features. Finally, we observed that while in general there is a correlation between performance and shape bias, there are significant variations between architecture families.
|
2302.06354
|
Gal Kaplun
|
Gal Kaplun, Andrey Gurevich, Tal Swisa, Mazor David, Shai
Shalev-Shwartz and Eran Malach
|
Less is More: Selective Layer Finetuning with SubTuning
| null | null | null | null |
cs.LG cs.AI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Finetuning a pretrained model has become a standard approach for training
neural networks on novel tasks, resulting in fast convergence and improved
performance. In this work, we study an alternative finetuning method, where
instead of finetuning all the weights of the network, we only train a carefully
chosen subset of layers, keeping the rest of the weights frozen at their
initial (pretrained) values. We demonstrate that \emph{subset finetuning} (or
SubTuning) often achieves accuracy comparable to full finetuning of the model,
and even surpasses the performance of full finetuning when training data is
scarce. Therefore, SubTuning allows deploying new tasks at minimal
computational cost, while enjoying the benefits of finetuning the entire model.
This yields a simple and effective method for multi-task learning, where
different tasks do not interfere with one another, and yet share most of the
resources at inference time. We demonstrate the efficiency of SubTuning across
multiple tasks, using different network architectures and pretraining methods.
|
[
{
"created": "Mon, 13 Feb 2023 13:38:46 GMT",
"version": "v1"
},
{
"created": "Tue, 14 Feb 2023 02:03:11 GMT",
"version": "v2"
},
{
"created": "Sun, 2 Jul 2023 12:28:46 GMT",
"version": "v3"
}
] |
2023-07-04
|
[
[
"Kaplun",
"Gal",
""
],
[
"Gurevich",
"Andrey",
""
],
[
"Swisa",
"Tal",
""
],
[
"David",
"Mazor",
""
],
[
"Shalev-Shwartz",
"Shai",
""
],
[
"Malach",
"Eran",
""
]
] |
Finetuning a pretrained model has become a standard approach for training neural networks on novel tasks, resulting in fast convergence and improved performance. In this work, we study an alternative finetuning method, where instead of finetuning all the weights of the network, we only train a carefully chosen subset of layers, keeping the rest of the weights frozen at their initial (pretrained) values. We demonstrate that \emph{subset finetuning} (or SubTuning) often achieves accuracy comparable to full finetuning of the model, and even surpasses the performance of full finetuning when training data is scarce. Therefore, SubTuning allows deploying new tasks at minimal computational cost, while enjoying the benefits of finetuning the entire model. This yields a simple and effective method for multi-task learning, where different tasks do not interfere with one another, and yet share most of the resources at inference time. We demonstrate the efficiency of SubTuning across multiple tasks, using different network architectures and pretraining methods.
|
2304.02444
|
P\'eter Antal
|
P\'eter Antal, Tam\'as P\'eni, and Roland T\'oth
|
Autonomous Hook-Based Grasping and Transportation with Quadcopters
| null | null | null | null |
cs.RO cs.SY eess.SY
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Payload grasping and transportation with quadcopters is an active research
area that has rapidly developed over the last decade. To grasp a payload
without human interaction, most state-of-the-art approaches apply robotic arms
that are attached to the quadcopter body. However, due to the large weight and
power consumption of these aerial manipulators, their agility and flight time
are limited. This paper proposes a motion control and planning method for
transportation with a lightweight, passive manipulator structure that consists
of a hook attached to a quadrotor using a 1 DoF revolute joint. To perform
payload grasping, transportation, and release, first, time-optimal reference
trajectories are designed through specific waypoints to ensure the fast and
reliable execution of the tasks. Then, a two-stage motion control approach is
developed based on a robust geometric controller for precise and reliable
reference tracking and a linear--quadratic payload regulator for rapid setpoint
stabilization of the payload swing. Furthermore, stability of the closed-loop
system is mathematically proven to give a safety guarantee for its operation. The
proposed control architecture and design are evaluated in a high-fidelity
physical simulator, and also in real flight experiments, using a custom-made
quadrotor--hook manipulator platform.
|
[
{
"created": "Wed, 5 Apr 2023 14:02:53 GMT",
"version": "v1"
},
{
"created": "Tue, 26 Mar 2024 08:13:01 GMT",
"version": "v2"
}
] |
2024-03-27
|
[
[
"Antal",
"Péter",
""
],
[
"Péni",
"Tamás",
""
],
[
"Tóth",
"Roland",
""
]
] |
Payload grasping and transportation with quadcopters is an active research area that has rapidly developed over the last decade. To grasp a payload without human interaction, most state-of-the-art approaches apply robotic arms that are attached to the quadcopter body. However, due to the large weight and power consumption of these aerial manipulators, their agility and flight time are limited. This paper proposes a motion control and planning method for transportation with a lightweight, passive manipulator structure that consists of a hook attached to a quadrotor using a 1 DoF revolute joint. To perform payload grasping, transportation, and release, first, time-optimal reference trajectories are designed through specific waypoints to ensure the fast and reliable execution of the tasks. Then, a two-stage motion control approach is developed based on a robust geometric controller for precise and reliable reference tracking and a linear--quadratic payload regulator for rapid setpoint stabilization of the payload swing. Furthermore, stability of the closed-loop system is mathematically proven to give a safety guarantee for its operation. The proposed control architecture and design are evaluated in a high-fidelity physical simulator, and also in real flight experiments, using a custom-made quadrotor--hook manipulator platform.
|
2103.03434
|
Ojas Kanhere
|
O. Kanhere, A. Chopra, A. Thornburg, T. S. Rappaport, and S. S.
Ghassemzadeh
|
Performance Impact Analysis of Beam Switching in Millimeter Wave
Vehicular Communications
|
IEEE 93rd Vehicular Technology Conference (VTC-Spring)
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Millimeter wave wireless spectrum deployments will allow vehicular
communications to share high data rate vehicular sensor data in real-time. The
highly directional nature of wireless links in millimeter spectral bands will
require continuous channel measurements to ensure the transmitter (TX) and
receiver (RX) beams are aligned to provide the best channel. Using real-world
vehicular mmWave measurement data at 28 GHz, we determine the optimal beam
sweeping period, i.e. the frequency of the channel measurements, to align the
RX beams to the best channel directions for maximizing the
vehicle-to-infrastructure (V2I) throughput. We show that in a realistic
vehicular traffic environment in Austin, TX, for a vehicle traveling at an
average speed of 10.5 mph, a beam sweeping period of 300 ms in future V2I
communication standards would maximize the V2I throughput, using a system of
four RX phased arrays that scanned the channel 360 degrees in the azimuth and
30 degrees above and below the boresight. We also investigate the impact of the
number of active RX chains controlling the steerable phased arrays on V2I
throughput. Reducing the number of RX chains controlling the phased arrays
helps reduce the cost of the vehicular mmWave hardware while multiple RX
chains, although more expensive, provide more robustness to beam direction
changes at the vehicle, allowing near maximum throughput over a wide range of
beam sweep periods. We show that the overhead of utilizing one RX chain instead
of four leads to a 10% drop in mean V2I throughput over six non-line-of-sight
runs in real traffic conditions, with each run being 10 to 20 seconds long over
a distance of 40 to 90 meters.
|
[
{
"created": "Fri, 5 Mar 2021 02:16:01 GMT",
"version": "v1"
}
] |
2021-03-08
|
[
[
"Kanhere",
"O.",
""
],
[
"Chopra",
"A.",
""
],
[
"Thornburg",
"A.",
""
],
[
"Rappaport",
"T. S.",
""
],
[
"Ghassemzadeh",
"S. S.",
""
]
] |
Millimeter wave wireless spectrum deployments will allow vehicular communications to share high data rate vehicular sensor data in real-time. The highly directional nature of wireless links in millimeter spectral bands will require continuous channel measurements to ensure the transmitter (TX) and receiver (RX) beams are aligned to provide the best channel. Using real-world vehicular mmWave measurement data at 28 GHz, we determine the optimal beam sweeping period, i.e. the frequency of the channel measurements, to align the RX beams to the best channel directions for maximizing the vehicle-to-infrastructure (V2I) throughput. We show that in a realistic vehicular traffic environment in Austin, TX, for a vehicle traveling at an average speed of 10.5 mph, a beam sweeping period of 300 ms in future V2I communication standards would maximize the V2I throughput, using a system of four RX phased arrays that scanned the channel 360 degrees in the azimuth and 30 degrees above and below the boresight. We also investigate the impact of the number of active RX chains controlling the steerable phased arrays on V2I throughput. Reducing the number of RX chains controlling the phased arrays helps reduce the cost of the vehicular mmWave hardware while multiple RX chains, although more expensive, provide more robustness to beam direction changes at the vehicle, allowing near maximum throughput over a wide range of beam sweep periods. We show that the overhead of utilizing one RX chain instead of four leads to a 10% drop in mean V2I throughput over six non-line-of-sight runs in real traffic conditions, with each run being 10 to 20 seconds long over a distance of 40 to 90 meters.
|
2310.13343
|
Yuxuan Zhao
|
Xiaoliang Chen, Liangbin Li, Le Chang, Yunhe Huang, Yuxuan Zhao,
Yuxiao Zhang, Dinuo Li
|
Challenges and Contributing Factors in the Utilization of Large Language
Models (LLMs)
| null | null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With the development of large language models (LLMs) like the GPT series,
their widespread use across various application scenarios presents a myriad of
challenges. This review initially explores the issue of domain specificity,
where LLMs may struggle to provide precise answers to specialized questions
within niche fields. The problem of knowledge forgetting arises as these LLMs
might find it hard to balance old and new information. The knowledge repetition
phenomenon reveals that sometimes LLMs might deliver overly mechanized
responses, lacking depth and originality. Furthermore, knowledge illusion
describes situations where LLMs might provide answers that seem insightful but
are actually superficial, while knowledge toxicity focuses on harmful or biased
information outputs. These challenges underscore problems in the training data
and algorithmic design of LLMs. To address these issues, it's suggested to
diversify training data, fine-tune models, enhance transparency and
interpretability, and incorporate ethics and fairness training. Future
technological trends might lean towards iterative methodologies, multimodal
learning, model personalization and customization, and real-time learning and
feedback mechanisms. In conclusion, future LLMs should prioritize fairness,
transparency, and ethics, ensuring they uphold high moral and ethical standards
when serving humanity.
|
[
{
"created": "Fri, 20 Oct 2023 08:13:36 GMT",
"version": "v1"
}
] |
2023-10-23
|
[
[
"Chen",
"Xiaoliang",
""
],
[
"Li",
"Liangbin",
""
],
[
"Chang",
"Le",
""
],
[
"Huang",
"Yunhe",
""
],
[
"Zhao",
"Yuxuan",
""
],
[
"Zhang",
"Yuxiao",
""
],
[
"Li",
"Dinuo",
""
]
] |
With the development of large language models (LLMs) like the GPT series, their widespread use across various application scenarios presents a myriad of challenges. This review initially explores the issue of domain specificity, where LLMs may struggle to provide precise answers to specialized questions within niche fields. The problem of knowledge forgetting arises as these LLMs might find it hard to balance old and new information. The knowledge repetition phenomenon reveals that sometimes LLMs might deliver overly mechanized responses, lacking depth and originality. Furthermore, knowledge illusion describes situations where LLMs might provide answers that seem insightful but are actually superficial, while knowledge toxicity focuses on harmful or biased information outputs. These challenges underscore problems in the training data and algorithmic design of LLMs. To address these issues, it's suggested to diversify training data, fine-tune models, enhance transparency and interpretability, and incorporate ethics and fairness training. Future technological trends might lean towards iterative methodologies, multimodal learning, model personalization and customization, and real-time learning and feedback mechanisms. In conclusion, future LLMs should prioritize fairness, transparency, and ethics, ensuring they uphold high moral and ethical standards when serving humanity.
|
2004.03153
|
Yang Zhang
|
Yang Zhang, Changhui Hu, Xiaobo Lu
|
Adaptive Multiscale Illumination-Invariant Feature Representation for
Undersampled Face Recognition
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents a novel illumination-invariant feature representation
approach used to eliminate the effect of varying illumination in undersampled
face recognition. Firstly, a new illumination level classification technique
based on Singular Value Decomposition (SVD) is proposed to judge the
illumination level of the input image. Secondly, we construct the logarithm
edgemaps feature (LEF) based on the Lambertian model and the local near-neighbor
feature of the face image, applied to local regions at multiple scales.
Then, the illumination level is referenced to construct the high-performance
LEF and to realize adaptive fusion of the multi-scale LEFs of the face image,
yielding the JLEF-feature. In addition, a constraint operation is used to
remove useless high-frequency interference, disentangling useful facial
feature edges and constructing the AJLEF-face. Finally, our methods and other
state-of-the-art algorithms, including deep learning methods, are tested on
Extended Yale B, CMU PIE, AR, as well as our self-built Driver database
(SDB). The experimental results demonstrate that the JLEF-feature and
AJLEF-face outperform other related approaches for undersampled face
recognition under varying illumination.
|
[
{
"created": "Tue, 7 Apr 2020 06:48:44 GMT",
"version": "v1"
}
] |
2020-04-08
|
[
[
"Zhang",
"Yang",
""
],
[
"Hu",
"Changhui",
""
],
[
"Lu",
"Xiaobo",
""
]
] |
This paper presents a novel illumination-invariant feature representation approach used to eliminate the effect of varying illumination in undersampled face recognition. Firstly, a new illumination level classification technique based on Singular Value Decomposition (SVD) is proposed to judge the illumination level of the input image. Secondly, we construct the logarithm edgemaps feature (LEF) based on the Lambertian model and the local near-neighbor feature of the face image, applied to local regions at multiple scales. Then, the illumination level is referenced to construct the high-performance LEF and to realize adaptive fusion of the multi-scale LEFs of the face image, yielding the JLEF-feature. In addition, a constraint operation is used to remove useless high-frequency interference, disentangling useful facial feature edges and constructing the AJLEF-face. Finally, our methods and other state-of-the-art algorithms, including deep learning methods, are tested on Extended Yale B, CMU PIE, AR, as well as our self-built Driver database (SDB). The experimental results demonstrate that the JLEF-feature and AJLEF-face outperform other related approaches for undersampled face recognition under varying illumination.
|
1912.07747
|
William Hsu
|
Huichen Yang, Carlos A. Aguirre, Maria F. De La Torre, Derek
Christensen, Luis Bobadilla, Emily Davich, Jordan Roth, Lei Luo, Yihong
Theis, Alice Lam, T. Yong-Jin Han, David Buttler, William H. Hsu
|
Pipelines for Procedural Information Extraction from Scientific
Literature: Towards Recipes using Machine Learning and Data Science
|
15th International Conference on Document Analysis and Recognition
Workshops (ICDARW 2019)
| null |
10.1109/ICDARW.2019.10037
|
2019-1
|
cs.IR cs.CL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper describes a machine learning and data science pipeline for
structured information extraction from documents, implemented as a suite of
open-source tools and extensions to existing tools. It centers around a
methodology for extracting procedural information in the form of recipes,
stepwise procedures for creating an artifact (in this case synthesizing a
nanomaterial), from published scientific literature. From our overall goal of
producing recipes from free text, we derive the technical objectives of a
system consisting of pipeline stages: document acquisition and filtering,
payload extraction, recipe step extraction as a relationship extraction task,
recipe assembly, and presentation through an information retrieval interface
with question answering (QA) functionality. This system meets computational
information and knowledge management (CIKM) requirements of metadata-driven
payload extraction, named entity extraction, and relationship extraction from
text. Functional contributions described in this paper include semi-supervised
machine learning methods for PDF filtering and payload extraction tasks,
followed by structured extraction and data transformation tasks beginning with
section extraction, recipe steps as information tuples, and finally assembled
recipes. Measurable objective criteria for extraction quality include precision
and recall of recipe steps, ordering constraints, and QA accuracy, precision,
and recall. Results, key novel contributions, and significant open problems
derived from this work center around the attribution of these holistic quality
measures to specific machine learning and inference stages of the pipeline,
each with their performance measures. The desired recipes contain identified
preconditions, material inputs, and operations, and constitute the overall
output generated by our computational information and knowledge management
(CIKM) system.
|
[
{
"created": "Mon, 16 Dec 2019 23:04:03 GMT",
"version": "v1"
}
] |
2019-12-18
|
[
[
"Yang",
"Huichen",
""
],
[
"Aguirre",
"Carlos A.",
""
],
[
"De La Torre",
"Maria F.",
""
],
[
"Christensen",
"Derek",
""
],
[
"Bobadilla",
"Luis",
""
],
[
"Davich",
"Emily",
""
],
[
"Roth",
"Jordan",
""
],
[
"Luo",
"Lei",
""
],
[
"Theis",
"Yihong",
""
],
[
"Lam",
"Alice",
""
],
[
"Han",
"T. Yong-Jin",
""
],
[
"Buttler",
"David",
""
],
[
"Hsu",
"William H.",
""
]
] |
This paper describes a machine learning and data science pipeline for structured information extraction from documents, implemented as a suite of open-source tools and extensions to existing tools. It centers around a methodology for extracting procedural information in the form of recipes, stepwise procedures for creating an artifact (in this case synthesizing a nanomaterial), from published scientific literature. From our overall goal of producing recipes from free text, we derive the technical objectives of a system consisting of pipeline stages: document acquisition and filtering, payload extraction, recipe step extraction as a relationship extraction task, recipe assembly, and presentation through an information retrieval interface with question answering (QA) functionality. This system meets computational information and knowledge management (CIKM) requirements of metadata-driven payload extraction, named entity extraction, and relationship extraction from text. Functional contributions described in this paper include semi-supervised machine learning methods for PDF filtering and payload extraction tasks, followed by structured extraction and data transformation tasks beginning with section extraction, recipe steps as information tuples, and finally assembled recipes. Measurable objective criteria for extraction quality include precision and recall of recipe steps, ordering constraints, and QA accuracy, precision, and recall. Results, key novel contributions, and significant open problems derived from this work center around the attribution of these holistic quality measures to specific machine learning and inference stages of the pipeline, each with their performance measures. The desired recipes contain identified preconditions, material inputs, and operations, and constitute the overall output generated by our computational information and knowledge management (CIKM) system.
|
2302.12458
|
Hoi Man Lam
|
Hoi Man Lam, W. Jared Walker, Lucas Jonasch, Dimitri Schreiber, and
Michael C. Yip
|
Design and Mechanics of Cable-Driven Rolling Diaphragm Transmission for
High-Transparency Robotic Motion
|
7 pages, 13 figures
| null | null | null |
cs.RO cs.SY eess.SY
|
http://creativecommons.org/licenses/by/4.0/
|
Applications of rolling diaphragm transmissions for medical and teleoperated
robotics are of great interest, due to the low friction of rolling diaphragms
combined with the power density and stiffness of hydraulic transmissions.
However, the stiffness-enabling pressure preloads can form a tradeoff against
bearing loading in some rolling diaphragm layouts, and transmission setup can
be difficult. Utilization of cable drives complements the rolling diaphragm
transmission's advantages, but maintaining cable tension is crucial for optimal
and consistent performance. In this paper, a coaxial opposed rolling diaphragm
layout with cable drive and an electronic transmission control system are
investigated, with a focus on system reliability and scalability. Mechanical
features are proposed which enable force balancing, decoupling of transmission
pressure from bearing loads, and maintenance of cable tension. Key
considerations and procedures for automation of transmission setup, phasing,
and operation are also presented. We also present an analysis of system
stiffness to identify key compliance contributors, and conduct experiments to
validate prototype design performance.
|
[
{
"created": "Fri, 24 Feb 2023 05:18:00 GMT",
"version": "v1"
}
] |
2023-02-27
|
[
[
"Lam",
"Hoi Man",
""
],
[
"Walker",
"W. Jared",
""
],
[
"Jonasch",
"Lucas",
""
],
[
"Schreiber",
"Dimitri",
""
],
[
"Yip",
"Michael C.",
""
]
] |
Applications of rolling diaphragm transmissions for medical and teleoperated robotics are of great interest, due to the low friction of rolling diaphragms combined with the power density and stiffness of hydraulic transmissions. However, the stiffness-enabling pressure preloads can form a tradeoff against bearing loading in some rolling diaphragm layouts, and transmission setup can be difficult. Utilization of cable drives complements the rolling diaphragm transmission's advantages, but maintaining cable tension is crucial for optimal and consistent performance. In this paper, a coaxial opposed rolling diaphragm layout with cable drive and an electronic transmission control system are investigated, with a focus on system reliability and scalability. Mechanical features are proposed which enable force balancing, decoupling of transmission pressure from bearing loads, and maintenance of cable tension. Key considerations and procedures for automation of transmission setup, phasing, and operation are also presented. We also present an analysis of system stiffness to identify key compliance contributors, and conduct experiments to validate prototype design performance.
|
1211.1265
|
Emmanuel d'Angelo
|
Emmanuel d'Angelo, Laurent Jacques, Alexandre Alahi, Pierre
Vandergheynst
|
From Bits to Images: Inversion of Local Binary Descriptors
| null | null | null | null |
cs.CV cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Local Binary Descriptors are becoming more and more popular for image
matching tasks, especially when going mobile. While they are extensively
studied in this context, their ability to carry enough information in order to
infer the original image is seldom addressed.
In this work, we leverage an inverse problem approach to show that it is
possible to directly reconstruct the image content from Local Binary
Descriptors. This process relies on very broad assumptions besides the
knowledge of the pattern of the descriptor at hand. This generalizes previous
results that required either a prior learning database or non-binarized
features.
Furthermore, our reconstruction scheme reveals differences in the way
different Local Binary Descriptors capture and encode image information. Hence,
the potential applications of our work are multiple, ranging from privacy
issues caused by eavesdropping image keypoints streamed by mobile devices to
the design of better descriptors through the visualization and the analysis of
their geometric content.
|
[
{
"created": "Tue, 6 Nov 2012 15:32:34 GMT",
"version": "v1"
}
] |
2012-11-07
|
[
[
"d'Angelo",
"Emmanuel",
""
],
[
"Jacques",
"Laurent",
""
],
[
"Alahi",
"Alexandre",
""
],
[
"Vandergheynst",
"Pierre",
""
]
] |
Local Binary Descriptors are becoming more and more popular for image matching tasks, especially when going mobile. While they are extensively studied in this context, their ability to carry enough information in order to infer the original image is seldom addressed. In this work, we leverage an inverse problem approach to show that it is possible to directly reconstruct the image content from Local Binary Descriptors. This process relies on very broad assumptions besides the knowledge of the pattern of the descriptor at hand. This generalizes previous results that required either a prior learning database or non-binarized features. Furthermore, our reconstruction scheme reveals differences in the way different Local Binary Descriptors capture and encode image information. Hence, the potential applications of our work are multiple, ranging from privacy issues caused by eavesdropping image keypoints streamed by mobile devices to the design of better descriptors through the visualization and the analysis of their geometric content.
|
1403.2431
|
Gabriele Fici
|
Gabriele Fici, Travis Gagie, Juha K\"arkk\"ainen, Dominik Kempa
|
A Subquadratic Algorithm for Minimum Palindromic Factorization
|
Accepted for publication in Journal of Discrete Algorithms
| null |
10.1016/j.jda.2014.08.001
| null |
cs.DS cs.DM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We give an $\mathcal{O}(n \log n)$-time, $\mathcal{O}(n)$-space algorithm for
factoring a string into the minimum number of palindromic substrings. That is,
given a string $S [1..n]$, in $\mathcal{O}(n \log n)$ time our algorithm
returns the minimum number of palindromes $S_1,\ldots, S_\ell$ such that $S =
S_1 \cdots S_\ell$. We also show that the time complexity is $\mathcal{O}(n)$
on average and $\Omega(n\log n)$ in the worst case. The last result is based on
a characterization of the palindromic structure of Zimin words.
|
[
{
"created": "Mon, 10 Mar 2014 22:18:40 GMT",
"version": "v1"
},
{
"created": "Thu, 7 Aug 2014 09:52:23 GMT",
"version": "v2"
}
] |
2020-12-15
|
[
[
"Fici",
"Gabriele",
""
],
[
"Gagie",
"Travis",
""
],
[
"Kärkkäinen",
"Juha",
""
],
[
"Kempa",
"Dominik",
""
]
] |
We give an $\mathcal{O}(n \log n)$-time, $\mathcal{O}(n)$-space algorithm for factoring a string into the minimum number of palindromic substrings. That is, given a string $S [1..n]$, in $\mathcal{O}(n \log n)$ time our algorithm returns the minimum number of palindromes $S_1,\ldots, S_\ell$ such that $S = S_1 \cdots S_\ell$. We also show that the time complexity is $\mathcal{O}(n)$ on average and $\Omega(n\log n)$ in the worst case. The last result is based on a characterization of the palindromic structure of Zimin words.
|
1904.03828
|
Dacheng Tao
|
Chen Gong, Dacheng Tao, Xiaojun Chang, Jian Yang
|
Ensemble Teaching for Hybrid Label Propagation
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Label propagation aims to iteratively diffuse the label information from
labeled examples to unlabeled examples over a similarity graph. Current label
propagation algorithms cannot consistently yield satisfactory performance due
to two reasons: one is the instability of a single propagation method in
dealing with various practical data, and the other is an improper propagation
sequence that ignores the labeling difficulties of different examples. To remedy
the above defects, this paper proposes a novel propagation algorithm called hybrid
diffusion under ensemble teaching (HyDEnT). Specifically, HyDEnT integrates
multiple propagation methods as base learners to fully exploit their individual
wisdom, which helps HyDEnT to be stable and obtain consistent encouraging
results. More importantly, HyDEnT conducts propagation under the guidance of an
ensemble of teachers. That is to say, in every propagation round the simplest
curriculum examples are wisely designated by a teaching algorithm, so that
their labels can be reliably and accurately decided by the learners. To
optimally choose these simplest examples, every teacher in the ensemble should
comprehensively consider the examples' difficulties from its own viewpoint, as
well as the common knowledge shared by all the teachers. This is accomplished
by a designed optimization problem, which can be efficiently solved via the
block coordinate descent method. Thanks to the efforts of the teachers, all the
unlabeled examples are logically propagated from simple to difficult, leading
to better propagation quality of HyDEnT than the existing methods.
|
[
{
"created": "Mon, 8 Apr 2019 04:10:40 GMT",
"version": "v1"
}
] |
2019-04-09
|
[
[
"Gong",
"Chen",
""
],
[
"Tao",
"Dacheng",
""
],
[
"Chang",
"Xiaojun",
""
],
[
"Yang",
"Jian",
""
]
] |
Label propagation aims to iteratively diffuse the label information from labeled examples to unlabeled examples over a similarity graph. Current label propagation algorithms cannot consistently yield satisfactory performance due to two reasons: one is the instability of a single propagation method in dealing with various practical data, and the other is an improper propagation sequence that ignores the labeling difficulties of different examples. To remedy the above defects, this paper proposes a novel propagation algorithm called hybrid diffusion under ensemble teaching (HyDEnT). Specifically, HyDEnT integrates multiple propagation methods as base learners to fully exploit their individual wisdom, which helps HyDEnT to be stable and obtain consistent encouraging results. More importantly, HyDEnT conducts propagation under the guidance of an ensemble of teachers. That is to say, in every propagation round the simplest curriculum examples are wisely designated by a teaching algorithm, so that their labels can be reliably and accurately decided by the learners. To optimally choose these simplest examples, every teacher in the ensemble should comprehensively consider the examples' difficulties from its own viewpoint, as well as the common knowledge shared by all the teachers. This is accomplished by a designed optimization problem, which can be efficiently solved via the block coordinate descent method. Thanks to the efforts of the teachers, all the unlabeled examples are logically propagated from simple to difficult, leading to better propagation quality of HyDEnT than the existing methods.
|
2306.02275
|
Yulin He
|
Yulin He, Wei Chen, Yusong Tan, Siqi Wang
|
USD: Unknown Sensitive Detector Empowered by Decoupled Objectness and
Segment Anything Model
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Open World Object Detection (OWOD) is a novel and challenging computer vision
task that enables object detection with the ability to detect unknown objects.
Existing methods typically estimate the object likelihood with an additional
objectness branch, but ignore the conflict in learning objectness and
classification boundaries, which oppose each other on the semantic manifold and
training objective. To address this issue, we propose a simple yet effective
learning strategy, namely Decoupled Objectness Learning (DOL), which divides
the learning of these two boundaries into suitable decoder layers. Moreover,
detecting unknown objects comprehensively requires a large amount of
annotations, but labeling all unknown objects is both difficult and expensive.
Therefore, we propose to take advantage of the recent Large Vision Model (LVM),
specifically the Segment Anything Model (SAM), to enhance the detection of
unknown objects. Nevertheless, the output results of SAM contain noise,
including backgrounds and fragments, so we introduce an Auxiliary Supervision
Framework (ASF) that uses pseudo-labeling and soft-weighting strategies to
alleviate the negative impact of noise. Extensive experiments on popular
benchmarks, including Pascal VOC and MS COCO, demonstrate the effectiveness of
our approach. Our proposed Unknown Sensitive Detector (USD) outperforms the
recent state-of-the-art methods in terms of Unknown Recall, achieving
significant improvements of 14.3\%, 15.5\%, and 8.9\% on the M-OWODB, and
27.1\%, 29.1\%, and 25.1\% on the S-OWODB.
|
[
{
"created": "Sun, 4 Jun 2023 06:42:09 GMT",
"version": "v1"
}
] |
2023-06-06
|
[
[
"He",
"Yulin",
""
],
[
"Chen",
"Wei",
""
],
[
"Tan",
"Yusong",
""
],
[
"Wang",
"Siqi",
""
]
] |
Open World Object Detection (OWOD) is a novel and challenging computer vision task that enables object detection with the ability to detect unknown objects. Existing methods typically estimate the object likelihood with an additional objectness branch, but ignore the conflict in learning objectness and classification boundaries, which oppose each other on the semantic manifold and training objective. To address this issue, we propose a simple yet effective learning strategy, namely Decoupled Objectness Learning (DOL), which divides the learning of these two boundaries into suitable decoder layers. Moreover, detecting unknown objects comprehensively requires a large amount of annotations, but labeling all unknown objects is both difficult and expensive. Therefore, we propose to take advantage of the recent Large Vision Model (LVM), specifically the Segment Anything Model (SAM), to enhance the detection of unknown objects. Nevertheless, the output results of SAM contain noise, including backgrounds and fragments, so we introduce an Auxiliary Supervision Framework (ASF) that uses pseudo-labeling and soft-weighting strategies to alleviate the negative impact of noise. Extensive experiments on popular benchmarks, including Pascal VOC and MS COCO, demonstrate the effectiveness of our approach. Our proposed Unknown Sensitive Detector (USD) outperforms the recent state-of-the-art methods in terms of Unknown Recall, achieving significant improvements of 14.3\%, 15.5\%, and 8.9\% on the M-OWODB, and 27.1\%, 29.1\%, and 25.1\% on the S-OWODB.
|
cs/0408063
|
Alexander Haubold
|
Alexander Haubold, John R. Kender
|
Analysis and Visualization of Index Words from Audio Transcripts of
Instructional Videos
|
2004 IEEE International Workshop on Multimedia Content-based Analysis
and Retrieval; 20 pages, 8 figures, 7 tables
| null |
10.1109/MMSE.2004.27
| null |
cs.IR cs.MM
| null |
We introduce new techniques for extracting, analyzing, and visualizing
textual contents from instructional videos of low production quality. Using
Automatic Speech Recognition, approximate transcripts (~75% Word Error Rate)
are obtained from the originally highly compressed videos of university
courses, each comprising between 10 and 30 lectures. Text material in the form
of books or papers that accompanies the course is then used to filter meaningful
phrases from the seemingly incoherent transcripts. The resulting index into the
transcripts is tied together and visualized in 3 experimental graphs that help
in understanding the overall course structure and provide a tool for localizing
certain topics for indexing. We specifically discuss a Transcript Index Map,
which graphically lays out key phrases for a course, a Textbook Chapter to
Transcript Match, and finally a Lecture Transcript Similarity graph, which
clusters semantically similar lectures. We test our methods and tools on 7 full
courses with 230 hours of video and 273 transcripts. We are able to extract up
to 98 unique key terms for a given transcript and up to 347 unique key terms
for an entire course. The accuracy of the Textbook Chapter to Transcript Match
exceeds 70% on average. The methods used can be applied to genres of video in
which there are recurrent thematic words (news, sports, meetings,...)
|
[
{
"created": "Fri, 27 Aug 2004 20:45:32 GMT",
"version": "v1"
}
] |
2016-11-15
|
[
[
"Haubold",
"Alexander",
""
],
[
"Kender",
"John R.",
""
]
] |
We introduce new techniques for extracting, analyzing, and visualizing textual contents from instructional videos of low production quality. Using Automatic Speech Recognition, approximate transcripts (~75% Word Error Rate) are obtained from the originally highly compressed videos of university courses, each comprising between 10 and 30 lectures. Text material in the form of books or papers that accompanies the course is then used to filter meaningful phrases from the seemingly incoherent transcripts. The resulting index into the transcripts is tied together and visualized in 3 experimental graphs that help in understanding the overall course structure and provide a tool for localizing certain topics for indexing. We specifically discuss a Transcript Index Map, which graphically lays out key phrases for a course, a Textbook Chapter to Transcript Match, and finally a Lecture Transcript Similarity graph, which clusters semantically similar lectures. We test our methods and tools on 7 full courses with 230 hours of video and 273 transcripts. We are able to extract up to 98 unique key terms for a given transcript and up to 347 unique key terms for an entire course. The accuracy of the Textbook Chapter to Transcript Match exceeds 70% on average. The methods used can be applied to genres of video in which there are recurrent thematic words (news, sports, meetings,...)
|
2104.01322
|
Wolfgang Utschick
|
Wolfgang Utschick, Valentina Rizzello, Michael Joham, Zhengxiang Ma,
and Leonard Piazzi
|
Learning the CSI Recovery in FDD Systems
| null | null | null | null |
cs.IT math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
We propose an innovative machine learning-based technique to address the
problem of channel acquisition at the base station in frequency division duplex
systems. In this context, the base station reconstructs the full channel state
information in the downlink frequency range based on limited downlink channel
state information feedback from the mobile terminal. The channel state
information recovery is based on a convolutional neural network which is
trained exclusively on collected channel state samples acquired in the uplink
frequency domain. No acquisition of training samples in the downlink frequency
range is required at all. Finally, after a detailed presentation and analysis
of the proposed technique and its performance, the "transfer learning"
assumption of the convolutional neural network that is central to the proposed
approach is validated with an analysis based on the maximum mean discrepancy
metric.
|
[
{
"created": "Sat, 3 Apr 2021 06:35:24 GMT",
"version": "v1"
},
{
"created": "Sun, 3 Oct 2021 10:31:55 GMT",
"version": "v2"
}
] |
2021-10-05
|
[
[
"Utschick",
"Wolfgang",
""
],
[
"Rizzello",
"Valentina",
""
],
[
"Joham",
"Michael",
""
],
[
"Ma",
"Zhengxiang",
""
],
[
"Piazzi",
"Leonard",
""
]
] |
We propose an innovative machine learning-based technique to address the problem of channel acquisition at the base station in frequency division duplex systems. In this context, the base station reconstructs the full channel state information in the downlink frequency range based on limited downlink channel state information feedback from the mobile terminal. The channel state information recovery is based on a convolutional neural network which is trained exclusively on collected channel state samples acquired in the uplink frequency domain. No acquisition of training samples in the downlink frequency range is required at all. Finally, after a detailed presentation and analysis of the proposed technique and its performance, the "transfer learning" assumption of the convolutional neural network that is central to the proposed approach is validated with an analysis based on the maximum mean discrepancy metric.
|
1204.4111
|
Tobias Harks
|
Tobias Harks and Britta Peis
|
Resource Buying Games
| null | null | null | null |
cs.GT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In resource buying games a set of players jointly buys a subset of a finite
resource set E (e.g., machines, edges, or nodes in a digraph). The cost of a
resource e depends on the number (or load) of players using e, and has to be
paid completely by the players before it becomes available. Each player i needs
at least one set of a predefined family S_i in 2^E to be available. Thus,
resource buying games can be seen as a variant of congestion games in which the
load-dependent costs of the resources can be shared arbitrarily among the
players. A strategy of player i in resource buying games is a tuple consisting
of one of i's desired configurations S_i together with a payment vector p_i in
R^E_+ indicating how much i is willing to contribute towards the purchase of
the chosen resources. In this paper, we study the existence and computational
complexity of pure Nash equilibria (PNE, for short) of resource buying games.
In contrast to classical congestion games for which equilibria are guaranteed
to exist, the existence of equilibria in resource buying games strongly depends
on the underlying structure of the S_i's and the behavior of the cost
functions. We show that for marginally non-increasing cost functions, matroids
are exactly the right structure to consider, and that resource buying games
with marginally non-decreasing cost functions always admit a PNE.
|
[
{
"created": "Wed, 18 Apr 2012 15:47:25 GMT",
"version": "v1"
}
] |
2012-04-19
|
[
[
"Harks",
"Tobias",
""
],
[
"Peis",
"Britta",
""
]
] |
In resource buying games a set of players jointly buys a subset of a finite resource set E (e.g., machines, edges, or nodes in a digraph). The cost of a resource e depends on the number (or load) of players using e, and has to be paid completely by the players before it becomes available. Each player i needs at least one set of a predefined family S_i in 2^E to be available. Thus, resource buying games can be seen as a variant of congestion games in which the load-dependent costs of the resources can be shared arbitrarily among the players. A strategy of player i in resource buying games is a tuple consisting of one of i's desired configurations S_i together with a payment vector p_i in R^E_+ indicating how much i is willing to contribute towards the purchase of the chosen resources. In this paper, we study the existence and computational complexity of pure Nash equilibria (PNE, for short) of resource buying games. In contrast to classical congestion games for which equilibria are guaranteed to exist, the existence of equilibria in resource buying games strongly depends on the underlying structure of the S_i's and the behavior of the cost functions. We show that for marginally non-increasing cost functions, matroids are exactly the right structure to consider, and that resource buying games with marginally non-decreasing cost functions always admit a PNE.
|
2304.08464
|
Yifan Yin
|
Yifan Yin, Yutai Wang, Yunpu Zhang, Russell H. Taylor, and Balazs P.
Vagvolgyi
|
Applications of Uncalibrated Image Based Visual Servoing in Micro- and
Macroscale Robotics
| null | null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
We present a robust markerless image based visual servoing method that
enables precision robot control without hand-eye and camera calibrations in 1,
3, and 5 degrees-of-freedom. The system uses two cameras for observing the
workspace and a combination of classical image processing algorithms and deep
learning based methods to detect features on camera images. The only
restriction on the placement of the two cameras is that relevant image features
must be visible in both views. The system enables precise robot-tool to
workspace interactions even when the physical setup is disturbed, for example
if cameras are moved or the workspace shifts during manipulation. The
usefulness of the visual servoing method is demonstrated and evaluated in two
applications: in the calibration of a micro-robotic system that dissects
mosquitoes for the automated production of a malaria vaccine, and a macro-scale
manipulation system for fastening screws using a UR10 robot. Evaluation results
indicate that our image based visual servoing method achieves human-like
manipulation accuracy in challenging setups even without camera calibration.
|
[
{
"created": "Mon, 17 Apr 2023 17:41:02 GMT",
"version": "v1"
}
] |
2023-04-18
|
[
[
"Yin",
"Yifan",
""
],
[
"Wang",
"Yutai",
""
],
[
"Zhang",
"Yunpu",
""
],
[
"Taylor",
"Russell H.",
""
],
[
"Vagvolgyi",
"Balazs P.",
""
]
] |
We present a robust markerless image based visual servoing method that enables precision robot control without hand-eye and camera calibrations in 1, 3, and 5 degrees-of-freedom. The system uses two cameras for observing the workspace and a combination of classical image processing algorithms and deep learning based methods to detect features on camera images. The only restriction on the placement of the two cameras is that relevant image features must be visible in both views. The system enables precise robot-tool to workspace interactions even when the physical setup is disturbed, for example if cameras are moved or the workspace shifts during manipulation. The usefulness of the visual servoing method is demonstrated and evaluated in two applications: in the calibration of a micro-robotic system that dissects mosquitoes for the automated production of a malaria vaccine, and a macro-scale manipulation system for fastening screws using a UR10 robot. Evaluation results indicate that our image based visual servoing method achieves human-like manipulation accuracy in challenging setups even without camera calibration.
|
0705.0817
|
Andrea Lo Pumo
|
Andrea Lo Pumo
|
Quantum Shortest Path Netsukuku
| null | null | null | null |
cs.NI
| null |
This document describes the QSPN, the routing discovery algorithm used by
Netsukuku. Through a deductive analysis, the main properties of the QSPN are
shown. Moreover, a second version of the algorithm is presented.
|
[
{
"created": "Sun, 6 May 2007 20:05:44 GMT",
"version": "v1"
}
] |
2007-05-23
|
[
[
"Pumo",
"Andrea Lo",
""
]
] |
This document describes the QSPN, the routing discovery algorithm used by Netsukuku. Through a deductive analysis, the main properties of the QSPN are shown. Moreover, a second version of the algorithm is presented.
|
1810.09786
|
Fabian Falck
|
Fabian Falck, Sagar Doshi, Nico Smuts, John Lingi, Kim Rants, Petar
Kormushev
|
Human-centered manipulation and navigation with Robot DE NIRO
|
In Proceedings of the IEEE/RSJ International Conference on
Intelligent Robots and Systems (IROS 2018) Workshop "Towards Robots that
Exhibit Manipulation Intelligence", Madrid, Spain, Oct. 1, 2018
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Social assistance robots in health and elderly care have the potential to
support and ease human lives. Given the macrosocial trends of aging and
long-lived populations, robotics-based care research mainly focused on helping
the elderly live independently. In this paper, we introduce Robot DE NIRO, a
research platform that aims to support the supporter (the caregiver) and also
offers direct human-robot interaction for the care recipient. Augmented by
several sensors, DE NIRO is capable of complex manipulation tasks. It reliably
interacts with humans and can autonomously and swiftly navigate through
dynamically changing environments. We describe preliminary experiments in a
demonstrative scenario and discuss DE NIRO's design and capabilities. We put
particular emphases on safe, human-centered interaction procedures implemented
in both hardware and software, including collision avoidance in manipulation
and navigation as well as an intuitive perception stack through speech and face
recognition.
|
[
{
"created": "Tue, 23 Oct 2018 11:30:33 GMT",
"version": "v1"
}
] |
2018-10-24
|
[
[
"Falck",
"Fabian",
""
],
[
"Doshi",
"Sagar",
""
],
[
"Smuts",
"Nico",
""
],
[
"Lingi",
"John",
""
],
[
"Rants",
"Kim",
""
],
[
"Kormushev",
"Petar",
""
]
] |
Social assistance robots in health and elderly care have the potential to support and ease human lives. Given the macrosocial trends of aging and long-lived populations, robotics-based care research mainly focused on helping the elderly live independently. In this paper, we introduce Robot DE NIRO, a research platform that aims to support the supporter (the caregiver) and also offers direct human-robot interaction for the care recipient. Augmented by several sensors, DE NIRO is capable of complex manipulation tasks. It reliably interacts with humans and can autonomously and swiftly navigate through dynamically changing environments. We describe preliminary experiments in a demonstrative scenario and discuss DE NIRO's design and capabilities. We put particular emphases on safe, human-centered interaction procedures implemented in both hardware and software, including collision avoidance in manipulation and navigation as well as an intuitive perception stack through speech and face recognition.
|
1605.02043
|
Lingda Li
|
Lingda Li, Ari B. Hayes, Stephen A. Hackler, Eddy Z. Zhang, Mario
Szegedy, Shuaiwen Leon Song
|
A Graph-based Model for GPU Caching Problems
|
Currently under submission
| null | null | null |
cs.DC cs.PL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Modeling data sharing in GPU programs is a challenging task because of the
massive parallelism and complex data sharing patterns provided by GPU
architectures. Better GPU caching efficiency can be achieved through careful
task scheduling among different threads. Traditionally, in the field of
parallel computing, graph partition models are used to model data communication
and guide task scheduling. However, we discover that the previous methods are
either inaccurate or expensive when applied to GPU programs. In this paper, we
propose a novel task partition model that is accurate and gives rise to the
development of fast and high quality task/data reorganization algorithms. We
demonstrate the effectiveness of the proposed model by rigorous theoretical
analysis of the algorithm bounds and extensive experimental analysis. The
experimental results show that it achieves significant performance improvement
across a representative set of GPU applications.
|
[
{
"created": "Fri, 6 May 2016 19:12:06 GMT",
"version": "v1"
}
] |
2016-10-04
|
[
[
"Li",
"Lingda",
""
],
[
"Hayes",
"Ari B.",
""
],
[
"Hackler",
"Stephen A.",
""
],
[
"Zhang",
"Eddy Z.",
""
],
[
"Szegedy",
"Mario",
""
],
[
"Song",
"Shuaiwen Leon",
""
]
] |
Modeling data sharing in GPU programs is a challenging task because of the massive parallelism and complex data sharing patterns provided by GPU architectures. Better GPU caching efficiency can be achieved through careful task scheduling among different threads. Traditionally, in the field of parallel computing, graph partition models are used to model data communication and guide task scheduling. However, we discover that the previous methods are either inaccurate or expensive when applied to GPU programs. In this paper, we propose a novel task partition model that is accurate and gives rise to the development of fast and high quality task/data reorganization algorithms. We demonstrate the effectiveness of the proposed model by rigorous theoretical analysis of the algorithm bounds and extensive experimental analysis. The experimental results show that it achieves significant performance improvement across a representative set of GPU applications.
|
2308.05368
|
Jacopo Tagliabue
|
Jacopo Tagliabue, Ciro Greco, Luca Bigon
|
Building a serverless Data Lakehouse from spare parts
|
Paper accepted for the Second International Workshop on Composable
Data Management Systems (@ VLDB 2023)
| null | null | null |
cs.DB cs.DC cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The recently proposed Data Lakehouse architecture is built on open file
formats, performance, and first-class support for data transformation, BI and
data science: while the vision stresses the importance of lowering the barrier
for data work, existing implementations often struggle to live up to user
expectations. At Bauplan, we decided to build a new serverless platform to
fulfill the Lakehouse vision. Since building from scratch is a challenge unfit
for a startup, we started by re-using (sometimes unconventionally) existing
projects, and then investing in improving the areas that would give us the
highest marginal gains for the developer experience. In this work, we review
user experience, high-level architecture and tooling decisions, and conclude by
sharing plans for future development.
|
[
{
"created": "Thu, 10 Aug 2023 06:24:25 GMT",
"version": "v1"
}
] |
2023-08-11
|
[
[
"Tagliabue",
"Jacopo",
""
],
[
"Greco",
"Ciro",
""
],
[
"Bigon",
"Luca",
""
]
] |
The recently proposed Data Lakehouse architecture is built on open file formats, performance, and first-class support for data transformation, BI and data science: while the vision stresses the importance of lowering the barrier for data work, existing implementations often struggle to live up to user expectations. At Bauplan, we decided to build a new serverless platform to fulfill the Lakehouse vision. Since building from scratch is a challenge unfit for a startup, we started by re-using (sometimes unconventionally) existing projects, and then investing in improving the areas that would give us the highest marginal gains for the developer experience. In this work, we review user experience, high-level architecture and tooling decisions, and conclude by sharing plans for future development.
|
2005.04864
|
Xingyu Chen
|
Xingyu Chen and Zijie Liu
|
The Fairness of Leximin in Allocation of Indivisible Chores
| null | null | null | null |
cs.GT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The leximin solution -- which selects an allocation that maximizes the
minimum utility, then the second minimum utility, and so forth -- is known to
provide EFX (envy-free up to any good) fairness guarantee in some contexts when
allocating indivisible goods. However, it remains unknown how fair the leximin
solution is when used to allocate indivisible chores. In this paper, we
demonstrate that the leximin solution can be modified to also provide
compelling fairness guarantees for the allocation of indivisible chores. First,
we generalize the definition of the leximin solution. Then, we show that the
leximin solution finds a PROP1 (proportional up to one good) and PO
(Pareto-optimal) allocation for 3 or 4 agents in the context of chores
allocation with additive distinct valuations. Additionally, we prove that the
leximin solution is EFX for combinations of goods and chores for agents with
general but identical valuations.
|
[
{
"created": "Mon, 11 May 2020 05:15:43 GMT",
"version": "v1"
}
] |
2020-05-12
|
[
[
"Chen",
"Xingyu",
""
],
[
"Liu",
"Zijie",
""
]
] |
The leximin solution -- which selects an allocation that maximizes the minimum utility, then the second minimum utility, and so forth -- is known to provide EFX (envy-free up to any good) fairness guarantee in some contexts when allocating indivisible goods. However, it remains unknown how fair the leximin solution is when used to allocate indivisible chores. In this paper, we demonstrate that the leximin solution can be modified to also provide compelling fairness guarantees for the allocation of indivisible chores. First, we generalize the definition of the leximin solution. Then, we show that the leximin solution finds a PROP1 (proportional up to one good) and PO (Pareto-optimal) allocation for 3 or 4 agents in the context of chores allocation with additive distinct valuations. Additionally, we prove that the leximin solution is EFX for combinations of goods and chores for agents with general but identical valuations.
|
2204.02337
|
Vignesh Ram Somnath
|
Vignesh Ram Somnath, Charlotte Bunne, Andreas Krause
|
Multi-Scale Representation Learning on Proteins
|
Neural Information Processing Systems 2021
| null | null | null |
cs.LG cs.AI q-bio.BM
|
http://creativecommons.org/licenses/by/4.0/
|
Proteins are fundamental biological entities mediating key roles in cellular
function and disease. This paper introduces a multi-scale graph construction of
a protein -- HoloProt -- connecting surface to structure and sequence. The
surface captures coarser details of the protein, while sequence as primary
component and structure -- comprising secondary and tertiary components --
capture finer details. Our graph encoder then learns a multi-scale
representation by allowing each level to integrate the encoding from level(s)
below with the graph at that level. We test the learned representation on
different tasks, (i.) ligand binding affinity (regression), and (ii.) protein
function prediction (classification). On the regression task, contrary to
previous methods, our model performs consistently and reliably across different
dataset splits, outperforming all baselines on most splits. On the
classification task, it achieves a performance close to the top-performing
model while using 10x fewer parameters. To improve the memory efficiency of our
construction, we segment the multiplex protein surface manifold into molecular
superpixels and substitute the surface with these superpixels at little to no
performance loss.
|
[
{
"created": "Mon, 4 Apr 2022 08:29:17 GMT",
"version": "v1"
}
] |
2022-04-06
|
[
[
"Somnath",
"Vignesh Ram",
""
],
[
"Bunne",
"Charlotte",
""
],
[
"Krause",
"Andreas",
""
]
] |
Proteins are fundamental biological entities mediating key roles in cellular function and disease. This paper introduces a multi-scale graph construction of a protein -- HoloProt -- connecting surface to structure and sequence. The surface captures coarser details of the protein, while sequence as primary component and structure -- comprising secondary and tertiary components -- capture finer details. Our graph encoder then learns a multi-scale representation by allowing each level to integrate the encoding from level(s) below with the graph at that level. We test the learned representation on different tasks, (i.) ligand binding affinity (regression), and (ii.) protein function prediction (classification). On the regression task, contrary to previous methods, our model performs consistently and reliably across different dataset splits, outperforming all baselines on most splits. On the classification task, it achieves a performance close to the top-performing model while using 10x fewer parameters. To improve the memory efficiency of our construction, we segment the multiplex protein surface manifold into molecular superpixels and substitute the surface with these superpixels at little to no performance loss.
|
2311.15460
|
Anantaa Kotal
|
Anantaa Kotal, Lavanya Elluri, Deepti Gupta, Varun Mandalapu and
Anupam Joshi
|
Privacy-Preserving Data Sharing in Agriculture: Enforcing Policy Rules
for Secure and Confidential Data Synthesis
| null | null |
10.1109/BigData59044.2023.10386276
| null |
cs.CR cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Big Data empowers the farming community with the information needed to
optimize resource usage, increase productivity, and enhance the sustainability
of agricultural practices. The use of Big Data in farming requires the
collection and analysis of data from various sources such as sensors,
satellites, and farmer surveys. While Big Data can provide the farming
community with valuable insights and improve efficiency, there is significant
concern regarding the security of this data as well as the privacy of the
participants. Privacy regulations, such as the EU GDPR, the EU Code of Conduct
on agricultural data sharing by contractual agreement, and the proposed EU AI
law, have been created to address the issue of data privacy and provide
specific guidelines on when and how data can be shared between organizations.
To make confidential agricultural data widely available for Big Data analysis
without violating the privacy of the data subjects, we consider
privacy-preserving methods of data sharing in agriculture. Deep learning-based
synthetic data generation has been proposed for privacy-preserving data
sharing. However, there is a lack of compliance with documented data privacy
policies in such privacy-preserving efforts. In this study, we propose a novel
framework for enforcing privacy policy rules in privacy-preserving data
generation algorithms. We explore several available agricultural codes of
conduct, extract knowledge related to the privacy constraints in data, and use
the extracted knowledge to define privacy bounds in a privacy-preserving
generative model. We use our framework to generate synthetic agricultural data
and present experimental results that demonstrate the utility of the synthetic
dataset in downstream tasks. We also show that our framework can evade
potential threats and secure data based on applicable regulatory policy rules.
|
[
{
"created": "Mon, 27 Nov 2023 00:12:47 GMT",
"version": "v1"
}
] |
2024-01-29
|
[
[
"Kotal",
"Anantaa",
""
],
[
"Elluri",
"Lavanya",
""
],
[
"Gupta",
"Deepti",
""
],
[
"Mandalapu",
"Varun",
""
],
[
"Joshi",
"Anupam",
""
]
] |
Big Data empowers the farming community with the information needed to optimize resource usage, increase productivity, and enhance the sustainability of agricultural practices. The use of Big Data in farming requires the collection and analysis of data from various sources such as sensors, satellites, and farmer surveys. While Big Data can provide the farming community with valuable insights and improve efficiency, there is significant concern regarding the security of this data as well as the privacy of the participants. Privacy regulations, such as the EU GDPR, the EU Code of Conduct on agricultural data sharing by contractual agreement, and the proposed EU AI law, have been created to address the issue of data privacy and provide specific guidelines on when and how data can be shared between organizations. To make confidential agricultural data widely available for Big Data analysis without violating the privacy of the data subjects, we consider privacy-preserving methods of data sharing in agriculture. Deep learning-based synthetic data generation has been proposed for privacy-preserving data sharing. However, there is a lack of compliance with documented data privacy policies in such privacy-preserving efforts. In this study, we propose a novel framework for enforcing privacy policy rules in privacy-preserving data generation algorithms. We explore several available agricultural codes of conduct, extract knowledge related to the privacy constraints in data, and use the extracted knowledge to define privacy bounds in a privacy-preserving generative model. We use our framework to generate synthetic agricultural data and present experimental results that demonstrate the utility of the synthetic dataset in downstream tasks. We also show that our framework can evade potential threats and secure data based on applicable regulatory policy rules.
|
1611.02776
|
Daoyuan Jia
|
Daoyuan Jia, Yongchi Su, Chunping Li
|
Deep Convolutional Neural Network for 6-DOF Image Localization
|
will update soon
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
We present an accurate and robust method for six degree of freedom image
localization. There are two key points of our method: 1. automatic large-scale
photo synthesis and labeling from a point cloud model and, 2. pose estimation
with deep convolutional neural network regression. Our model directly
regresses 6-DOF camera poses from images, accurately describing where and how
each was captured. We achieved an accuracy within 1 meter and 1 degree on our
outdoor dataset, which covers about 2 acres on our school campus.
|
[
{
"created": "Tue, 8 Nov 2016 23:59:16 GMT",
"version": "v1"
}
] |
2016-11-10
|
[
[
"Jia",
"Daoyuan",
""
],
[
"Su",
"Yongchi",
""
],
[
"Li",
"Chunping",
""
]
] |
We present an accurate and robust method for six degree of freedom image localization. There are two key points of our method: 1. automatic large-scale photo synthesis and labeling from a point cloud model and, 2. pose estimation with deep convolutional neural network regression. Our model directly regresses 6-DOF camera poses from images, accurately describing where and how each was captured. We achieved an accuracy within 1 meter and 1 degree on our outdoor dataset, which covers about 2 acres on our school campus.
|
2102.10708
|
Mahdi Fahmideh
|
Mahdi Fahmideh, Aakash Ahmed, Ali Behnaz, John Grundy, Willy Susilo
|
Software Engineering for Internet of Things: The Practitioner's
Perspective
| null | null | null | null |
cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
Internet of Things based systems (IoT systems for short) are becoming
increasingly popular across different industrial domains and their development
is rapidly increasing to provide value-added services to end-users and
citizens. Little research to date uncovers the core development process
lifecycle needed for IoT systems, and thus software engineers find themselves
unprepared and unfamiliar with this new genre of system development. To
ameliorate this gap, we conducted a mixed quantitative and qualitative research
study in which we derived a conceptual process framework from the extant
literature on IoT that identifies 27 key tasks for incorporating into
development processes for IoT systems. The framework was then validated by
means of a survey of 127 IoT systems practitioners from 35 countries
across 6 continents with 15 different industry backgrounds. Our research
provides an understanding of the most important development process tasks and
informs both software engineering practitioners and researchers of the
challenges and recommendations related to the development of next generation of
IoT systems.
|
[
{
"created": "Sun, 21 Feb 2021 23:09:32 GMT",
"version": "v1"
},
{
"created": "Wed, 7 Apr 2021 23:42:45 GMT",
"version": "v2"
},
{
"created": "Wed, 5 May 2021 05:55:32 GMT",
"version": "v3"
}
] |
2021-05-06
|
[
[
"Fahmideh",
"Mahdi",
""
],
[
"Ahmed",
"Aakash",
""
],
[
"Behnaz",
"Ali",
""
],
[
"Grundy",
"John",
""
],
[
"Susilo",
"Willy",
""
]
] |
Internet of Things based systems (IoT systems for short) are becoming increasingly popular across different industrial domains and their development is rapidly increasing to provide value-added services to end-users and citizens. Little research to date uncovers the core development process lifecycle needed for IoT systems, and thus software engineers find themselves unprepared and unfamiliar with this new genre of system development. To ameliorate this gap, we conducted a mixed quantitative and qualitative research study where we derived a conceptual process framework from the extant literature on IoT, that identifies 27 key tasks for incorporating into development processes for IoT systems. The framework was then validated by means of a survey of 127 IoT systems practitioners from 35 countries across 6 continents with 15 different industry backgrounds. Our research provides an understanding of the most important development process tasks and informs both software engineering practitioners and researchers of the challenges and recommendations related to the development of next generation of IoT systems.
|
1607.01383
|
Karim Banawan
|
Karim Banawan, Sennur Ulukus
|
MIMO Wiretap Channel under Receiver Side Power Constraints with
Applications to Wireless Power Transfer and Cognitive Radio
|
Submitted to IEEE Transactions on Communications, September 2015.
Accepted for publication, July 2016
| null |
10.1109/TCOMM.2016.2593739
| null |
cs.IT cs.CR cs.NI math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We consider the multiple-input multiple-output (MIMO) wiretap channel under a
minimum receiver-side power constraint in addition to the usual maximum
transmitter-side power constraint. This problem is motivated by energy
harvesting communications with wireless energy transfer, where an added goal is
to deliver a minimum amount of energy to a receiver in addition to delivering
secure data to another receiver. In this paper, we characterize the exact
secrecy capacity of the MIMO wiretap channel under transmitter and
receiver-side power constraints. We first show that solving this problem is
equivalent to solving the secrecy capacity of the wiretap channel under a
double-sided correlation matrix constraint on the channel input. We show the
converse by extending the channel enhancement technique to our case. We present
two achievable schemes that achieve the secrecy capacity: the first achievable
scheme uses a Gaussian codebook with a fixed mean, and the second achievable
scheme uses artificial noise (or cooperative jamming) together with a Gaussian
codebook. The role of the mean or the artificial noise is to enable energy
transfer without sacrificing from the secure rate. This is the first instance
of a channel model where either the use of a mean signal or the use of channel
prefixing via artificial noise is strictly necessary for the MIMO wiretap
channel. We then extend our work to consider a maximum receiver-side power
constraint. This problem is motivated by cognitive radio applications, where an
added goal is to decrease the received signal energy (interference temperature)
at a receiver. We further extend our results to: requiring receiver-side power
constraints at both receivers; considering secrecy constraints at both
receivers to study broadcast channels with confidential messages; and removing
the secrecy constraints to study the classical broadcast channel.
|
[
{
"created": "Tue, 5 Jul 2016 19:49:39 GMT",
"version": "v1"
}
] |
2016-11-17
|
[
[
"Banawan",
"Karim",
""
],
[
"Ulukus",
"Sennur",
""
]
] |
We consider the multiple-input multiple-output (MIMO) wiretap channel under a minimum receiver-side power constraint in addition to the usual maximum transmitter-side power constraint. This problem is motivated by energy harvesting communications with wireless energy transfer, where an added goal is to deliver a minimum amount of energy to a receiver in addition to delivering secure data to another receiver. In this paper, we characterize the exact secrecy capacity of the MIMO wiretap channel under transmitter and receiver-side power constraints. We first show that solving this problem is equivalent to solving the secrecy capacity of the wiretap channel under a double-sided correlation matrix constraint on the channel input. We show the converse by extending the channel enhancement technique to our case. We present two achievable schemes that achieve the secrecy capacity: the first achievable scheme uses a Gaussian codebook with a fixed mean, and the second achievable scheme uses artificial noise (or cooperative jamming) together with a Gaussian codebook. The role of the mean or the artificial noise is to enable energy transfer without sacrificing the secure rate. This is the first instance of a channel model where either the use of a mean signal or the use of channel prefixing via artificial noise is strictly necessary for the MIMO wiretap channel. We then extend our work to consider a maximum receiver-side power constraint. This problem is motivated by cognitive radio applications, where an added goal is to decrease the received signal energy (interference temperature) at a receiver. We further extend our results to: requiring receiver-side power constraints at both receivers; considering secrecy constraints at both receivers to study broadcast channels with confidential messages; and removing the secrecy constraints to study the classical broadcast channel.
|
2204.00541
|
Chuhan Wu
|
Chuhan Wu, Fangzhao Wu, Tao Qi, Yongfeng Huang
|
FairRank: Fairness-aware Single-tower Ranking Framework for News
Recommendation
| null | null | null | null |
cs.IR
|
http://creativecommons.org/licenses/by/4.0/
|
Single-tower models are widely used in the ranking stage of news
recommendation to accurately rank candidate news according to their
fine-grained relatedness with user interest indicated by user behaviors.
However, these models can easily inherit the biases related to users' sensitive
attributes (e.g., demographics) encoded in training click data, and may
generate recommendation results that are unfair to users with certain
attributes. In this paper, we propose FairRank, which is a fairness-aware
single-tower ranking framework for news recommendation. Since candidate news
selection can be biased, we propose to use a shared candidate-aware user model
to match user interest with a real displayed candidate news and a random news,
respectively, to learn a candidate-aware user embedding that reflects user
interest in candidate news and a candidate-invariant user embedding that
indicates intrinsic user interest. We apply adversarial learning to both of
them to reduce the biases brought by sensitive user attributes. In addition, we
use a KL loss to regularize the attribute labels inferred from the two user
embeddings to be similar, which can make the model capture less candidate-aware
bias information. Extensive experiments on two datasets show that FairRank can
improve the fairness of various single-tower news ranking models with minor
performance losses.
|
[
{
"created": "Fri, 1 Apr 2022 16:07:31 GMT",
"version": "v1"
}
] |
2022-04-04
|
[
[
"Wu",
"Chuhan",
""
],
[
"Wu",
"Fangzhao",
""
],
[
"Qi",
"Tao",
""
],
[
"Huang",
"Yongfeng",
""
]
] |
Single-tower models are widely used in the ranking stage of news recommendation to accurately rank candidate news according to their fine-grained relatedness with user interest indicated by user behaviors. However, these models can easily inherit the biases related to users' sensitive attributes (e.g., demographics) encoded in training click data, and may generate recommendation results that are unfair to users with certain attributes. In this paper, we propose FairRank, which is a fairness-aware single-tower ranking framework for news recommendation. Since candidate news selection can be biased, we propose to use a shared candidate-aware user model to match user interest with a real displayed candidate news and a random news, respectively, to learn a candidate-aware user embedding that reflects user interest in candidate news and a candidate-invariant user embedding that indicates intrinsic user interest. We apply adversarial learning to both of them to reduce the biases brought by sensitive user attributes. In addition, we use a KL loss to regularize the attribute labels inferred from the two user embeddings to be similar, which can make the model capture less candidate-aware bias information. Extensive experiments on two datasets show that FairRank can improve the fairness of various single-tower news ranking models with minor performance losses.
|
2402.06044
|
Hainiu Xu
|
Hainiu Xu, Runcong Zhao, Lixing Zhu, Jinhua Du, Yulan He
|
OpenToM: A Comprehensive Benchmark for Evaluating Theory-of-Mind
Reasoning Capabilities of Large Language Models
|
ACL 2024
| null | null | null |
cs.AI cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Neural Theory-of-Mind (N-ToM), a machine's ability to understand and keep track
of the mental states of others, is pivotal in developing socially intelligent
agents. However, prevalent N-ToM benchmarks have several shortcomings,
including the presence of ambiguous and artificial narratives, absence of
personality traits and preferences, a lack of questions addressing characters'
psychological mental states, and limited diversity in the questions posed. In
response to these issues, we construct OpenToM, a new benchmark for assessing
N-ToM with (1) longer and clearer narrative stories, (2) characters with
explicit personality traits, (3) actions that are triggered by character
intentions, and (4) questions designed to challenge LLMs' capabilities of
modeling characters' mental states of both the physical and psychological
world. Using OpenToM, we reveal that state-of-the-art LLMs thrive at modeling
certain aspects of mental states in the physical world but fall short when
tracking characters' mental states in the psychological world.
|
[
{
"created": "Thu, 8 Feb 2024 20:35:06 GMT",
"version": "v1"
},
{
"created": "Wed, 14 Feb 2024 13:23:51 GMT",
"version": "v2"
},
{
"created": "Mon, 3 Jun 2024 10:48:16 GMT",
"version": "v3"
}
] |
2024-06-04
|
[
[
"Xu",
"Hainiu",
""
],
[
"Zhao",
"Runcong",
""
],
[
"Zhu",
"Lixing",
""
],
[
"Du",
"Jinhua",
""
],
[
"He",
"Yulan",
""
]
] |
Neural Theory-of-Mind (N-ToM), a machine's ability to understand and keep track of the mental states of others, is pivotal in developing socially intelligent agents. However, prevalent N-ToM benchmarks have several shortcomings, including the presence of ambiguous and artificial narratives, absence of personality traits and preferences, a lack of questions addressing characters' psychological mental states, and limited diversity in the questions posed. In response to these issues, we construct OpenToM, a new benchmark for assessing N-ToM with (1) longer and clearer narrative stories, (2) characters with explicit personality traits, (3) actions that are triggered by character intentions, and (4) questions designed to challenge LLMs' capabilities of modeling characters' mental states of both the physical and psychological world. Using OpenToM, we reveal that state-of-the-art LLMs thrive at modeling certain aspects of mental states in the physical world but fall short when tracking characters' mental states in the psychological world.
|
cs/0106008
|
M. H. van Emden
|
M.H. van Emden
|
Computing Functional and Relational Box Consistency by Structured
Propagation in Atomic Constraint Systems
|
Presented at the Sixth Annual Workshop of the ERCIM Working Group on
Constraints. 12 pages
| null | null |
Univ. of Victoria Computer Science Dept Technical Report DCS-266-IR
|
cs.PL cs.AI
| null |
Box consistency has been observed to yield exponentially better performance
than chaotic constraint propagation in the interval constraint system obtained
by decomposing the original expression into primitive constraints. The claim
was made that the improvement is due to avoiding decomposition. In this paper
we argue that the improvement is due to replacing chaotic iteration by a more
structured alternative.
To this end we distinguish the existing notion of box consistency from
relational box consistency. We show that from a computational point of view it
is important to maintain the functional structure in constraint systems that
are associated with a system of equations. So far, it has only been considered
computationally important that constraint propagation be fair. With the
additional structure of functional constraint systems, one can define and
implement computationally effective, structured, truncated constraint
propagations. The existing algorithm for box consistency is one such. Our
results suggest that there are others worth investigating.
|
[
{
"created": "Thu, 7 Jun 2001 14:50:40 GMT",
"version": "v1"
}
] |
2007-05-23
|
[
[
"van Emden",
"M. H.",
""
]
] |
Box consistency has been observed to yield exponentially better performance than chaotic constraint propagation in the interval constraint system obtained by decomposing the original expression into primitive constraints. The claim was made that the improvement is due to avoiding decomposition. In this paper we argue that the improvement is due to replacing chaotic iteration by a more structured alternative. To this end we distinguish the existing notion of box consistency from relational box consistency. We show that from a computational point of view it is important to maintain the functional structure in constraint systems that are associated with a system of equations. So far, it has only been considered computationally important that constraint propagation be fair. With the additional structure of functional constraint systems, one can define and implement computationally effective, structured, truncated constraint propagations. The existing algorithm for box consistency is one such. Our results suggest that there are others worth investigating.
|
2307.16045
|
Ana-Maria Bucur
|
Ana-Maria Bucur, Andreea Dinc\u{a}, M\u{a}d\u{a}lina Chitez and Roxana
Rogobete
|
Automatic Extraction of the Romanian Academic Word List: Data and
Methods
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
This paper presents the methodology and data used for the automatic
extraction of the Romanian Academic Word List (Ro-AWL). Academic Word Lists are
useful in both L2 and L1 teaching contexts. For the Romanian language, no such
resource exists so far. Ro-AWL has been generated by combining methods from
corpus and computational linguistics with L2 academic writing approaches. We
use two types of data: (a) existing data, such as the Romanian Frequency List
based on the ROMBAC corpus, and (b) self-compiled data, such as the expert
academic writing corpus EXPRES. For constructing the academic word list, we
follow the methodology for building the Academic Vocabulary List for the
English language. The distribution of Ro-AWL features (general distribution,
POS distribution) into four disciplinary datasets is in line with previous
research. Ro-AWL is freely available and can be used for teaching, research and
NLP applications.
|
[
{
"created": "Sat, 29 Jul 2023 18:21:38 GMT",
"version": "v1"
}
] |
2023-08-01
|
[
[
"Bucur",
"Ana-Maria",
""
],
[
"Dincă",
"Andreea",
""
],
[
"Chitez",
"Mădălina",
""
],
[
"Rogobete",
"Roxana",
""
]
] |
This paper presents the methodology and data used for the automatic extraction of the Romanian Academic Word List (Ro-AWL). Academic Word Lists are useful in both L2 and L1 teaching contexts. For the Romanian language, no such resource exists so far. Ro-AWL has been generated by combining methods from corpus and computational linguistics with L2 academic writing approaches. We use two types of data: (a) existing data, such as the Romanian Frequency List based on the ROMBAC corpus, and (b) self-compiled data, such as the expert academic writing corpus EXPRES. For constructing the academic word list, we follow the methodology for building the Academic Vocabulary List for the English language. The distribution of Ro-AWL features (general distribution, POS distribution) into four disciplinary datasets is in line with previous research. Ro-AWL is freely available and can be used for teaching, research and NLP applications.
|
1306.5473
|
Emilio Ferrara
|
Michael D. Conover, Clayton Davis, Emilio Ferrara, Karissa McKelvey,
Filippo Menczer, Alessandro Flammini
|
The Geospatial Characteristics of a Social Movement Communication
Network
|
Open access available at:
http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0064679
|
PLoS ONE 8(3):e55957 2013
|
10.1371/journal.pone.0055957
| null |
cs.CY cs.SI physics.data-an physics.soc-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Social movements rely in large measure on networked communication
technologies to organize and disseminate information relating to the movements'
objectives. In this work we seek to understand how the goals and needs of a
protest movement are reflected in the geographic patterns of its communication
network, and how these patterns differ from those of stable political
communication. To this end, we examine an online communication network
reconstructed from over 600,000 tweets from a thirty-six week period covering
the birth and maturation of the American anticapitalist movement, Occupy Wall
Street. We find that, compared to a network of stable domestic political
communication, the Occupy Wall Street network exhibits higher levels of
locality and a hub and spoke structure, in which the majority of non-local
attention is allocated to high-profile locations such as New York, California,
and Washington D.C. Moreover, we observe that information flows across state
boundaries are more likely to contain framing language and references to the
media, while communication among individuals in the same state is more likely
to reference protest action and specific places and times. Tying these
results to social movement theory, we propose that these features reflect the
movement's efforts to mobilize resources at the local level and to develop
narrative frames that reinforce collective purpose at the national level.
|
[
{
"created": "Sun, 23 Jun 2013 21:31:16 GMT",
"version": "v1"
}
] |
2013-06-25
|
[
[
"Conover",
"Michael D.",
""
],
[
"Davis",
"Clayton",
""
],
[
"Ferrara",
"Emilio",
""
],
[
"McKelvey",
"Karissa",
""
],
[
"Menczer",
"Filippo",
""
],
[
"Flammini",
"Alessandro",
""
]
] |
Social movements rely in large measure on networked communication technologies to organize and disseminate information relating to the movements' objectives. In this work we seek to understand how the goals and needs of a protest movement are reflected in the geographic patterns of its communication network, and how these patterns differ from those of stable political communication. To this end, we examine an online communication network reconstructed from over 600,000 tweets from a thirty-six week period covering the birth and maturation of the American anticapitalist movement, Occupy Wall Street. We find that, compared to a network of stable domestic political communication, the Occupy Wall Street network exhibits higher levels of locality and a hub and spoke structure, in which the majority of non-local attention is allocated to high-profile locations such as New York, California, and Washington D.C. Moreover, we observe that information flows across state boundaries are more likely to contain framing language and references to the media, while communication among individuals in the same state is more likely to reference protest action and specific places and times. Tying these results to social movement theory, we propose that these features reflect the movement's efforts to mobilize resources at the local level and to develop narrative frames that reinforce collective purpose at the national level.
|
1504.03363
|
Gabriel Fernando Pivaro
|
G. F. Pivaro, G. Fraindenraich
|
Outage Probability for Multi-Hop Full-Duplex Decode and Forward MIMO
Relay
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, a multi-hop (MH) decode-and-forward (DF) multiple-input
multiple-output (MIMO) relay network has been studied. To consider a more
realistic scenario, Full-Duplex (FD) operation with Relay Self-Interference
(RSI) is employed.
Assuming that the MIMO channels are subject to Rayleigh fading, a simple and
compact closed-form outage probability expression has been derived. The key
assumption to derive this result is that the mutual information of each channel
could be well approximated by a Gaussian random variable. To obtain the
resultant outage probability, a new and highly accurate approximation has been
derived for the sum of Wishart-distributed complex random matrices.
Numerical Monte Carlo simulations have been performed to validate our result.
These simulations show that, in the low and medium interference regimes, FD
mode performs better than Half-Duplex (HD) mode. On the other hand, as RSI
increases, HD mode can outperform FD mode.
|
[
{
"created": "Mon, 13 Apr 2015 21:01:15 GMT",
"version": "v1"
}
] |
2015-04-15
|
[
[
"Pivaro",
"G. F.",
""
],
[
"Fraindenraich",
"G.",
""
]
] |
In this paper, a multi-hop (MH) decode-and-forward (DF) multiple-input multiple-output (MIMO) relay network has been studied. To consider a more realistic scenario, Full-Duplex (FD) operation with Relay Self-Interference (RSI) is employed. Assuming that the MIMO channels are subject to Rayleigh fading, a simple and compact closed-form outage probability expression has been derived. The key assumption to derive this result is that the mutual information of each channel could be well approximated by a Gaussian random variable. To obtain the resultant outage probability, a new and highly accurate approximation has been derived for the sum of Wishart-distributed complex random matrices. Numerical Monte Carlo simulations have been performed to validate our result. These simulations show that, in the low and medium interference regimes, FD mode performs better than Half-Duplex (HD) mode. On the other hand, as RSI increases, HD mode can outperform FD mode.
|
1411.2404
|
Jelani Nelson
|
Kasper Green Larsen, Jelani Nelson
|
The Johnson-Lindenstrauss lemma is optimal for linear dimensionality
reduction
| null | null | null | null |
cs.IT cs.CG cs.DS math.FA math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
For any $n>1$ and $0<\varepsilon<1/2$, we show the existence of an
$n^{O(1)}$-point subset $X$ of $\mathbb{R}^n$ such that any linear map from
$(X,\ell_2)$ to $\ell_2^m$ with distortion at most $1+\varepsilon$ must have $m
= \Omega(\min\{n, \varepsilon^{-2}\log n\})$. Our lower bound matches the upper
bounds provided by the identity matrix and the Johnson-Lindenstrauss lemma,
improving the previous lower bound of Alon by a $\log(1/\varepsilon)$ factor.
|
[
{
"created": "Mon, 10 Nov 2014 12:53:41 GMT",
"version": "v1"
}
] |
2014-11-11
|
[
[
"Larsen",
"Kasper Green",
""
],
[
"Nelson",
"Jelani",
""
]
] |
For any $n>1$ and $0<\varepsilon<1/2$, we show the existence of an $n^{O(1)}$-point subset $X$ of $\mathbb{R}^n$ such that any linear map from $(X,\ell_2)$ to $\ell_2^m$ with distortion at most $1+\varepsilon$ must have $m = \Omega(\min\{n, \varepsilon^{-2}\log n\})$. Our lower bound matches the upper bounds provided by the identity matrix and the Johnson-Lindenstrauss lemma, improving the previous lower bound of Alon by a $\log(1/\varepsilon)$ factor.
|
2404.01196
|
Sondre Wold
|
Sondre Wold, Petter M{\ae}hlum, Oddbj{\o}rn Hove
|
Estimating Lexical Complexity from Document-Level Distributions
|
LREC-COLING 2024
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Existing methods for complexity estimation are typically developed for entire
documents. This limitation in scope makes them inapplicable for shorter pieces
of text, such as health assessment tools. These typically consist of lists of
independent sentences, all of which are too short for existing methods to
apply. The choice of wording in these assessment tools is crucial, as both the
cognitive capacity and the linguistic competency of the intended patient groups
could vary substantially. As a first step towards creating better tools for
supporting health practitioners, we develop a two-step approach for estimating
lexical complexity that does not rely on any pre-annotated data. We implement
our approach for the Norwegian language and verify its effectiveness using
statistical testing and a qualitative evaluation of samples from real
assessment tools. We also investigate the relationship between our complexity
measure and certain features typically associated with complexity in the
literature, such as word length, frequency, and the number of syllables.
|
[
{
"created": "Mon, 1 Apr 2024 15:55:18 GMT",
"version": "v1"
}
] |
2024-04-02
|
[
[
"Wold",
"Sondre",
""
],
[
"Mæhlum",
"Petter",
""
],
[
"Hove",
"Oddbjørn",
""
]
] |
Existing methods for complexity estimation are typically developed for entire documents. This limitation in scope makes them inapplicable for shorter pieces of text, such as health assessment tools. These typically consist of lists of independent sentences, all of which are too short for existing methods to apply. The choice of wording in these assessment tools is crucial, as both the cognitive capacity and the linguistic competency of the intended patient groups could vary substantially. As a first step towards creating better tools for supporting health practitioners, we develop a two-step approach for estimating lexical complexity that does not rely on any pre-annotated data. We implement our approach for the Norwegian language and verify its effectiveness using statistical testing and a qualitative evaluation of samples from real assessment tools. We also investigate the relationship between our complexity measure and certain features typically associated with complexity in the literature, such as word length, frequency, and the number of syllables.
|
cs/0605062
|
Al-Mukaddim Khan Pathan
|
Al-Mukaddim Khan Pathan and Md. Golam Shagadul Amin Talukder
|
QoSIP: A QoS Aware IP Routing Protocol for Multimedia Data
|
8th International Conference of Advanced Communication Technology
(ICACT 2006)
| null | null | null |
cs.NI
| null |
Conventional IP routing protocols are not suitable for multimedia
applications, which have very stringent Quality-of-Service (QoS) demands and
require a connection-oriented service. For multimedia applications, the router
is expected to forward each packet according to the packet's demands, and it is
necessary to find a path that satisfies the specific demands of a particular
application. To address these issues, in this paper we present a QoS-aware IP
routing protocol in which a router stores information about the QoS parameters
and routes packets accordingly.
Keywords: IP Routing Protocol, Quality of Service (QoS) parameter, QoSIP,
Selective Flooding.
|
[
{
"created": "Mon, 15 May 2006 10:39:40 GMT",
"version": "v1"
}
] |
2007-05-23
|
[
[
"Pathan",
"Al-Mukaddim Khan",
""
],
[
"Talukder",
"Md. Golam Shagadul Amin",
""
]
] |
Conventional IP routing protocols are not suitable for multimedia applications, which have very stringent Quality-of-Service (QoS) demands and require a connection-oriented service. For multimedia applications, the router is expected to forward each packet according to the packet's demands, and it is necessary to find a path that satisfies the specific demands of a particular application. To address these issues, in this paper we present a QoS-aware IP routing protocol in which a router stores information about the QoS parameters and routes packets accordingly. Keywords: IP Routing Protocol, Quality of Service (QoS) parameter, QoSIP, Selective Flooding.
|
1502.02519
|
Fabrizio Montesi
|
Fabrizio Montesi
|
Kickstarting Choreographic Programming
| null | null | null | null |
cs.PL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present an overview of some recent efforts aimed at the development of
Choreographic Programming, a programming paradigm for the production of
concurrent software that is guaranteed to be correct by construction from
global descriptions of communication behaviour.
|
[
{
"created": "Mon, 9 Feb 2015 15:20:03 GMT",
"version": "v1"
},
{
"created": "Tue, 10 Feb 2015 12:03:10 GMT",
"version": "v2"
}
] |
2015-02-11
|
[
[
"Montesi",
"Fabrizio",
""
]
] |
We present an overview of some recent efforts aimed at the development of Choreographic Programming, a programming paradigm for the production of concurrent software that is guaranteed to be correct by construction from global descriptions of communication behaviour.
|
2203.06766
|
Van Bang Le
|
Sun-Yuan Hsieh, Hoang-Oanh Le, Van Bang Le, Sheng-Lung Peng
|
On the $d$-Claw Vertex Deletion Problem
| null | null | null | null |
cs.DM cs.DS
|
http://creativecommons.org/licenses/by/4.0/
|
Let $d$-claw (or $d$-star) stand for $K_{1,d}$, the complete bipartite graph
with parts of sizes 1 and $d\ge 1$. The $d$-claw vertex deletion
problem, $d$-CLAW-VD, asks for a given graph $G$ and an integer $k$ if one can
delete at most $k$ vertices from $G$ such that the resulting graph has no
$d$-claw as an induced subgraph. Thus, 1-CLAW-VD and 2-CLAW-VD are just the
famous VERTEX COVER problem and the CLUSTER VERTEX DELETION problem,
respectively. In this paper, we strengthen a hardness result in [M. Yannakakis,
Node-Deletion Problems on Bipartite Graphs, SIAM J. Comput. (1981)], by showing
that CLUSTER VERTEX DELETION remains NP-complete when restricted to bipartite
graphs of maximum degree 3. Moreover, for every $d\ge 3$, we show that
$d$-CLAW-VD is NP-complete even when restricted to bipartite graphs of maximum
degree $d$. These hardness results are optimal with respect to degree
constraint. By extending the hardness result in [F. Bonomo-Braberman et al.,
Linear-Time Algorithms for Eliminating Claws in Graphs, COCOON 2020], we show
that, for every $d\ge 3$, $d$-CLAW-VD is NP-complete even when restricted to
split graphs without $(d+1)$-claws, and split graphs of diameter 2. On the
positive side, we prove that $d$-CLAW-VD is polynomially solvable on what we
call $d$-block graphs, a class that properly contains all block graphs. This result
extends the polynomial-time algorithm in [Y. Cao et al., Vertex deletion
problems on chordal graphs, Theor. Comput. Sci. (2018)] for 2-CLAW-VD on block
graphs to $d$-CLAW-VD for all $d\ge 2$ and improves the polynomial-time
algorithm proposed by F. Bonomo-Braberman et al. for (unweighted) 3-CLAW-VD on
block graphs to 3-block graphs.
|
[
{
"created": "Sun, 13 Mar 2022 21:36:48 GMT",
"version": "v1"
}
] |
2022-03-15
|
[
[
"Hsieh",
"Sun-Yuan",
""
],
[
"Le",
"Hoang-Oanh",
""
],
[
"Le",
"Van Bang",
""
],
[
"Peng",
"Sheng-Lung",
""
]
] |
Let $d$-claw (or $d$-star) stand for $K_{1,d}$, the complete bipartite graph with parts of sizes 1 and $d\ge 1$. The $d$-claw vertex deletion problem, $d$-CLAW-VD, asks for a given graph $G$ and an integer $k$ if one can delete at most $k$ vertices from $G$ such that the resulting graph has no $d$-claw as an induced subgraph. Thus, 1-CLAW-VD and 2-CLAW-VD are just the famous VERTEX COVER problem and the CLUSTER VERTEX DELETION problem, respectively. In this paper, we strengthen a hardness result in [M. Yannakakis, Node-Deletion Problems on Bipartite Graphs, SIAM J. Comput. (1981)], by showing that CLUSTER VERTEX DELETION remains NP-complete when restricted to bipartite graphs of maximum degree 3. Moreover, for every $d\ge 3$, we show that $d$-CLAW-VD is NP-complete even when restricted to bipartite graphs of maximum degree $d$. These hardness results are optimal with respect to degree constraint. By extending the hardness result in [F. Bonomo-Braberman et al., Linear-Time Algorithms for Eliminating Claws in Graphs, COCOON 2020], we show that, for every $d\ge 3$, $d$-CLAW-VD is NP-complete even when restricted to split graphs without $(d+1)$-claws, and split graphs of diameter 2. On the positive side, we prove that $d$-CLAW-VD is polynomially solvable on what we call $d$-block graphs, a class that properly contains all block graphs. This result extends the polynomial-time algorithm in [Y. Cao et al., Vertex deletion problems on chordal graphs, Theor. Comput. Sci. (2018)] for 2-CLAW-VD on block graphs to $d$-CLAW-VD for all $d\ge 2$ and improves the polynomial-time algorithm proposed by F. Bonomo-Braberman et al. for (unweighted) 3-CLAW-VD on block graphs to 3-block graphs.
|
2304.11241
|
Pierre Marza
|
Pierre Marza, Laetitia Matignon, Olivier Simonin, Dhruv Batra,
Christian Wolf, Devendra Singh Chaplot
|
AutoNeRF: Training Implicit Scene Representations with Autonomous Agents
| null | null | null | null |
cs.CV cs.LG cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Implicit representations such as Neural Radiance Fields (NeRF) have been
shown to be very effective at novel view synthesis. However, these models
typically require manual and careful human data collection for training. In
this paper, we present AutoNeRF, a method to collect data required to train
NeRFs using autonomous embodied agents. Our method allows an agent to explore
an unseen environment efficiently and use the experience to build an implicit
map representation autonomously. We compare the impact of different exploration
strategies including handcrafted frontier-based exploration, end-to-end and
modular approaches composed of trained high-level planners and classical
low-level path followers. We train these models with different reward functions
tailored to this problem and evaluate the quality of the learned
representations on four different downstream tasks: classical viewpoint
rendering, map reconstruction, planning, and pose refinement. Empirical results
show that NeRFs can be trained on actively collected data using just a single
episode of experience in an unseen environment, and can be used for several
downstream robotic tasks, and that modular trained exploration models
outperform other classical and end-to-end baselines. Finally, we show that
AutoNeRF can reconstruct large-scale scenes, and is thus a useful tool to
perform scene-specific adaptation as the produced 3D environment models can be
loaded into a simulator to fine-tune a policy of interest.
|
[
{
"created": "Fri, 21 Apr 2023 20:22:17 GMT",
"version": "v1"
},
{
"created": "Fri, 22 Dec 2023 13:55:53 GMT",
"version": "v2"
}
] |
2023-12-25
|
[
[
"Marza",
"Pierre",
""
],
[
"Matignon",
"Laetitia",
""
],
[
"Simonin",
"Olivier",
""
],
[
"Batra",
"Dhruv",
""
],
[
"Wolf",
"Christian",
""
],
[
"Chaplot",
"Devendra Singh",
""
]
] |
Implicit representations such as Neural Radiance Fields (NeRF) have been shown to be very effective at novel view synthesis. However, these models typically require manual and careful human data collection for training. In this paper, we present AutoNeRF, a method to collect data required to train NeRFs using autonomous embodied agents. Our method allows an agent to explore an unseen environment efficiently and use the experience to build an implicit map representation autonomously. We compare the impact of different exploration strategies including handcrafted frontier-based exploration, end-to-end and modular approaches composed of trained high-level planners and classical low-level path followers. We train these models with different reward functions tailored to this problem and evaluate the quality of the learned representations on four different downstream tasks: classical viewpoint rendering, map reconstruction, planning, and pose refinement. Empirical results show that NeRFs can be trained on actively collected data using just a single episode of experience in an unseen environment, and can be used for several downstream robotic tasks, and that modular trained exploration models outperform other classical and end-to-end baselines. Finally, we show that AutoNeRF can reconstruct large-scale scenes, and is thus a useful tool to perform scene-specific adaptation as the produced 3D environment models can be loaded into a simulator to fine-tune a policy of interest.
|
1302.6641
|
John Iacono
|
John Iacono
|
Why some heaps support constant-amortized-time decrease-key operations,
and others do not
| null | null | null | null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A lower bound is presented which shows that a class of heap algorithms in the
pointer model with only heap pointers must spend Omega(log log n / log log log
n) amortized time on the decrease-key operation (given O(log n) amortized-time
extract-min). Intuitively, this bound shows the key to having O(1)-time
decrease-key is the ability to sort O(log n) items in O(log n) time; Fibonacci
heaps [M.L. Fredman and R. E. Tarjan. J. ACM 34(3):596-615 (1987)] do this
through the use of bucket sort. Our lower bound also holds no matter how much
data is augmented; this is in contrast to the lower bound of Fredman [J. ACM
46(4):473-501 (1999)] who showed a tradeoff between the number of augmented
bits and the amortized cost of decrease-key. A new heap data structure, the
sort heap, is presented. This heap is a simplification of the heap of Elmasry
[SODA 2009: 471-476] and shares with it an O(log log n) amortized-time
decrease-key, but with a straightforward implementation such that our lower
bound holds. Thus a natural model is presented for a pointer-based heap such
that the amortized runtime of a self-adjusting structure and amortized lower
asymptotic bounds for decrease-key differ by only an O(log log log n) factor.
|
[
{
"created": "Wed, 27 Feb 2013 01:52:21 GMT",
"version": "v1"
},
{
"created": "Wed, 3 Apr 2013 22:12:24 GMT",
"version": "v2"
},
{
"created": "Tue, 16 Jul 2013 18:50:20 GMT",
"version": "v3"
}
] |
2013-07-17
|
[
[
"Iacono",
"John",
""
]
] |
A lower bound is presented which shows that a class of heap algorithms in the pointer model with only heap pointers must spend Omega(log log n / log log log n) amortized time on the decrease-key operation (given O(log n) amortized-time extract-min). Intuitively, this bound shows the key to having O(1)-time decrease-key is the ability to sort O(log n) items in O(log n) time; Fibonacci heaps [M.L. Fredman and R. E. Tarjan. J. ACM 34(3):596-615 (1987)] do this through the use of bucket sort. Our lower bound also holds no matter how much data is augmented; this is in contrast to the lower bound of Fredman [J. ACM 46(4):473-501 (1999)] who showed a tradeoff between the number of augmented bits and the amortized cost of decrease-key. A new heap data structure, the sort heap, is presented. This heap is a simplification of the heap of Elmasry [SODA 2009: 471-476] and shares with it an O(log log n) amortized-time decrease-key, but with a straightforward implementation such that our lower bound holds. Thus a natural model is presented for a pointer-based heap such that the amortized runtime of a self-adjusting structure and amortized lower asymptotic bounds for decrease-key differ by only an O(log log log n) factor.
|
2011.03667
|
Yunhao Yang
|
Yunhao Yang, Andrew Whinston
|
Identifying Mislabeled Images in Supervised Learning Utilizing
Autoencoder
|
UTCS Tech Report: Honors Thesis. 12 pages, 11 figures
|
Lecture Notes in Networks and Systems vol 359 (2021) 266-282
|
10.1007/978-3-030-89880-9_21
| null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Supervised learning is based on the assumption that the ground truth in the
training data is accurate. However, this may not be guaranteed in real-world
settings, and inaccurate training data will result in unexpected predictions.
In image classification, incorrect labels may cause the classification model
to be inaccurate as well. In this paper, we apply unsupervised techniques to
the training data before training the classification network. A convolutional
autoencoder is applied to encode and reconstruct images. The encoder projects
the image data onto a latent space in which image features are preserved in a
lower dimension. The assumption is that data samples with similar features
are likely to have the same label. Noisy samples can then be identified in
the latent space by the density-based spatial clustering of applications with
noise (DBSCAN) algorithm: incorrectly labeled data appear as outliers in the
latent space, so the outliers identified by DBSCAN can be classified as
incorrectly labeled samples. Once detected, all the outliers are treated as
mislabeled data samples and removed from the dataset, so that the remaining
training data can be used directly to train the supervised learning network.
The algorithm detects and removes more than 67% of the mislabeled data in the
experimental dataset.
|
[
{
"created": "Sat, 7 Nov 2020 03:09:34 GMT",
"version": "v1"
},
{
"created": "Mon, 18 Jan 2021 22:59:44 GMT",
"version": "v2"
}
] |
2022-01-06
|
[
[
"Yang",
"Yunhao",
""
],
[
"Whinston",
"Andrew",
""
]
] |
Supervised learning is based on the assumption that the ground truth in the training data is accurate. However, this may not be guaranteed in real-world settings, and inaccurate training data will result in unexpected predictions. In image classification, incorrect labels may cause the classification model to be inaccurate as well. In this paper, we apply unsupervised techniques to the training data before training the classification network. A convolutional autoencoder is applied to encode and reconstruct images. The encoder projects the image data onto a latent space in which image features are preserved in a lower dimension. The assumption is that data samples with similar features are likely to have the same label. Noisy samples can then be identified in the latent space by the density-based spatial clustering of applications with noise (DBSCAN) algorithm: incorrectly labeled data appear as outliers in the latent space, so the outliers identified by DBSCAN can be classified as incorrectly labeled samples. Once detected, all the outliers are treated as mislabeled data samples and removed from the dataset, so that the remaining training data can be used directly to train the supervised learning network. The algorithm detects and removes more than 67% of the mislabeled data in the experimental dataset.
|
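The outlier-removal step described in the abstract above (flagging low-density latent codes as mislabeled) can be sketched with DBSCAN's core noise criterion: a point with fewer than min_samples neighbors within radius eps is treated as noise. The following minimal, pure-Python sketch illustrates that simplified criterion only, not the paper's implementation or full DBSCAN; the 2-D "latent" points and parameter values are invented for the example.

```python
import math

def flag_noise(points, eps=1.0, min_samples=3):
    """Flag points that fail DBSCAN's core-point density test.

    A point is kept if at least `min_samples` points (itself included)
    lie within distance `eps`; otherwise it is flagged as noise.  This
    is only the noise criterion of DBSCAN, not the full clustering.
    """
    flags = []
    for p in points:
        neighbors = sum(1 for q in points if math.dist(p, q) <= eps)
        flags.append(neighbors < min_samples)
    return flags

# Two tight clusters of latent codes plus one isolated (mislabeled) sample.
latent = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1),
          (10.0, 10.0), (10.1, 10.0), (10.0, 10.1),
          (5.0, 5.0)]  # the isolated point
noise = flag_noise(latent, eps=1.0, min_samples=3)
cleaned = [p for p, is_noise in zip(latent, noise) if not is_noise]
```

Running flag_noise on the toy data flags only the isolated point at (5.0, 5.0); the two tight clusters survive the filter.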
1503.05187
|
Khaled Fawagreh
|
Khaled Fawagreh, Mohamad Medhat Gaber, Eyad Elyan
|
An Outlier Detection-based Tree Selection Approach to Extreme Pruning of
Random Forests
|
21 pages, 4 Figures. arXiv admin note: substantial text overlap with
arXiv:1503.04996
| null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Random Forest (RF) is an ensemble classification technique that was developed
by Breiman over a decade ago. Compared with other ensemble techniques, it has
proved its accuracy and superiority. Many researchers, however, believe that
there is still room for improving its predictive accuracy. This explains why,
over the past decade, there have been many extensions of RF, each employing a
variety of techniques and strategies to improve certain aspects of RF. Since
it has been proven empirically that ensembles tend to yield better results
when there is significant diversity among the constituent models, the
objective of this paper is twofold. First, it investigates how an
unsupervised learning technique, namely the Local Outlier Factor (LOF), can
be used to identify diverse trees in the RF. Second, trees with the highest
LOF scores are then used to produce an extension of RF termed LOFB-DRF that
is much smaller in size than RF, yet performs at least as well as RF and in
most cases exhibits higher accuracy; the latter procedure is a known
technique called ensemble pruning. Experimental results on 10 real datasets
demonstrate the superiority of our proposed extension over the traditional
RF. Unprecedented pruning levels reaching 99% have been achieved while
simultaneously boosting the predictive accuracy of the ensemble. The notably
high pruning level makes the technique a good candidate for real-time
applications.
|
[
{
"created": "Tue, 17 Mar 2015 11:05:31 GMT",
"version": "v1"
}
] |
2015-03-19
|
[
[
"Fawagreh",
"Khaled",
""
],
[
"Gaber",
"Mohamad Medhat",
""
],
[
"Elyan",
"Eyad",
""
]
] |
Random Forest (RF) is an ensemble classification technique that was developed by Breiman over a decade ago. Compared with other ensemble techniques, it has proved its accuracy and superiority. Many researchers, however, believe that there is still room for improving its predictive accuracy. This explains why, over the past decade, there have been many extensions of RF, each employing a variety of techniques and strategies to improve certain aspects of RF. Since it has been proven empirically that ensembles tend to yield better results when there is significant diversity among the constituent models, the objective of this paper is twofold. First, it investigates how an unsupervised learning technique, namely the Local Outlier Factor (LOF), can be used to identify diverse trees in the RF. Second, trees with the highest LOF scores are then used to produce an extension of RF termed LOFB-DRF that is much smaller in size than RF, yet performs at least as well as RF and in most cases exhibits higher accuracy; the latter procedure is a known technique called ensemble pruning. Experimental results on 10 real datasets demonstrate the superiority of our proposed extension over the traditional RF. Unprecedented pruning levels reaching 99% have been achieved while simultaneously boosting the predictive accuracy of the ensemble. The notably high pruning level makes the technique a good candidate for real-time applications.
|
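The Local Outlier Factor scores used above to rank trees can be computed from the standard definitions (k-distance, reachability distance, local reachability density). The sketch below is a generic, from-scratch LOF on invented 2-D points, not the paper's LOFB-DRF pipeline; in the paper, the "points" would be some vector representation of the individual trees.

```python
import math

def lof_scores(points, k=2):
    """Local Outlier Factor (Breunig et al.) for a list of points.

    LOF is close to 1 for points inside a cluster and much larger
    than 1 for outliers.  Assumes no duplicate points.
    """
    n = len(points)
    dists = [[math.dist(p, q) for q in points] for p in points]
    knn = []    # indices of the k nearest neighbors of each point
    kdist = []  # distance to the k-th nearest neighbor
    for i in range(n):
        order = sorted((j for j in range(n) if j != i),
                       key=lambda j: dists[i][j])
        knn.append(order[:k])
        kdist.append(dists[i][order[k - 1]])

    def reach_dist(i, j):
        # Reachability distance of point i from neighbor j.
        return max(kdist[j], dists[i][j])

    # Local reachability density: inverse of the mean reachability distance.
    lrd = [k / sum(reach_dist(i, j) for j in knn[i]) for i in range(n)]
    # LOF: mean ratio of neighbors' densities to the point's own density.
    return [sum(lrd[j] for j in knn[i]) / (k * lrd[i]) for i in range(n)]

pts = [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0), (1.0, 1.0), (8.0, 8.0)]
scores = lof_scores(pts, k=2)
```

Points inside the unit square cluster score exactly 1.0 here, while the isolated point at (8, 8) scores about 10, so ranking by LOF surfaces it first.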
1205.6423
|
Plawan Kumar Rath
|
Plawan Kumar Rath, G. N. Anil
|
Proposed Challenges And Areas of Concern in Operating System Research
and Development
|
5 pages; International Journal for Computer Science Issues(IJCSI),
Volume 9, Issue 2, March 2012
| null | null | null |
cs.OS
|
http://creativecommons.org/licenses/publicdomain/
|
Computers are a very important part of our lives, and a major reason for
their success is the excellent graphical operating systems that run on these
powerful machines. As computer hardware becomes more and more powerful, it is
also vital to keep the software updated in order to utilize the hardware
efficiently and make the system faster and smarter. This paper highlights
some core issues that, if addressed at the operating system level, would
exploit the full potential of the computer hardware and provide an excellent
user experience.
|
[
{
"created": "Tue, 29 May 2012 17:10:29 GMT",
"version": "v1"
}
] |
2012-05-30
|
[
[
"Rath",
"Plawan Kumar",
""
],
[
"Anil",
"G. N.",
""
]
] |
Computers are a very important part of our lives, and a major reason for their success is the excellent graphical operating systems that run on these powerful machines. As computer hardware becomes more and more powerful, it is also vital to keep the software updated in order to utilize the hardware efficiently and make the system faster and smarter. This paper highlights some core issues that, if addressed at the operating system level, would exploit the full potential of the computer hardware and provide an excellent user experience.
|
2104.14579
|
Matteo Zecchin
|
Matteo Zecchin, Mahdi Boloursaz Mashhadi, Mikolaj Jankowski, Deniz
Gunduz, Marios Kountouris, David Gesbert
|
LIDAR and Position-Aided mmWave Beam Selection with Non-local CNNs and
Curriculum Training
|
Submitted for publication
| null | null | null |
cs.IT cs.LG eess.SP math.IT
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Efficient millimeter wave (mmWave) beam selection in
vehicle-to-infrastructure (V2I) communication is a crucial yet challenging task
due to the narrow mmWave beamwidth and high user mobility. To reduce the search
overhead of iterative beam discovery procedures, contextual information from
light detection and ranging (LIDAR) sensors mounted on vehicles has been
leveraged by data-driven methods to produce useful side information. In this
paper, we propose a lightweight neural network (NN) architecture along with the
corresponding LIDAR preprocessing, which significantly outperforms previous
works. Our solution comprises multiple novelties that improve both the
convergence speed and the final accuracy of the model. In particular, we define
a novel loss function inspired by the knowledge distillation idea, introduce a
curriculum training approach exploiting line-of-sight (LOS)/non-line-of-sight
(NLOS) information, and propose a non-local attention module to improve the
performance for the more challenging NLOS cases. Simulation results on
benchmark datasets show that, utilizing solely LIDAR data and the receiver
position, our NN-based beam selection scheme can achieve 79.9% of the
throughput of an exhaustive beam sweeping approach without any beam search
overhead, and 95% by
searching among as few as 6 beams. In a typical mmWave V2I scenario, our
proposed method considerably reduces the beam search time required to achieve a
desired throughput, in comparison with the inverse fingerprinting and
hierarchical beam selection schemes.
|
[
{
"created": "Thu, 29 Apr 2021 18:07:31 GMT",
"version": "v1"
},
{
"created": "Mon, 3 May 2021 12:02:06 GMT",
"version": "v2"
},
{
"created": "Wed, 17 Nov 2021 10:16:51 GMT",
"version": "v3"
}
] |
2021-11-18
|
[
[
"Zecchin",
"Matteo",
""
],
[
"Mashhadi",
"Mahdi Boloursaz",
""
],
[
"Jankowski",
"Mikolaj",
""
],
[
"Gunduz",
"Deniz",
""
],
[
"Kountouris",
"Marios",
""
],
[
"Gesbert",
"David",
""
]
] |
Efficient millimeter wave (mmWave) beam selection in vehicle-to-infrastructure (V2I) communication is a crucial yet challenging task due to the narrow mmWave beamwidth and high user mobility. To reduce the search overhead of iterative beam discovery procedures, contextual information from light detection and ranging (LIDAR) sensors mounted on vehicles has been leveraged by data-driven methods to produce useful side information. In this paper, we propose a lightweight neural network (NN) architecture along with the corresponding LIDAR preprocessing, which significantly outperforms previous works. Our solution comprises multiple novelties that improve both the convergence speed and the final accuracy of the model. In particular, we define a novel loss function inspired by the knowledge distillation idea, introduce a curriculum training approach exploiting line-of-sight (LOS)/non-line-of-sight (NLOS) information, and propose a non-local attention module to improve the performance for the more challenging NLOS cases. Simulation results on benchmark datasets show that, utilizing solely LIDAR data and the receiver position, our NN-based beam selection scheme can achieve 79.9% of the throughput of an exhaustive beam sweeping approach without any beam search overhead, and 95% by searching among as few as 6 beams. In a typical mmWave V2I scenario, our proposed method considerably reduces the beam search time required to achieve a desired throughput, in comparison with the inverse fingerprinting and hierarchical beam selection schemes.
|
2212.01005
|
Zhiying Xu
|
Zhiying Xu, Hongding Peng, Wei Wang
|
AGO: Boosting Mobile AI Inference Performance by Removing Constraints on
Graph Optimization
| null | null | null | null |
cs.LG cs.CL cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Traditional deep learning compilers rely on heuristics for subgraph
generation, which impose extra constraints on graph optimization, e.g., each
subgraph can contain at most one complex operator. In this paper, we
propose AGO, a framework for graph optimization with arbitrary structures to
boost the inference performance of deep models by removing such constraints. To
create new optimization opportunities for complicated subgraphs, we propose
intensive operator fusion, which can effectively stitch multiple complex
operators together for better performance. Further, we design a graph
partitioning scheme that allows an arbitrary structure for each subgraph while
guaranteeing the acyclic property among all generated subgraphs. Additionally,
to enable efficient performance tuning on complicated subgraphs, we devise a
novel divide-and-conquer tuning mechanism to orchestrate different system
components. Through extensive experiments on various neural networks and mobile
devices, we show that our system can improve the inference performance by up to
3.3x when compared with state-of-the-art deep compilers.
|
[
{
"created": "Fri, 2 Dec 2022 07:16:49 GMT",
"version": "v1"
}
] |
2022-12-05
|
[
[
"Xu",
"Zhiying",
""
],
[
"Peng",
"Hongding",
""
],
[
"Wang",
"Wei",
""
]
] |
Traditional deep learning compilers rely on heuristics for subgraph generation, which impose extra constraints on graph optimization, e.g., each subgraph can contain at most one complex operator. In this paper, we propose AGO, a framework for graph optimization with arbitrary structures to boost the inference performance of deep models by removing such constraints. To create new optimization opportunities for complicated subgraphs, we propose intensive operator fusion, which can effectively stitch multiple complex operators together for better performance. Further, we design a graph partitioning scheme that allows an arbitrary structure for each subgraph while guaranteeing the acyclic property among all generated subgraphs. Additionally, to enable efficient performance tuning on complicated subgraphs, we devise a novel divide-and-conquer tuning mechanism to orchestrate different system components. Through extensive experiments on various neural networks and mobile devices, we show that our system can improve the inference performance by up to 3.3x when compared with state-of-the-art deep compilers.
|
2405.01159
|
Aleksei Dorkin
|
Aleksei Dorkin and Kairit Sirts
|
TartuNLP at EvaLatin 2024: Emotion Polarity Detection
|
Accepted to The Third Workshop on Language Technologies for
Historical and Ancient Languages (LT4HALA 2024)
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
This paper presents the TartuNLP team submission to the EvaLatin 2024 shared
task on emotion polarity detection for historical Latin texts. Our system
relies on two distinct approaches to annotating training data for supervised
learning: 1) creating heuristics-based labels by adopting the polarity
lexicon provided by the organizers and 2) generating labels with GPT-4. We
employed parameter-efficient fine-tuning using the adapters framework and
experimented with both monolingual and cross-lingual knowledge transfer for
training language and task adapters. Our submission with the LLM-generated
labels achieved first place overall in the emotion polarity detection task.
Our results show that LLM-based annotation is a promising approach for texts
in Latin.
|
[
{
"created": "Thu, 2 May 2024 10:28:52 GMT",
"version": "v1"
}
] |
2024-05-03
|
[
[
"Dorkin",
"Aleksei",
""
],
[
"Sirts",
"Kairit",
""
]
] |
This paper presents the TartuNLP team submission to the EvaLatin 2024 shared task on emotion polarity detection for historical Latin texts. Our system relies on two distinct approaches to annotating training data for supervised learning: 1) creating heuristics-based labels by adopting the polarity lexicon provided by the organizers and 2) generating labels with GPT-4. We employed parameter-efficient fine-tuning using the adapters framework and experimented with both monolingual and cross-lingual knowledge transfer for training language and task adapters. Our submission with the LLM-generated labels achieved first place overall in the emotion polarity detection task. Our results show that LLM-based annotation is a promising approach for texts in Latin.
|
1804.04438
|
Ari Morcos
|
Avraham Ruderman, Neil C. Rabinowitz, Ari S. Morcos, Daniel Zoran
|
Pooling is neither necessary nor sufficient for appropriate deformation
stability in CNNs
|
NIPS 2018 submission
| null | null | null |
cs.CV cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Many of our core assumptions about how neural networks operate remain
empirically untested. One common assumption is that convolutional neural
networks need to be stable to small translations and deformations to solve
image recognition tasks. For many years, this stability was baked into CNN
architectures by incorporating interleaved pooling layers. Recently, however,
interleaved pooling has largely been abandoned. This raises a number of
questions: Are our intuitions about deformation stability right at all? Is it
important? Is pooling necessary for deformation invariance? If not, how is
deformation invariance achieved in its absence? In this work, we rigorously
test these questions, and find that deformation stability in convolutional
networks is more nuanced than it first appears: (1) Deformation invariance is
not a binary property; rather, different tasks require different degrees of
deformation stability at different layers. (2) Deformation stability
is not a fixed property of a network and is heavily adjusted over the course of
training, largely through the smoothness of the convolutional filters. (3)
Interleaved pooling layers are neither necessary nor sufficient for achieving
the optimal form of deformation stability for natural image classification. (4)
Pooling confers too much deformation stability for image classification at
initialization, and during training, networks have to learn to counteract this
inductive bias. Together, these findings provide new insights into the role of
interleaved pooling and deformation invariance in CNNs, and demonstrate the
importance of rigorous empirical testing of even our most basic assumptions
about the working of neural networks.
|
[
{
"created": "Thu, 12 Apr 2018 11:44:05 GMT",
"version": "v1"
},
{
"created": "Fri, 25 May 2018 13:03:50 GMT",
"version": "v2"
}
] |
2018-05-28
|
[
[
"Ruderman",
"Avraham",
""
],
[
"Rabinowitz",
"Neil C.",
""
],
[
"Morcos",
"Ari S.",
""
],
[
"Zoran",
"Daniel",
""
]
] |
Many of our core assumptions about how neural networks operate remain empirically untested. One common assumption is that convolutional neural networks need to be stable to small translations and deformations to solve image recognition tasks. For many years, this stability was baked into CNN architectures by incorporating interleaved pooling layers. Recently, however, interleaved pooling has largely been abandoned. This raises a number of questions: Are our intuitions about deformation stability right at all? Is it important? Is pooling necessary for deformation invariance? If not, how is deformation invariance achieved in its absence? In this work, we rigorously test these questions, and find that deformation stability in convolutional networks is more nuanced than it first appears: (1) Deformation invariance is not a binary property; rather, different tasks require different degrees of deformation stability at different layers. (2) Deformation stability is not a fixed property of a network and is heavily adjusted over the course of training, largely through the smoothness of the convolutional filters. (3) Interleaved pooling layers are neither necessary nor sufficient for achieving the optimal form of deformation stability for natural image classification. (4) Pooling confers too much deformation stability for image classification at initialization, and during training, networks have to learn to counteract this inductive bias. Together, these findings provide new insights into the role of interleaved pooling and deformation invariance in CNNs, and demonstrate the importance of rigorous empirical testing of even our most basic assumptions about the working of neural networks.
|
1407.3193
|
Awais Mansoor
|
Awais Mansoor, Ulas Bagci, Daniel J. Mollura
|
Optimally Stabilized PET Image Denoising Using Trilateral Filtering
|
8 pages, 3 figures; to appear in the Lecture Notes in Computer
Science (MICCAI 2014)
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/publicdomain/
|
The low resolution and signal-dependent noise distribution of positron
emission tomography (PET) images make the denoising process an inevitable
step prior to qualitative and quantitative image analysis tasks. Conventional
PET denoising methods either over-smooth small-sized structures due to
resolution limitations or make incorrect assumptions about the noise
characteristics. Therefore, clinically important quantitative information may
be corrupted. To address these challenges, we introduce a novel approach to
remove signal-dependent noise in PET images, where the noise distribution is
modeled as a mixed Poisson-Gaussian distribution. The generalized Anscombe
transformation (GAT) is used to stabilize the varying nature of the PET
noise. Beyond noise stabilization, it is also desirable for the noise removal
filter to preserve the boundaries of structures while smoothing the noisy
regions. Indeed, it is important to avoid significant loss of quantitative
information such as standard uptake value (SUV)-based metrics as well as
metabolic lesion volume. To satisfy all these properties, we extend the
bilateral filtering method into trilateral filtering through multiscaling and
an optimal Gaussianization process. The proposed method was tested on more
than 50 PET-CT images from various patients having different cancers and
achieved superior performance compared to widely used denoising techniques in
the literature.
|
[
{
"created": "Fri, 11 Jul 2014 15:08:18 GMT",
"version": "v1"
}
] |
2014-07-14
|
[
[
"Mansoor",
"Awais",
""
],
[
"Bagci",
"Ulas",
""
],
[
"Mollura",
"Daniel J.",
""
]
] |
The low resolution and signal-dependent noise distribution of positron emission tomography (PET) images make the denoising process an inevitable step prior to qualitative and quantitative image analysis tasks. Conventional PET denoising methods either over-smooth small-sized structures due to resolution limitations or make incorrect assumptions about the noise characteristics. Therefore, clinically important quantitative information may be corrupted. To address these challenges, we introduce a novel approach to remove signal-dependent noise in PET images, where the noise distribution is modeled as a mixed Poisson-Gaussian distribution. The generalized Anscombe transformation (GAT) is used to stabilize the varying nature of the PET noise. Beyond noise stabilization, it is also desirable for the noise removal filter to preserve the boundaries of structures while smoothing the noisy regions. Indeed, it is important to avoid significant loss of quantitative information such as standard uptake value (SUV)-based metrics as well as metabolic lesion volume. To satisfy all these properties, we extend the bilateral filtering method into trilateral filtering through multiscaling and an optimal Gaussianization process. The proposed method was tested on more than 50 PET-CT images from various patients having different cancers and achieved superior performance compared to widely used denoising techniques in the literature.
|
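The variance-stabilization step above can be illustrated with the classical Anscombe transform for pure Poisson noise, f(x) = 2*sqrt(x + 3/8), which maps Poisson counts to approximately unit-variance data; the paper's generalized Anscombe transformation additionally folds in the Gaussian noise component (one common form is f(x) = (2/a)*sqrt(a*x + (3/8)*a^2 + sigma^2) for gain a and Gaussian standard deviation sigma). The sketch below shows only the Poisson-only transform and its simple algebraic inverse, as an illustration rather than the paper's pipeline.

```python
import math

def anscombe(x):
    """Classical Anscombe transform: Poisson counts -> approx. unit variance."""
    return 2.0 * math.sqrt(x + 3.0 / 8.0)

def inverse_anscombe(y):
    """Simple algebraic inverse of the Anscombe transform.

    This inverse is slightly biased at low counts; exact unbiased
    inverses exist in the literature and are preferred in practice.
    """
    return (y / 2.0) ** 2 - 3.0 / 8.0

counts = [0, 1, 5, 20, 100]
stabilized = [anscombe(c) for c in counts]   # filtering would happen here
roundtrip = [inverse_anscombe(v) for v in stabilized]
```

In a denoising pipeline, the filter runs between the forward and inverse transforms; here the round trip simply recovers the original counts.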
2306.10300
|
Subhashis Das
|
Subhashis Das, Debashis Naskar and Sayon Roy
|
Reorganizing Educational Institutional Domain using Faceted Ontological
Principles
|
26 pages, 12 figures, KNOWLEDGE ORGANIZATION Journal Paper
|
KO KNOWLEDGE ORGANIZATION, 49(1), 6-21 (2022)
|
10.5771/0943-7444-2022-1
| null |
cs.AI cs.DL
|
http://creativecommons.org/licenses/by/4.0/
|
The purpose of this work is to find out how different library classification
systems and linguistic ontologies arrange a particular domain of interest and
what the limitations are for information retrieval. We use knowledge
representation techniques and languages for the construction of a
domain-specific ontology. This ontology would not only help in problem
solving but also demonstrate the ease with which complex queries can be
handled using principles of domain ontology, thereby facilitating better
information retrieval.
|
[
{
"created": "Sat, 17 Jun 2023 09:06:07 GMT",
"version": "v1"
}
] |
2023-06-21
|
[
[
"Das",
"Subhashis",
""
],
[
"Naskar",
"Debashis",
""
],
[
"Roy",
"Sayon",
""
]
] |
The purpose of this work is to find out how different library classification systems and linguistic ontologies arrange a particular domain of interest and what the limitations are for information retrieval. We use knowledge representation techniques and languages for the construction of a domain-specific ontology. This ontology would not only help in problem solving but also demonstrate the ease with which complex queries can be handled using principles of domain ontology, thereby facilitating better information retrieval.
|
0901.3950
|
Moshe Mishali
|
Moshe Mishali, Yonina C. Eldar and Joel A. Tropp
|
Efficient Sampling of Sparse Wideband Analog Signals
|
13 pages, 5 figs, conference paper (see ref. below)
|
Proc. of IEEEI, 25th convention, pp. 290-294, Dec. 2008
| null |
CCIT Report #705, Oct. 2008, EE Dept., Technion Israel
|
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Periodic nonuniform sampling is a known method to sample spectrally sparse
signals below the Nyquist rate. This strategy relies on the implicit assumption
that the individual samplers are exposed to the entire frequency range. This
assumption becomes impractical for wideband sparse signals. The current paper
proposes an alternative sampling stage that does not require a full-band front
end. Instead, signals are captured with an analog front end that consists of a
bank of multipliers and lowpass filters whose cutoff is much lower than the
Nyquist rate. The problem of recovering the original signal from the low-rate
samples can be studied within the framework of compressive sampling. An
appropriate parameter selection ensures that the samples uniquely determine the
analog input. Moreover, the analog input can be stably reconstructed with
digital algorithms. Numerical experiments support the theoretical analysis.
|
[
{
"created": "Mon, 26 Jan 2009 07:00:52 GMT",
"version": "v1"
}
] |
2009-01-27
|
[
[
"Mishali",
"Moshe",
""
],
[
"Eldar",
"Yonina C.",
""
],
[
"Tropp",
"Joel A.",
""
]
] |
Periodic nonuniform sampling is a known method to sample spectrally sparse signals below the Nyquist rate. This strategy relies on the implicit assumption that the individual samplers are exposed to the entire frequency range. This assumption becomes impractical for wideband sparse signals. The current paper proposes an alternative sampling stage that does not require a full-band front end. Instead, signals are captured with an analog front end that consists of a bank of multipliers and lowpass filters whose cutoff is much lower than the Nyquist rate. The problem of recovering the original signal from the low-rate samples can be studied within the framework of compressive sampling. An appropriate parameter selection ensures that the samples uniquely determine the analog input. Moreover, the analog input can be stably reconstructed with digital algorithms. Numerical experiments support the theoretical analysis.
|
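The recovery step above ("the samples uniquely determine the analog input ... stably reconstructed with digital algorithms") reduces, after discretization, to solving y = Ax for a sparse x. The sketch below demonstrates this with orthogonal matching pursuit (OMP) on an invented random Gaussian measurement matrix; it is a generic compressive-sensing illustration, not the paper's multiplier-and-lowpass front end or its specific reconstruction algorithm.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily recover a k-sparse x from y = A x."""
    n = A.shape[1]
    residual = y.copy()
    support = []
    coef = np.zeros(0)
    for _ in range(k):
        # Pick the column most correlated with the current residual.
        idx = int(np.argmax(np.abs(A.T @ residual)))
        if idx not in support:
            support.append(idx)
        # Re-fit the coefficients on the chosen support, update the residual.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(n)
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(0)
m, n, k = 100, 200, 3                 # measurements, ambient dim, sparsity
A = rng.standard_normal((m, n))
A /= np.linalg.norm(A, axis=0)        # unit-norm columns
x = np.zeros(n)
x[[7, 91, 150]] = [1.0, -1.0, 1.0]    # sparse "spectrum" to recover
y = A @ x                             # low-rate measurements
x_hat = omp(A, y, k)
```

With far fewer measurements than the ambient dimension (100 vs. 200), OMP recovers the three active coefficients; once the support is found, the least-squares refit makes the reconstruction exact up to floating-point error.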
2109.07923
|
Peisen Yao
|
Peisen Yao and Jinguo Zhou and Xiao Xiao and Qingkai Shi and Rongxin
Wu and Charles Zhang
|
Efficient Path-Sensitive Data-Dependence Analysis
| null | null | null | null |
cs.PL cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
This paper presents a scalable path- and context-sensitive data-dependence
analysis. The key is to address the aliasing-path-explosion problem via a
sparse, demand-driven, and fused approach that piggybacks the computation of
pointer information with the resolution of data dependence. Specifically, our
approach decomposes the computational efforts of disjunctive reasoning into 1)
a context- and semi-path-sensitive analysis that concisely summarizes data
dependence as the symbolic and storeless value-flow graphs, and 2) a
demand-driven phase that resolves transitive data dependence over the graphs.
We have applied the approach to two clients, namely thin slicing and value flow
analysis. Using a suite of 16 programs ranging from 13 KLoC to 8 MLoC, we
compare our techniques against a diverse group of state-of-the-art analyses,
illustrating significant precision and scalability advantages of our approach.
|
[
{
"created": "Thu, 16 Sep 2021 12:17:05 GMT",
"version": "v1"
}
] |
2021-09-20
|
[
[
"Yao",
"Peisen",
""
],
[
"Zhou",
"Jinguo",
""
],
[
"Xiao",
"Xiao",
""
],
[
"Shi",
"Qingkai",
""
],
[
"Wu",
"Rongxin",
""
],
[
"Zhang",
"Charles",
""
]
] |
This paper presents a scalable path- and context-sensitive data-dependence analysis. The key is to address the aliasing-path-explosion problem via a sparse, demand-driven, and fused approach that piggybacks the computation of pointer information with the resolution of data dependence. Specifically, our approach decomposes the computational efforts of disjunctive reasoning into 1) a context- and semi-path-sensitive analysis that concisely summarizes data dependence as the symbolic and storeless value-flow graphs, and 2) a demand-driven phase that resolves transitive data dependence over the graphs. We have applied the approach to two clients, namely thin slicing and value flow analysis. Using a suite of 16 programs ranging from 13 KLoC to 8 MLoC, we compare our techniques against a diverse group of state-of-the-art analyses, illustrating significant precision and scalability advantages of our approach.
|
2110.14271
|
Sigal Oren
|
Sigal Oren and Oren Roth
|
Mechanisms for Trading Durable Goods
|
WINE'21
| null | null | null |
cs.GT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We consider trading indivisible and easily transferable \emph{durable goods},
which are goods that an agent can receive, use, and trade again for a different
good. This is often the case with books that can be read and later exchanged
for unread ones. Other examples of such easily transferable durable goods
include puzzles, video games and baby clothes.
We introduce a model for the exchange of easily transferable durable goods.
In our model, each agent owns a set of items and demands a different set of
items. An agent is interested in receiving as many items as possible from his
demand set. We consider mechanisms that exchange items in cycles in which each
participating agent receives an item that he demands and gives an item that he
owns. We aim to develop mechanisms that have the following properties: they are
\emph{efficient}, in the sense that they maximize the total number of items
that agents receive from their demand set, they are \emph{strategyproof} (i.e.,
it is in the agents' best interest to report their preferences truthfully) and
they run in \emph{polynomial time}.
One challenge in developing mechanisms for our setting is that the supply and
demand sets of the agents are updated after a trade cycle is executed. This
makes constructing strategyproof mechanisms in our model significantly
different from previous works, both technically and conceptually and requires
developing new tools and techniques. We prove that simultaneously satisfying
all desired properties is impossible and thus focus on studying the tradeoffs
between these properties. To this end, we provide both approximation algorithms
and impossibility results.
|
[
{
"created": "Wed, 27 Oct 2021 08:51:23 GMT",
"version": "v1"
}
] |
2021-10-28
|
[
[
"Oren",
"Sigal",
""
],
[
"Roth",
"Oren",
""
]
] |
We consider trading indivisible and easily transferable \emph{durable goods}, which are goods that an agent can receive, use, and trade again for a different good. This is often the case with books that can be read and later exchanged for unread ones. Other examples of such easily transferable durable goods include puzzles, video games and baby clothes. We introduce a model for the exchange of easily transferable durable goods. In our model, each agent owns a set of items and demands a different set of items. An agent is interested in receiving as many items as possible from his demand set. We consider mechanisms that exchange items in cycles in which each participating agent receives an item that he demands and gives an item that he owns. We aim to develop mechanisms that have the following properties: they are \emph{efficient}, in the sense that they maximize the total number of items that agents receive from their demand set, they are \emph{strategyproof} (i.e., it is in the agents' best interest to report their preferences truthfully) and they run in \emph{polynomial time}. One challenge in developing mechanisms for our setting is that the supply and demand sets of the agents are updated after a trade cycle is executed. This makes constructing strategyproof mechanisms in our model significantly different from previous works, both technically and conceptually and requires developing new tools and techniques. We prove that simultaneously satisfying all desired properties is impossible and thus focus on studying the tradeoffs between these properties. To this end, we provide both approximation algorithms and impossibility results.
|
2208.09815
|
Pengqian Yu
|
Xinhan Di, Pengqian Yu
|
LWA-HAND: Lightweight Attention Hand for Interacting Hand Reconstruction
|
Accepted by ECCV 2022 Computer Vision for Metaverse Workshop (16
pages, 6 figures, 1 table)
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent years have witnessed great success for hand reconstruction in
real-time applications such as virtual reality and augmented reality, while
the reconstruction of two interacting hands through efficient transformers
remains unexplored. In this paper, we propose a method called lightweight attention
hand (LWA-HAND) to reconstruct hands in low flops from a single RGB image. To
solve the occlusion and interaction problem in efficient attention
architectures, we propose three mobile attention modules in this paper. The
first module is a lightweight feature attention module that extracts both local
occlusion representation and global image patch representation in a
coarse-to-fine manner. The second module is a cross image and graph bridge
module which fuses image context and hand vertex. The third module is a
lightweight cross-attention mechanism that uses element-wise operation for the
cross-attention of two hands in linear complexity. The resulting model achieves
comparable performance on the InterHand2.6M benchmark in comparison with the
state-of-the-art models. Simultaneously, it reduces the flops to $0.47GFlops$
while the state-of-the-art models have heavy computations between $10GFlops$
and $20GFlops$.
|
[
{
"created": "Sun, 21 Aug 2022 06:25:56 GMT",
"version": "v1"
},
{
"created": "Tue, 23 Aug 2022 03:54:47 GMT",
"version": "v2"
},
{
"created": "Sat, 27 Aug 2022 13:06:34 GMT",
"version": "v3"
}
] |
2022-08-30
|
[
[
"Di",
"Xinhan",
""
],
[
"Yu",
"Pengqian",
""
]
] |
Recent years have witnessed great success for hand reconstruction in real-time applications such as virtual reality and augmented reality, while the reconstruction of two interacting hands through efficient transformers remains unexplored. In this paper, we propose a method called lightweight attention hand (LWA-HAND) to reconstruct hands in low flops from a single RGB image. To solve the occlusion and interaction problem in efficient attention architectures, we propose three mobile attention modules in this paper. The first module is a lightweight feature attention module that extracts both local occlusion representation and global image patch representation in a coarse-to-fine manner. The second module is a cross image and graph bridge module which fuses image context and hand vertex. The third module is a lightweight cross-attention mechanism that uses element-wise operation for the cross-attention of two hands in linear complexity. The resulting model achieves comparable performance on the InterHand2.6M benchmark in comparison with the state-of-the-art models. Simultaneously, it reduces the flops to $0.47GFlops$ while the state-of-the-art models have heavy computations between $10GFlops$ and $20GFlops$.
|
2101.10620
|
Liang Lin
|
Liang Lin and Yiming Gao and Ke Gong and Meng Wang and Xiaodan Liang
|
Graphonomy: Universal Image Parsing via Graph Reasoning and Transfer
|
To appear in IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE
INTELLIGENCE (T-PAMI) 2021. We propose a graph reasoning and transfer
learning framework, which incorporates human knowledge and label taxonomy
into the intermediate graph representation learning beyond local
convolutions. arXiv admin note: substantial text overlap with
arXiv:1904.04536
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Prior highly-tuned image parsing models are usually studied in a certain
domain with a specific set of semantic labels and can hardly be adapted into
other scenarios (e.g., sharing discrepant label granularity) without extensive
re-training. Learning a single universal parsing model by unifying label
annotations from different domains or at various levels of granularity is a
crucial but rarely addressed topic. This poses many fundamental learning
challenges, e.g., discovering underlying semantic structures among different
label granularity or mining label correlation across relevant tasks. To address
these challenges, we propose a graph reasoning and transfer learning framework,
named "Graphonomy", which incorporates human knowledge and label taxonomy into
the intermediate graph representation learning beyond local convolutions. In
particular, Graphonomy learns the global and structured semantic coherency in
multiple domains via semantic-aware graph reasoning and transfer, enforcing the
mutual benefits of the parsing across domains (e.g., different datasets or
co-related tasks). The Graphonomy includes two iterated modules: Intra-Graph
Reasoning and Inter-Graph Transfer modules. The former extracts the semantic
graph in each domain to improve the feature representation learning by
propagating information with the graph; the latter exploits the dependencies
among the graphs from different domains for bidirectional knowledge transfer.
We apply Graphonomy to two relevant but different image understanding research
topics: human parsing and panoptic segmentation, and show Graphonomy can handle
both of them well via a standard pipeline against current state-of-the-art
approaches. Moreover, some extra benefit of our framework is demonstrated,
e.g., generating the human parsing at various levels of granularity by unifying
annotations across different datasets.
|
[
{
"created": "Tue, 26 Jan 2021 08:19:03 GMT",
"version": "v1"
}
] |
2021-01-27
|
[
[
"Lin",
"Liang",
""
],
[
"Gao",
"Yiming",
""
],
[
"Gong",
"Ke",
""
],
[
"Wang",
"Meng",
""
],
[
"Liang",
"Xiaodan",
""
]
] |
Prior highly-tuned image parsing models are usually studied in a certain domain with a specific set of semantic labels and can hardly be adapted into other scenarios (e.g., sharing discrepant label granularity) without extensive re-training. Learning a single universal parsing model by unifying label annotations from different domains or at various levels of granularity is a crucial but rarely addressed topic. This poses many fundamental learning challenges, e.g., discovering underlying semantic structures among different label granularity or mining label correlation across relevant tasks. To address these challenges, we propose a graph reasoning and transfer learning framework, named "Graphonomy", which incorporates human knowledge and label taxonomy into the intermediate graph representation learning beyond local convolutions. In particular, Graphonomy learns the global and structured semantic coherency in multiple domains via semantic-aware graph reasoning and transfer, enforcing the mutual benefits of the parsing across domains (e.g., different datasets or co-related tasks). The Graphonomy includes two iterated modules: Intra-Graph Reasoning and Inter-Graph Transfer modules. The former extracts the semantic graph in each domain to improve the feature representation learning by propagating information with the graph; the latter exploits the dependencies among the graphs from different domains for bidirectional knowledge transfer. We apply Graphonomy to two relevant but different image understanding research topics: human parsing and panoptic segmentation, and show Graphonomy can handle both of them well via a standard pipeline against current state-of-the-art approaches. Moreover, some extra benefit of our framework is demonstrated, e.g., generating the human parsing at various levels of granularity by unifying annotations across different datasets.
|
0909.2622
|
Jiangyuan Li
|
Jiangyuan Li and Athina Petropulu
|
Transmitter Optimization for Achieving Secrecy Capacity in Gaussian MIMO
Wiretap Channels
|
29 pages, 10 figures
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We consider a Gaussian multiple-input multiple-output (MIMO) wiretap channel
model, where there exists a transmitter, a legitimate receiver and an
eavesdropper, each node equipped with multiple antennas. We study the problem
of finding the optimal input covariance matrix that achieves secrecy capacity
subject to a power constraint, which leads to a non-convex optimization problem
that is in general difficult to solve. Existing results for this problem
address the case in which the transmitter and the legitimate receiver have two
antennas each and the eavesdropper has one antenna. For the general cases, it
has been shown that the optimal input covariance matrix has low rank when the
difference between the Grams of the eavesdropper and the legitimate receiver
channel matrices is indefinite or semi-definite, while it may have low rank or
full rank when the difference is positive definite. In this paper, the
aforementioned non-convex optimization problem is investigated. In particular,
for the multiple-input single-output (MISO) wiretap channel, the optimal input
covariance matrix is obtained in closed form. For general cases, we derive the
necessary conditions for the optimal input covariance matrix consisting of a
set of equations. For the case in which the transmitter has two antennas, the
derived necessary conditions can result in a closed form solution; For the case
in which the difference between the Grams is indefinite and has all negative
eigenvalues except one positive eigenvalue, the optimal input covariance matrix
has rank one and can be obtained in closed form; For other cases, the solution
is proved to be a fixed point of a mapping from a convex set to itself and an
iterative procedure is provided to search for it. Numerical results are
presented to illustrate the proposed theoretical findings.
|
[
{
"created": "Mon, 14 Sep 2009 19:07:49 GMT",
"version": "v1"
}
] |
2009-09-15
|
[
[
"Li",
"Jiangyuan",
""
],
[
"Petropulu",
"Athina",
""
]
] |
We consider a Gaussian multiple-input multiple-output (MIMO) wiretap channel model, where there exists a transmitter, a legitimate receiver and an eavesdropper, each node equipped with multiple antennas. We study the problem of finding the optimal input covariance matrix that achieves secrecy capacity subject to a power constraint, which leads to a non-convex optimization problem that is in general difficult to solve. Existing results for this problem address the case in which the transmitter and the legitimate receiver have two antennas each and the eavesdropper has one antenna. For the general cases, it has been shown that the optimal input covariance matrix has low rank when the difference between the Grams of the eavesdropper and the legitimate receiver channel matrices is indefinite or semi-definite, while it may have low rank or full rank when the difference is positive definite. In this paper, the aforementioned non-convex optimization problem is investigated. In particular, for the multiple-input single-output (MISO) wiretap channel, the optimal input covariance matrix is obtained in closed form. For general cases, we derive the necessary conditions for the optimal input covariance matrix consisting of a set of equations. For the case in which the transmitter has two antennas, the derived necessary conditions can result in a closed form solution; For the case in which the difference between the Grams is indefinite and has all negative eigenvalues except one positive eigenvalue, the optimal input covariance matrix has rank one and can be obtained in closed form; For other cases, the solution is proved to be a fixed point of a mapping from a convex set to itself and an iterative procedure is provided to search for it. Numerical results are presented to illustrate the proposed theoretical findings.
|
2402.06331
|
Joanna Komorniczak
|
Joanna Komorniczak and Pawel Ksieniewicz
|
Taking Class Imbalance Into Account in Open Set Recognition Evaluation
| null | null | null | null |
cs.LG cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
In recent years, Deep Neural Network-based systems have not only increased
in popularity but also received growing user trust. However, due to the
closed-world assumption of such systems, they cannot recognize samples from
unknown classes and often induce an incorrect label with high confidence.
The presented work examines the evaluation of methods for Open Set
Recognition, focusing on the impact of class imbalance, especially in the
dichotomy between known and unknown samples. As an outcome of the problem
analysis, we present a set of guidelines for the evaluation of methods in
this field.
|
[
{
"created": "Fri, 9 Feb 2024 11:15:49 GMT",
"version": "v1"
}
] |
2024-02-12
|
[
[
"Komorniczak",
"Joanna",
""
],
[
"Ksieniewicz",
"Pawel",
""
]
] |
In recent years, Deep Neural Network-based systems have not only increased in popularity but also received growing user trust. However, due to the closed-world assumption of such systems, they cannot recognize samples from unknown classes and often induce an incorrect label with high confidence. The presented work examines the evaluation of methods for Open Set Recognition, focusing on the impact of class imbalance, especially in the dichotomy between known and unknown samples. As an outcome of the problem analysis, we present a set of guidelines for the evaluation of methods in this field.
|
2403.04384
|
Tomasz Winiarski
|
Tomasz Winiarski, Daniel Gie{\l}dowski, Jan Kaniuka, Jakub Ostrysz,
Jakub Sadowski
|
HeROS: a miniaturised platform for research and development on
Heterogeneous RObotic Systems
| null | null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Tests and prototyping are vital in the research and development of robotic
systems. Working with target hardware is problematic. Hence, in the article, a
low-cost, miniaturised physical platform is presented to deal with experiments
on heterogeneous robotic systems. The platform comprises a physical board with
tiles of the standardised base, diverse mobile robots, and manipulation robots.
A number of exemplary applications validate the usefulness of the solution.
|
[
{
"created": "Thu, 7 Mar 2024 10:23:39 GMT",
"version": "v1"
}
] |
2024-03-08
|
[
[
"Winiarski",
"Tomasz",
""
],
[
"Giełdowski",
"Daniel",
""
],
[
"Kaniuka",
"Jan",
""
],
[
"Ostrysz",
"Jakub",
""
],
[
"Sadowski",
"Jakub",
""
]
] |
Tests and prototyping are vital in the research and development of robotic systems. Working with target hardware is problematic. Hence, in the article, a low-cost, miniaturised physical platform is presented to deal with experiments on heterogeneous robotic systems. The platform comprises a physical board with tiles of the standardised base, diverse mobile robots, and manipulation robots. A number of exemplary applications validate the usefulness of the solution.
|
2403.04712
|
Harry Zhang Mr.
|
Sangli Teng, Harry Zhang, David Jin, Ashkan Jasour, Maani Ghaffari,
Luca Carlone
|
GMKF: Generalized Moment Kalman Filter for Polynomial Systems with
Arbitrary Noise
| null | null | null | null |
cs.RO cs.SY eess.SY
|
http://creativecommons.org/licenses/by/4.0/
|
This paper develops a new filtering approach for state estimation in
polynomial systems corrupted by arbitrary noise, which commonly arise in
robotics. We first consider a batch setup where we perform state estimation
using all data collected from the initial to the current time. We formulate the
batch state estimation problem as a Polynomial Optimization Problem (POP) and
relax the assumption of Gaussian noise by specifying a finite number of moments
of the noise. We solve the resulting POP using a moment relaxation and prove
that under suitable conditions on the rank of the relaxation, (i) we can
extract a provably optimal estimate from the moment relaxation, and (ii) we can
obtain a belief representation from the dual (sum-of-squares) relaxation. We
then turn our attention to the filtering setup and apply similar insights to
develop a GMKF for recursive state estimation in polynomial systems with
arbitrary noise. The GMKF formulates the prediction and update steps as POPs
and solves them using moment relaxations, carrying over a possibly non-Gaussian
belief. In the linear-Gaussian case, GMKF reduces to the standard Kalman
Filter. We demonstrate that GMKF performs well under highly non-Gaussian noise
and outperforms common alternatives, including the Extended and Unscented
Kalman Filter, and their variants on matrix Lie group.
|
[
{
"created": "Thu, 7 Mar 2024 18:07:41 GMT",
"version": "v1"
},
{
"created": "Fri, 8 Mar 2024 05:08:07 GMT",
"version": "v2"
}
] |
2024-03-11
|
[
[
"Teng",
"Sangli",
""
],
[
"Zhang",
"Harry",
""
],
[
"Jin",
"David",
""
],
[
"Jasour",
"Ashkan",
""
],
[
"Ghaffari",
"Maani",
""
],
[
"Carlone",
"Luca",
""
]
] |
This paper develops a new filtering approach for state estimation in polynomial systems corrupted by arbitrary noise, which commonly arise in robotics. We first consider a batch setup where we perform state estimation using all data collected from the initial to the current time. We formulate the batch state estimation problem as a Polynomial Optimization Problem (POP) and relax the assumption of Gaussian noise by specifying a finite number of moments of the noise. We solve the resulting POP using a moment relaxation and prove that under suitable conditions on the rank of the relaxation, (i) we can extract a provably optimal estimate from the moment relaxation, and (ii) we can obtain a belief representation from the dual (sum-of-squares) relaxation. We then turn our attention to the filtering setup and apply similar insights to develop a GMKF for recursive state estimation in polynomial systems with arbitrary noise. The GMKF formulates the prediction and update steps as POPs and solves them using moment relaxations, carrying over a possibly non-Gaussian belief. In the linear-Gaussian case, GMKF reduces to the standard Kalman Filter. We demonstrate that GMKF performs well under highly non-Gaussian noise and outperforms common alternatives, including the Extended and Unscented Kalman Filter, and their variants on matrix Lie group.
|
2008.03325
|
Nathaniel Grammel
|
Brian Brubach, Nathaniel Grammel, David G. Harris, Aravind Srinivasan,
Leonidas Tsepenekas, Anil Vullikanti
|
Stochastic Optimization and Learning for Two-Stage Supplier Problems
| null | null | null | null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The main focus of this paper is radius-based (supplier) clustering in the
two-stage stochastic setting with recourse, where the inherent stochasticity of
the model comes in the form of a budget constraint. In addition to the standard
(homogeneous) setting where all clients must be within a distance $R$ of the
nearest facility, we provide results for the more general problem where the
radius demands may be inhomogeneous (i.e., different for each client). We also
explore a number of variants where additional constraints are imposed on the
first-stage decisions, specifically matroid and multi-knapsack constraints, and
provide results for these settings.
We derive results for the most general distributional setting, where there is
only black-box access to the underlying distribution. To accomplish this, we
first develop algorithms for the polynomial scenarios setting; we then employ a
novel scenario-discarding variant of the standard Sample Average Approximation
(SAA) method, which crucially exploits properties of the restricted-case
algorithms. We note that the scenario-discarding modification to the SAA method
is necessary in order to optimize over the radius.
|
[
{
"created": "Fri, 7 Aug 2020 18:18:29 GMT",
"version": "v1"
},
{
"created": "Tue, 16 Feb 2021 16:50:30 GMT",
"version": "v2"
},
{
"created": "Thu, 6 May 2021 15:10:17 GMT",
"version": "v3"
},
{
"created": "Sun, 5 Jun 2022 23:35:10 GMT",
"version": "v4"
},
{
"created": "Wed, 17 Aug 2022 05:51:48 GMT",
"version": "v5"
},
{
"created": "Sat, 15 Jul 2023 16:02:59 GMT",
"version": "v6"
},
{
"created": "Sun, 7 Apr 2024 13:18:54 GMT",
"version": "v7"
}
] |
2024-04-09
|
[
[
"Brubach",
"Brian",
""
],
[
"Grammel",
"Nathaniel",
""
],
[
"Harris",
"David G.",
""
],
[
"Srinivasan",
"Aravind",
""
],
[
"Tsepenekas",
"Leonidas",
""
],
[
"Vullikanti",
"Anil",
""
]
] |
The main focus of this paper is radius-based (supplier) clustering in the two-stage stochastic setting with recourse, where the inherent stochasticity of the model comes in the form of a budget constraint. In addition to the standard (homogeneous) setting where all clients must be within a distance $R$ of the nearest facility, we provide results for the more general problem where the radius demands may be inhomogeneous (i.e., different for each client). We also explore a number of variants where additional constraints are imposed on the first-stage decisions, specifically matroid and multi-knapsack constraints, and provide results for these settings. We derive results for the most general distributional setting, where there is only black-box access to the underlying distribution. To accomplish this, we first develop algorithms for the polynomial scenarios setting; we then employ a novel scenario-discarding variant of the standard Sample Average Approximation (SAA) method, which crucially exploits properties of the restricted-case algorithms. We note that the scenario-discarding modification to the SAA method is necessary in order to optimize over the radius.
|