| id | submitter (⌀) | authors | title | comments (⌀) | journal-ref (⌀) | doi (⌀) | report-no (⌀) | categories | license | orig_abstract | versions | update_date | authors_parsed | abstract |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
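The header above gives the dataset's schema; ⌀ marks nullable fields. A minimal sketch of one record in that shape, using values copied from the first row below (the field names are from the header, with hyphens renamed to underscores so they are valid Python identifiers; abstracts are elided):

```python
# Sketch of the record layout implied by the header above.
# Hyphenated field names ("journal-ref", "report-no") are renamed with
# underscores; fields marked nullable in the header are Optional.
from typing import Optional, TypedDict

class ArxivRecord(TypedDict):
    id: str                        # arXiv identifier, e.g. "1906.10370"
    submitter: Optional[str]
    authors: str                   # raw author string
    title: str
    comments: Optional[str]
    journal_ref: Optional[str]     # "journal-ref" in the raw header
    doi: Optional[str]
    report_no: Optional[str]       # "report-no" in the raw header
    categories: str                # space-separated, e.g. "cs.CR cs.LG"
    license: str
    orig_abstract: str
    versions: list[dict]           # [{"created": ..., "version": "v1"}, ...]
    update_date: str               # "YYYY-MM-DD"
    authors_parsed: list[list[str]]
    abstract: str

record: ArxivRecord = {
    "id": "1906.10370",
    "submitter": "Miguel Sepulcre",
    "authors": "Rafael Molina-Masegosa, Miguel Sepulcre and Javier Gozalvez",
    "title": "Geo-Based Scheduling for C-V2X Networks",
    "comments": None,
    "journal_ref": "IEEE Transactions on Vehicular Technology, Volume 68, "
                   "Issue 9, pp. 8397-8407, September 2019",
    "doi": "10.1109/TVT.2019.2924698",
    "report_no": None,
    "categories": "cs.NI",
    "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/",
    "orig_abstract": "...",
    "versions": [{"created": "Tue, 25 Jun 2019 08:12:28 GMT", "version": "v1"}],
    "update_date": "2019-09-20",
    "authors_parsed": [["Molina-Masegosa", "Rafael", ""],
                       ["Sepulcre", "Miguel", ""],
                       ["Gozalvez", "Javier", ""]],
    "abstract": "...",
}

# The categories field is a space-separated multi-label string:
print(record["categories"].split())  # ['cs.NI']
```

Note that `TypedDict` is purely a static annotation here; it documents the schema without enforcing it at runtime.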
1906.10370
|
Miguel Sepulcre
|
Rafael Molina-Masegosa, Miguel Sepulcre and Javier Gozalvez
|
Geo-Based Scheduling for C-V2X Networks
| null |
IEEE Transactions on Vehicular Technology, Volume 68, Issue 9, pp.
8397 - 8407, September 2019
|
10.1109/TVT.2019.2924698
| null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Cellular Vehicle-to-Everything (C-V2X) networks can operate without cellular
infrastructure support. Vehicles can autonomously select their radio resources
using the sensing-based Semi-Persistent Scheduling (SPS) algorithm specified by
the Third Generation Partnership Project (3GPP). The sensing nature of the SPS
scheme makes C-V2X communications prone to the well-known hidden-terminal
problem. To address this problem, this paper proposes a novel geo-based
scheduling scheme that allows vehicles to autonomously select their radio
resources based on the location and ordering of neighboring vehicles on the
road. The proposed scheme results in an implicit resource selection
coordination between vehicles (even with those outside the sensing range) that
reduces packet collisions. This paper evaluates the proposed scheduling scheme
analytically and through simulations. The results demonstrate that it reduces
packet collisions and significantly improves C-V2X performance compared with
the sensing-based SPS scheme.
|
[
{
"created": "Tue, 25 Jun 2019 08:12:28 GMT",
"version": "v1"
}
] |
2019-09-20
|
[
[
"Molina-Masegosa",
"Rafael",
""
],
[
"Sepulcre",
"Miguel",
""
],
[
"Gozalvez",
"Javier",
""
]
] |
|
2312.07340
|
Yusen Feng
|
Yusen Feng, Xiyan Xu, Libin Liu
|
MuscleVAE: Model-Based Controllers of Muscle-Actuated Characters
| null | null | null | null |
cs.GR
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we present a simulation and control framework for generating
biomechanically plausible motion for muscle-actuated characters. We incorporate
a fatigue dynamics model, the 3CC-r model, into the widely-adopted Hill-type
muscle model to simulate the development and recovery of fatigue in muscles,
which creates a natural evolution of motion style caused by the accumulation of
fatigue from prolonged activities. To address the challenging problem of
controlling a musculoskeletal system with high degrees of freedom, we propose a
novel muscle-space control strategy based on PD control. Our simulation and
control framework facilitates the training of a generative model for
muscle-based motion control, which we refer to as MuscleVAE. By leveraging
variational autoencoders (VAEs), MuscleVAE is capable of learning a rich and
flexible latent representation of skills from a large unstructured motion
dataset, encoding not only motion features but also muscle control and fatigue
properties. We demonstrate that the MuscleVAE model can be efficiently trained
using a model-based approach, resulting in the production of high-fidelity
motions and enabling a variety of downstream tasks.
|
[
{
"created": "Tue, 12 Dec 2023 15:01:17 GMT",
"version": "v1"
}
] |
2023-12-13
|
[
[
"Feng",
"Yusen",
""
],
[
"Xu",
"Xiyan",
""
],
[
"Liu",
"Libin",
""
]
] |
|
2211.08292
|
Jennifer Andreoli-Fang
|
Jennifer Andreoli-Fang, John T Chapman
|
Mobile-Aware Scheduling for Low Latency Backhaul over DOCSIS
|
IEEE International Symposium on Personal, Indoor and Mobile Radio
Communications (PIMRC), 2017
| null |
10.1109/PIMRC.2017.8292173
| null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we discuss latency reduction techniques for mobile backhaul
over Data Over Cable Service Interface Specifications (DOCSIS) networks. When
the latencies from both the wireless and the DOCSIS networks are added
together, it can result in noticeable end-to-end system latency, particularly
under network congestion. Previously, we proposed a method to improve upstream
user-to-mobile core latency by coordinating the LTE and DOCSIS scheduling. The
method reduces the impact on system latency from the DOCSIS network's
request-grant-data loop, which is the main contributor of backhaul upstream
latency. Because the method reduces latency on the DOCSIS data path, it
improves the performance of latency-sensitive applications, particularly those
using TCP as the transport protocol on congested links. In this paper, we
investigate the effect of HARQ failure on system
performance. Through simulation, we show that despite the uncertainty
introduced by the LTE protocol, coordinated scheduling improves overall system
latency.
|
[
{
"created": "Tue, 15 Nov 2022 16:45:26 GMT",
"version": "v1"
},
{
"created": "Wed, 16 Nov 2022 16:38:44 GMT",
"version": "v2"
}
] |
2022-11-17
|
[
[
"Andreoli-Fang",
"Jennifer",
""
],
[
"Chapman",
"John T",
""
]
] |
|
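The versions field in the records above stores one entry per revision, with an RFC 2822 "created" timestamp. A small sketch, using the two-version list from this record, of picking the latest revision with the standard library:

```python
# Parse the RFC 2822 "created" timestamps in a versions list and pick
# the most recent revision. Data copied from the record above.
from email.utils import parsedate_to_datetime

versions = [
    {"created": "Tue, 15 Nov 2022 16:45:26 GMT", "version": "v1"},
    {"created": "Wed, 16 Nov 2022 16:38:44 GMT", "version": "v2"},
]

latest = max(versions, key=lambda v: parsedate_to_datetime(v["created"]))
print(latest["version"], parsedate_to_datetime(latest["created"]).date())
# v2 2022-11-16
```

`email.utils.parsedate_to_datetime` handles this "Day, DD Mon YYYY HH:MM:SS GMT" format directly, so no hand-written `strptime` pattern is needed.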
2103.14856
|
Ciriaco Andrea D'Angelo
|
Giovanni Abramo, Ciriaco Andrea D'Angelo, Lin Zhang
|
A comparison of two approaches for measuring interdisciplinary research
output: the disciplinary diversity of authors vs the disciplinary diversity
of the reference list
| null |
Journal of Informetrics, 12(4), 2018, 1182-1193
|
10.1016/j.joi.2018.09.001
| null |
cs.DL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This study investigates the convergence of two bibliometric approaches to the
measurement of interdisciplinary research: one based on analyzing disciplinary
diversity in the reference list of publications, the other based on the
disciplinary diversity of authors of publications. In particular we measure the
variety, balance, disparity and integrated diversity index of, respectively,
single-author, multi-author single-field, and multi-author multi-field
publications. We find that, in general, the diversity of the reference list
grows with the number of fields reflected in a paper's authors' list and, to a
lesser extent, with the number of authors when the number of fields is held
equal.
Further, we find that when fields belonging to different disciplines are
reflected in the authors' list, the disparity in the reference list is higher
than in the case of fields belonging to the same discipline. However, this
general tendency varies across disciplines, and noticeable exceptions are found
at individual paper level.
|
[
{
"created": "Sat, 27 Mar 2021 09:34:53 GMT",
"version": "v1"
}
] |
2021-03-30
|
[
[
"Abramo",
"Giovanni",
""
],
[
"D'Angelo",
"Ciriaco Andrea",
""
],
[
"Zhang",
"Lin",
""
]
] |
|
1910.12435
|
Marcel Keller
|
Anders Dalskov and Daniel Escudero and Marcel Keller
|
Secure Evaluation of Quantized Neural Networks
|
22 pages
|
Proceedings on Privacy Enhancing Technologies 4 (2020): 355-375
|
10.2478/popets-2020-0077
| null |
cs.CR cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
We investigate two questions in this paper: First, we ask to what extent "MPC
friendly" models are already supported by major Machine Learning frameworks
such as TensorFlow or PyTorch. Prior works provide protocols that only work on
fixed-point integers and specialized activation functions, two aspects that are
not supported by popular Machine Learning frameworks, and the need for these
specialized model representations means that it is hard, and often impossible,
to use e.g., TensorFlow to design, train and test models that later have to be
evaluated securely. Second, we ask to what extent the functionality for
evaluating Neural Networks already exists in general-purpose MPC frameworks.
These frameworks have received more scrutiny, are better documented and
supported on more platforms. Furthermore, they are typically flexible in terms
of the threat model they support. In contrast, most secure evaluation protocols
in the literature are targeted to a specific threat model and their
implementations are only a "proof-of-concept", making it very hard for their
adoption in practice. We answer both of the above questions in a positive way:
We observe that the quantization techniques supported by TensorFlow, PyTorch,
and MXNet can provide models in a representation that can be evaluated
securely; and moreover, that this evaluation can be performed by a general
purpose MPC framework. We perform extensive benchmarks to understand the exact
trade-offs between different corruption models, network sizes and efficiency.
These experiments provide an interesting insight into the cost of active
versus passive security, and of honest versus dishonest majority. Our work thus
shows
that the separating line between existing ML frameworks and existing MPC
protocols may be narrower than implicitly suggested by previous works.
|
[
{
"created": "Mon, 28 Oct 2019 04:17:33 GMT",
"version": "v1"
},
{
"created": "Mon, 1 Mar 2021 04:22:07 GMT",
"version": "v2"
}
] |
2021-03-02
|
[
[
"Dalskov",
"Anders",
""
],
[
"Escudero",
"Daniel",
""
],
[
"Keller",
"Marcel",
""
]
] |
|
2407.06855
|
Arnab Sharma
|
Sourabh Kapoor, Arnab Sharma, Michael Röder, Caglar Demir,
Axel-Cyrille Ngonga Ngomo
|
Performance Evaluation of Knowledge Graph Embedding Approaches under
Non-adversarial Attacks
| null | null | null | null |
cs.LG cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Knowledge Graph Embedding (KGE) transforms a discrete Knowledge Graph (KG)
into a continuous vector space facilitating its use in various AI-driven
applications like Semantic Search, Question Answering, or Recommenders. While
KGE approaches are effective in these applications, most existing approaches
assume that all information in the given KG is correct. This enables attackers
to influence the output of these approaches, e.g., by perturbing the input.
Consequently, the robustness of such KGE approaches has to be addressed. Recent
work focused on adversarial attacks. However, non-adversarial attacks on all
attack surfaces of these approaches have not been thoroughly examined. We close
this gap by evaluating the impact of non-adversarial attacks on the performance
of 5 state-of-the-art KGE algorithms on 5 datasets with respect to attacks on 3
attack surfaces: graph, parameter, and label perturbation. Our evaluation
results suggest that label perturbation has a strong effect on KGE
performance, followed by parameter perturbation with a moderate effect and
graph perturbation with a low effect.
|
[
{
"created": "Tue, 9 Jul 2024 13:42:14 GMT",
"version": "v1"
}
] |
2024-07-10
|
[
[
"Kapoor",
"Sourabh",
""
],
[
"Sharma",
"Arnab",
""
],
[
"Röder",
"Michael",
""
],
[
"Demir",
"Caglar",
""
],
[
"Ngomo",
"Axel-Cyrille Ngonga",
""
]
] |
|
1211.1861
|
Mohamed Firdhous
|
Mohamed Firdhous
|
Automating Legal Research through Data Mining
|
8 pages, 11 figures, published in (IJACSA) International Journal of
Advanced Computer Science and Applications. arXiv admin note: text overlap
with wikipedia entry on text mining
|
International Journal of Advanced Computer Science and
Applications (IJACSA) Vol. 1, No 6, December 2010, pp. 9-16
| null | null |
cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The term legal research generally refers to the process of identifying and
retrieving appropriate information necessary to support legal decision making
from past case records. At present, the process is mostly manual, but some
traditional technologies such as keyword searching are commonly used to speed
the process up. A keyword search, however, is not comprehensive enough to meet
the requirements of legal research, as its results include too many false hits
in the form of irrelevant case records. Hence the present generic tools cannot
be used to automate legal research.
This paper presents a framework which was developed by combining several Text
Mining techniques to automate the process overcoming the difficulties in the
existing methods. Further, the research also identifies the possible
enhancements that could be done to enhance the effectiveness of the framework.
|
[
{
"created": "Thu, 8 Nov 2012 14:19:49 GMT",
"version": "v1"
}
] |
2012-11-09
|
[
[
"Firdhous",
"Mohamed",
""
]
] |
|
1602.00810
|
Jean-Guillaume Dumas
|
Jean-Guillaume Dumas (LJK), Erich Kaltofen (NCSU), Emmanuel Thomé
(CARAMBA), Gilles Villard (ARIC, LIP)
|
Linear Time Interactive Certificates for the Minimal Polynomial and the
Determinant of a Sparse Matrix
| null |
International Symposium on Symbolic and Algebraic Computation, Jul
2016, Waterloo, Canada. pp. 199-206, ⟨10.1145/2930889.2930908⟩
| null | null |
cs.SC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Computational problem certificates are additional data structures for each
output, which can be used by a (possibly randomized) verification algorithm
that proves the correctness of each output. In this paper, we give an algorithm
that computes a certificate for the minimal polynomial of sparse or structured
n x n
matrices over an abstract field, of sufficiently large cardinality, whose Monte
Carlo verification complexity requires a single matrix-vector multiplication
and a linear number of extra field operations. We also propose a novel
preconditioner that ensures irreducibility of the characteristic polynomial of
the generically preconditioned matrix. This preconditioner takes linear time to
be applied and uses only two random entries. We then combine these two
techniques to give algorithms that compute certificates for the determinant,
and thus for the characteristic polynomial, whose Monte Carlo verification
complexity is therefore also linear.
|
[
{
"created": "Tue, 2 Feb 2016 07:29:28 GMT",
"version": "v1"
},
{
"created": "Mon, 2 Dec 2019 13:02:25 GMT",
"version": "v2"
}
] |
2019-12-03
|
[
[
"Dumas",
"Jean-Guillaume",
"",
"LJK"
],
[
"Kaltofen",
"Erich",
"",
"NCSU"
],
[
"Thomé",
"Emmanuel",
"",
"CARAMBA"
],
[
"Villard",
"Gilles",
"",
"ARIC, LIP"
]
] |
|
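The authors_parsed entries in this record carry a fourth element for the affiliation, while earlier records use plain [family, given, suffix] triples. A small sketch, on the assumption that this [family, given, suffix, affiliation?] layout holds throughout, of rebuilding display names:

```python
# Rebuild "Given Family (Affiliation)" strings from authors_parsed entries.
# Assumed layout: [family, given, suffix, affiliation?]; suffix is often "".
authors_parsed = [
    ["Dumas", "Jean-Guillaume", "", "LJK"],
    ["Kaltofen", "Erich", "", "NCSU"],
    ["Thomé", "Emmanuel", "", "CARAMBA"],
    ["Villard", "Gilles", "", "ARIC, LIP"],
]

def display_name(entry):
    family, given, suffix, *rest = entry
    # Drop empty parts (suffix is usually "") before joining.
    name = " ".join(part for part in (given, family, suffix) if part)
    if rest and rest[0]:
        name += f" ({rest[0]})"
    return name

names = [display_name(a) for a in authors_parsed]
print(names[0])  # Jean-Guillaume Dumas (LJK)
```

The `*rest` unpacking makes the same function work for both the 3-element and 4-element entries seen in this dump.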
1912.07976
|
Heng Yang
|
Heng Yang, Biqing Zeng, JianHao Yang, Youwei Song and Ruyang Xu
|
A Multi-task Learning Model for Chinese-oriented Aspect Polarity
Classification and Aspect Term Extraction
|
Submitted to Elsevier
| null | null | null |
cs.CL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Aspect-based sentiment analysis (ABSA) is a multi-grained natural language
processing task consisting of two subtasks: aspect term extraction (ATE) and
aspect polarity classification (APC). Most existing work focuses on inferring
aspect term polarity and ignores the significance of aspect term extraction.
Moreover, existing research pays little attention to the Chinese-oriented ABSA
task. Based on the local context focus (LCF) mechanism, this paper proposes a
multi-task learning model for Chinese-oriented aspect-based sentiment analysis,
namely LCF-ATEPC. Compared with existing models, this model can extract aspect
terms and infer aspect term polarity synchronously; moreover, it can analyze
both Chinese and English comments simultaneously, and an experiment on a
multilingual mixed dataset demonstrated this capability. By integrating the
domain-adapted BERT model, the LCF-ATEPC model achieves state-of-the-art
performance in aspect term extraction and aspect polarity classification on
four Chinese review datasets. In addition, its results on the widely used
SemEval-2014 Task 4 Restaurant and Laptop datasets surpass the
state-of-the-art performance on the ATE and APC subtasks.
|
[
{
"created": "Tue, 17 Dec 2019 12:47:33 GMT",
"version": "v1"
},
{
"created": "Thu, 19 Dec 2019 01:38:38 GMT",
"version": "v2"
},
{
"created": "Wed, 12 Feb 2020 09:20:28 GMT",
"version": "v3"
}
] |
2020-02-13
|
[
[
"Yang",
"Heng",
""
],
[
"Zeng",
"Biqing",
""
],
[
"Yang",
"JianHao",
""
],
[
"Song",
"Youwei",
""
],
[
"Xu",
"Ruyang",
""
]
] |
|
2007.00584
|
Guillaume Jaume
|
Pushpak Pati, Guillaume Jaume, Lauren Alisha Fernandes, Antonio
Foncubierta, Florinda Feroce, Anna Maria Anniciello, Giosue Scognamiglio,
Nadia Brancati, Daniel Riccio, Maurizio Do Bonito, Giuseppe De Pietro,
Gerardo Botti, Orcun Goksel, Jean-Philippe Thiran, Maria Frucci, Maria
Gabrani
|
HACT-Net: A Hierarchical Cell-to-Tissue Graph Neural Network for
Histopathological Image Classification
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Cancer diagnosis, prognosis, and therapeutic response prediction are heavily
influenced by the relationship between the histopathological structures and the
function of the tissue. Recent approaches acknowledging the structure-function
relationship, have linked the structural and spatial patterns of cell
organization in tissue via cell-graphs to tumor grades. Though cell
organization is imperative, it is insufficient to entirely represent the
histopathological structure. We propose a novel hierarchical
cell-to-tissue-graph (HACT) representation to improve the structural depiction
of the tissue. It consists of a low-level cell-graph, capturing cell morphology
and interactions, a high-level tissue-graph, capturing morphology and spatial
distribution of tissue parts, and cells-to-tissue hierarchies, encoding the
relative spatial distribution of the cells with respect to the tissue
distribution. Further, a hierarchical graph neural network (HACT-Net) is
proposed to efficiently map the HACT representations to histopathological
breast cancer subtypes. We assess the methodology on a large set of annotated
tissue regions of interest from H&E-stained breast carcinoma whole-slides.
Upon evaluation, the proposed method outperformed recent convolutional neural
network and graph neural network approaches for breast cancer multi-class
subtyping. The proposed entity-based topological analysis is more in line with
the pathological diagnostic procedure for the tissue. It provides more control
over tissue modelling and therefore encourages the further inclusion of
pathological priors into task-specific tissue representations.
|
[
{
"created": "Wed, 1 Jul 2020 16:22:48 GMT",
"version": "v1"
}
] |
2020-07-02
|
[
[
"Pati",
"Pushpak",
""
],
[
"Jaume",
"Guillaume",
""
],
[
"Fernandes",
"Lauren Alisha",
""
],
[
"Foncubierta",
"Antonio",
""
],
[
"Feroce",
"Florinda",
""
],
[
"Anniciello",
"Anna Maria",
""
],
[
"Scognamiglio",
"Giosue",
""
],
[
"Brancati",
"Nadia",
""
],
[
"Riccio",
"Daniel",
""
],
[
"Bonito",
"Maurizio Do",
""
],
[
"De Pietro",
"Giuseppe",
""
],
[
"Botti",
"Gerardo",
""
],
[
"Goksel",
"Orcun",
""
],
[
"Thiran",
"Jean-Philippe",
""
],
[
"Frucci",
"Maria",
""
],
[
"Gabrani",
"Maria",
""
]
] |
|
0810.5057
|
Patricia Gautier
|
Claire François (INIST), Jean-Charles Lamirel (INRIA Lorraine -
LORIA), Shadi Al Shehabi (INRIA Lorraine - LORIA)
|
Combining Advanced Visualization and Automatized Reasoning for
Webometrics: A Test Study
| null |
COLLNET 2006, France (2006)
| null | null |
cs.IR cs.DL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents a first attempt at performing a precise and automatic
identification of the linking behaviour in a scientific domain through the
analysis of the communication of the related academic institutions on the web.
The proposed approach is based on the paradigm of multiple viewpoint data
analysis (MVDA), which can be fruitfully exploited to highlight relationships
between data items, such as websites, carrying several kinds of description. It
uses the MultiSOM clustering and mapping method. The domain chosen for this
study is Computer Science in Germany. The analysis is conducted on a set of 438
websites of this domain, using thematic, geographic, and linking information
together. It highlights interesting results
concerning both global and local linking behaviour.
|
[
{
"created": "Tue, 28 Oct 2008 15:43:45 GMT",
"version": "v1"
},
{
"created": "Tue, 20 Oct 2009 15:27:41 GMT",
"version": "v2"
}
] |
2009-10-20
|
[
[
"François",
"Claire",
"",
"INIST"
],
[
"Lamirel",
"Jean-Charles",
"",
"INRIA Lorraine -\n LORIA"
],
[
"Shehabi",
"Shadi Al",
"",
"INRIA Lorraine - LORIA"
]
] |
This paper presents a first attempt at performing a precise and automatic identification of the linking behaviour in a scientific domain through the analysis of the communication of the related academic institutions on the web. The proposed approach is based on the paradigm of multiple viewpoint data analysis (MVDA) that can be fruitfully exploited to highlight relationships between data, like websites, carrying several kinds of description. It uses the MultiSOM clustering and mapping method. The domain that has been chosen for this study is the domain of Computer Science in Germany. The analysis is conducted on a set of 438 websites of this domain using thematic, geographic, and linking information together. It highlights interesting results concerning both global and local linking behaviour.
|
0808.3651
|
Lijun Zhang
|
Lijun Zhang, Holger Hermanns, Friedrich Eisenbrand and David N. Jansen
|
Flow Faster: Efficient Decision Algorithms for Probabilistic Simulations
|
LMCS
|
Logical Methods in Computer Science, Volume 4, Issue 4 (November
11, 2008) lmcs:989
|
10.2168/LMCS-4(4:6)2008
| null |
cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Strong and weak simulation relations have been proposed for Markov chains,
while strong simulation and strong probabilistic simulation relations have been
proposed for probabilistic automata. However, decision algorithms for strong
and weak simulation over Markov chains, and for strong simulation over
probabilistic automata are not efficient, which makes it as yet unclear whether
they can be used as effectively as their non-probabilistic counterparts. This
paper presents drastically improved algorithms to decide whether some
(discrete- or continuous-time) Markov chain strongly or weakly simulates
another, or whether a probabilistic automaton strongly simulates another. The
key innovation is the use of parametric maximum flow techniques to amortize
computations. We also present a novel algorithm for deciding strong
probabilistic simulation preorders on probabilistic automata, which has
polynomial complexity via a reduction to an LP problem. When extending the
algorithms for probabilistic automata to their continuous-time counterpart, we
retain the same complexity for both strong and strong probabilistic
simulations.
|
[
{
"created": "Wed, 27 Aug 2008 08:35:44 GMT",
"version": "v1"
},
{
"created": "Mon, 10 Nov 2008 23:56:01 GMT",
"version": "v2"
},
{
"created": "Tue, 18 Nov 2008 17:00:30 GMT",
"version": "v3"
}
] |
2015-07-01
|
[
[
"Zhang",
"Lijun",
""
],
[
"Hermanns",
"Holger",
""
],
[
"Eisenbrand",
"Friedrich",
""
],
[
"Jansen",
"David N.",
""
]
] |
Strong and weak simulation relations have been proposed for Markov chains, while strong simulation and strong probabilistic simulation relations have been proposed for probabilistic automata. However, decision algorithms for strong and weak simulation over Markov chains, and for strong simulation over probabilistic automata are not efficient, which makes it as yet unclear whether they can be used as effectively as their non-probabilistic counterparts. This paper presents drastically improved algorithms to decide whether some (discrete- or continuous-time) Markov chain strongly or weakly simulates another, or whether a probabilistic automaton strongly simulates another. The key innovation is the use of parametric maximum flow techniques to amortize computations. We also present a novel algorithm for deciding strong probabilistic simulation preorders on probabilistic automata, which has polynomial complexity via a reduction to an LP problem. When extending the algorithms for probabilistic automata to their continuous-time counterpart, we retain the same complexity for both strong and strong probabilistic simulations.
|
2403.06757
|
Anthony Frion
|
Anthony Frion, Lucas Drumetz, Guillaume Tochon, Mauro Dalla Mura,
Albdeldjalil A\"issa El Bey
|
Koopman Ensembles for Probabilistic Time Series Forecasting
| null | null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
In the context of the increasing popularity of data-driven models to represent
dynamical systems, many machine learning-based implementations of the Koopman
operator have recently been proposed. However, the vast majority of those works
are limited to deterministic predictions, while the knowledge of uncertainty is
critical in fields like meteorology and climatology. In this work, we
investigate the training of ensembles of models to produce stochastic outputs.
We show through experiments on real remote sensing image time series that
ensembles of independently trained models are highly overconfident and that
using a training criterion that explicitly encourages the members to produce
predictions with high inter-model variances greatly improves the uncertainty
quantification of the ensembles.
|
[
{
"created": "Mon, 11 Mar 2024 14:29:56 GMT",
"version": "v1"
},
{
"created": "Wed, 13 Mar 2024 13:57:42 GMT",
"version": "v2"
}
] |
2024-03-14
|
[
[
"Frion",
"Anthony",
""
],
[
"Drumetz",
"Lucas",
""
],
[
"Tochon",
"Guillaume",
""
],
[
"Mura",
"Mauro Dalla",
""
],
[
"Bey",
"Albdeldjalil Aïssa El",
""
]
] |
In the context of the increasing popularity of data-driven models to represent dynamical systems, many machine learning-based implementations of the Koopman operator have recently been proposed. However, the vast majority of those works are limited to deterministic predictions, while the knowledge of uncertainty is critical in fields like meteorology and climatology. In this work, we investigate the training of ensembles of models to produce stochastic outputs. We show through experiments on real remote sensing image time series that ensembles of independently trained models are highly overconfident and that using a training criterion that explicitly encourages the members to produce predictions with high inter-model variances greatly improves the uncertainty quantification of the ensembles.
|
1702.06968
|
Arthur-Jozsef Molnar
|
Arthur-Jozsef Molnar
|
A Heuristic Process for GUI Widget Matching Across Application Versions
| null |
Annales Universitatis Scientiarum Budapest, Sectio Computatorica
Vol 36, pages 255 - 275 (2012)
| null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper introduces an automated heuristic process able to achieve high
accuracy when matching graphical user interface widgets across multiple
versions of a target application. The proposed implementation is flexible as it
allows full customization of the process and easy integration with existing
tools for long-term graphical user interface test case maintenance, software
visualization and analysis.
|
[
{
"created": "Wed, 22 Feb 2017 19:08:11 GMT",
"version": "v1"
}
] |
2017-02-24
|
[
[
"Molnar",
"Arthur-Jozsef",
""
]
] |
This paper introduces an automated heuristic process able to achieve high accuracy when matching graphical user interface widgets across multiple versions of a target application. The proposed implementation is flexible as it allows full customization of the process and easy integration with existing tools for long-term graphical user interface test case maintenance, software visualization and analysis.
|
2307.10936
|
Raphael Boige
|
Raphael Boige and Yannis Flet-Berliac and Arthur Flajolet and
Guillaume Richard and Thomas Pierrot
|
PASTA: Pretrained Action-State Transformer Agents
| null | null | null | null |
cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Self-supervised learning has brought about a revolutionary paradigm shift in
various computing domains, including NLP, vision, and biology. Recent
approaches involve pre-training transformer models on vast amounts of unlabeled
data, serving as a starting point for efficiently solving downstream tasks. In
reinforcement learning, researchers have recently adapted these approaches,
developing models pre-trained on expert trajectories. This advancement enables
the models to tackle a broad spectrum of tasks, ranging from robotics to
recommendation systems. However, existing methods mostly rely on intricate
pre-training objectives tailored to specific downstream applications. This
paper conducts a comprehensive investigation of models, referred to as
pre-trained action-state transformer agents (PASTA). Our study adopts a unified
methodology and covers an extensive set of general downstream tasks including
behavioral cloning, offline RL, sensor failure robustness, and dynamics change
adaptation. Our objective is to systematically compare various design choices
and offer valuable insights that will aid practitioners in developing robust
models. Key highlights of our study include tokenization at the component level
for actions and states, the use of fundamental pre-training objectives such as
next token prediction or masked language modeling, simultaneous training of
models across multiple domains, and the application of various fine-tuning
strategies. In this study, the developed models contain fewer than 7 million
parameters allowing a broad community to use these models and reproduce our
experiments. We hope that this study will encourage further research into the
use of transformers with first principle design choices to represent RL
trajectories and contribute to robust policy learning.
|
[
{
"created": "Thu, 20 Jul 2023 15:09:06 GMT",
"version": "v1"
},
{
"created": "Mon, 4 Dec 2023 10:15:26 GMT",
"version": "v2"
}
] |
2023-12-05
|
[
[
"Boige",
"Raphael",
""
],
[
"Flet-Berliac",
"Yannis",
""
],
[
"Flajolet",
"Arthur",
""
],
[
"Richard",
"Guillaume",
""
],
[
"Pierrot",
"Thomas",
""
]
] |
Self-supervised learning has brought about a revolutionary paradigm shift in various computing domains, including NLP, vision, and biology. Recent approaches involve pre-training transformer models on vast amounts of unlabeled data, serving as a starting point for efficiently solving downstream tasks. In reinforcement learning, researchers have recently adapted these approaches, developing models pre-trained on expert trajectories. This advancement enables the models to tackle a broad spectrum of tasks, ranging from robotics to recommendation systems. However, existing methods mostly rely on intricate pre-training objectives tailored to specific downstream applications. This paper conducts a comprehensive investigation of models, referred to as pre-trained action-state transformer agents (PASTA). Our study adopts a unified methodology and covers an extensive set of general downstream tasks including behavioral cloning, offline RL, sensor failure robustness, and dynamics change adaptation. Our objective is to systematically compare various design choices and offer valuable insights that will aid practitioners in developing robust models. Key highlights of our study include tokenization at the component level for actions and states, the use of fundamental pre-training objectives such as next token prediction or masked language modeling, simultaneous training of models across multiple domains, and the application of various fine-tuning strategies. In this study, the developed models contain fewer than 7 million parameters, allowing a broad community to use these models and reproduce our experiments. We hope that this study will encourage further research into the use of transformers with first principle design choices to represent RL trajectories and contribute to robust policy learning.
|
1612.00604
|
Andrii Maksai
|
Andrii Maksai, Xinchao Wang, Francois Fleuret, and Pascal Fua
|
Globally Consistent Multi-People Tracking using Motion Patterns
|
8 pages, 7 figures. 11 pages supplementary
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Many state-of-the-art approaches to people tracking rely on detecting them in
each frame independently, grouping detections into short but reliable
trajectory segments, and then further grouping them into full trajectories.
This grouping typically relies on imposing local smoothness constraints but
almost never on enforcing more global constraints on the trajectories. In this
paper, we propose an approach to imposing global consistency by first inferring
behavioral patterns from the ground truth and then using them to guide the
tracking algorithm. When used in conjunction with several state-of-the-art
algorithms, this further increases their already good performance. Furthermore,
we propose an unsupervised scheme that yields almost the same improvements
without the need for ground truth.
|
[
{
"created": "Fri, 2 Dec 2016 09:24:30 GMT",
"version": "v1"
}
] |
2016-12-05
|
[
[
"Maksai",
"Andrii",
""
],
[
"Wang",
"Xinchao",
""
],
[
"Fleuret",
"Francois",
""
],
[
"Fua",
"Pascal",
""
]
] |
Many state-of-the-art approaches to people tracking rely on detecting them in each frame independently, grouping detections into short but reliable trajectory segments, and then further grouping them into full trajectories. This grouping typically relies on imposing local smoothness constraints but almost never on enforcing more global constraints on the trajectories. In this paper, we propose an approach to imposing global consistency by first inferring behavioral patterns from the ground truth and then using them to guide the tracking algorithm. When used in conjunction with several state-of-the-art algorithms, this further increases their already good performance. Furthermore, we propose an unsupervised scheme that yields almost the same improvements without the need for ground truth.
|
2311.11532
|
Gustavo Silva
|
Gustavo Silva, Paul Rodriguez
|
Optimal Hyperparameter $\epsilon$ for Adaptive Stochastic Optimizers
through Gradient Histograms
| null | null | null | null |
cs.LG cs.AI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Optimizers are essential components for successfully training deep neural
network models. In order to achieve the best performance from such models,
designers need to carefully choose the optimizer hyperparameters. However, this
can be a computationally expensive and time-consuming process. Although it is
known that all optimizer hyperparameters must be tuned for maximum performance,
there is still a lack of clarity regarding the individual influence of minor
priority hyperparameters, including the safeguard factor $\epsilon$ and
momentum factor $\beta$, in leading adaptive optimizers (specifically, those
based on the Adam optimizers). In this manuscript, we introduce a new framework
based on gradient histograms to analyze and justify important attributes of
adaptive optimizers, such as their optimal performance and the relationships
and dependencies among hyperparameters. Furthermore, we propose a novel
gradient histogram-based algorithm that automatically estimates a reduced and
accurate search space for the safeguard hyperparameter $\epsilon$, where the
optimal value can be easily found.
|
[
{
"created": "Mon, 20 Nov 2023 04:34:19 GMT",
"version": "v1"
}
] |
2023-11-21
|
[
[
"Silva",
"Gustavo",
""
],
[
"Rodriguez",
"Paul",
""
]
] |
Optimizers are essential components for successfully training deep neural network models. In order to achieve the best performance from such models, designers need to carefully choose the optimizer hyperparameters. However, this can be a computationally expensive and time-consuming process. Although it is known that all optimizer hyperparameters must be tuned for maximum performance, there is still a lack of clarity regarding the individual influence of minor priority hyperparameters, including the safeguard factor $\epsilon$ and momentum factor $\beta$, in leading adaptive optimizers (specifically, those based on the Adam optimizers). In this manuscript, we introduce a new framework based on gradient histograms to analyze and justify important attributes of adaptive optimizers, such as their optimal performance and the relationships and dependencies among hyperparameters. Furthermore, we propose a novel gradient histogram-based algorithm that automatically estimates a reduced and accurate search space for the safeguard hyperparameter $\epsilon$, where the optimal value can be easily found.
|
2302.02187
|
Niklas K\"uhl Prof Dr
|
Max Schemmer, Niklas K\"uhl, Carina Benz, Andrea Bartos, Gerhard
Satzger
|
Appropriate Reliance on AI Advice: Conceptualization and the Effect of
Explanations
|
arXiv admin note: text overlap with arXiv:2204.06916
|
ACM 28th International Conference on Intelligent User Interfaces
(IUI), 2023
|
10.1145/3581641.3584066
| null |
cs.AI cs.HC
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
AI advice is becoming increasingly popular, e.g., in investment and medical
treatment decisions. As this advice is typically imperfect, decision-makers
have to exert discretion as to whether actually follow that advice: they have
to "appropriately" rely on correct and turn down incorrect advice. However,
current research on appropriate reliance still lacks a common definition as
well as an operational measurement concept. Additionally, no in-depth
behavioral experiments have been conducted that help understand the factors
influencing this behavior. In this paper, we propose Appropriateness of
Reliance (AoR) as an underlying, quantifiable two-dimensional measurement
concept. We develop a research model that analyzes the effect of providing
explanations for AI advice. In an experiment with 200 participants, we
demonstrate how these explanations influence the AoR, and, thus, the
effectiveness of AI advice. Our work contributes fundamental concepts for the
analysis of reliance behavior and the purposeful design of AI advisors.
|
[
{
"created": "Sat, 4 Feb 2023 15:48:24 GMT",
"version": "v1"
},
{
"created": "Tue, 7 Feb 2023 07:47:43 GMT",
"version": "v2"
},
{
"created": "Thu, 13 Apr 2023 08:50:16 GMT",
"version": "v3"
}
] |
2023-04-14
|
[
[
"Schemmer",
"Max",
""
],
[
"Kühl",
"Niklas",
""
],
[
"Benz",
"Carina",
""
],
[
"Bartos",
"Andrea",
""
],
[
"Satzger",
"Gerhard",
""
]
] |
AI advice is becoming increasingly popular, e.g., in investment and medical treatment decisions. As this advice is typically imperfect, decision-makers have to exert discretion as to whether to actually follow that advice: they have to "appropriately" rely on correct and turn down incorrect advice. However, current research on appropriate reliance still lacks a common definition as well as an operational measurement concept. Additionally, no in-depth behavioral experiments have been conducted that help understand the factors influencing this behavior. In this paper, we propose Appropriateness of Reliance (AoR) as an underlying, quantifiable two-dimensional measurement concept. We develop a research model that analyzes the effect of providing explanations for AI advice. In an experiment with 200 participants, we demonstrate how these explanations influence the AoR, and, thus, the effectiveness of AI advice. Our work contributes fundamental concepts for the analysis of reliance behavior and the purposeful design of AI advisors.
|
1101.1815
|
Suvansh Lal Mr.
|
Suvansh Lal, Mohit Jain, Vikrant Chaplot
|
Approaches to Formal Verification of Security Protocols
| null | null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by/3.0/
|
In recent times, many protocols have been proposed to provide security for
various information and communication systems. Such protocols must be tested
for their functional correctness before they are used in practice. Application
of formal methods for verification of security protocols would enhance their
reliability, thereby increasing the usability of systems that employ them.
Thus, formal verification of security protocols has become a key issue in
computer and communications security. In this paper we present, analyze and
compare some prevalent approaches towards verification of secure systems. We
follow the notion of - same goal through different approaches - as we formally
analyze the Needham Schroeder Public Key protocol for Lowe's attack using each
of our presented approaches.
|
[
{
"created": "Mon, 10 Jan 2011 13:53:25 GMT",
"version": "v1"
}
] |
2011-01-11
|
[
[
"Lal",
"Suvansh",
""
],
[
"Jain",
"Mohit",
""
],
[
"Chaplot",
"Vikrant",
""
]
] |
In recent times, many protocols have been proposed to provide security for various information and communication systems. Such protocols must be tested for their functional correctness before they are used in practice. Application of formal methods for verification of security protocols would enhance their reliability, thereby increasing the usability of systems that employ them. Thus, formal verification of security protocols has become a key issue in computer and communications security. In this paper we present, analyze and compare some prevalent approaches towards verification of secure systems. We follow the notion of - same goal through different approaches - as we formally analyze the Needham Schroeder Public Key protocol for Lowe's attack using each of our presented approaches.
|
2405.11530
|
Sejik Park
|
Sejik Park
|
Learning More Generalized Experts by Merging Experts in
Mixture-of-Experts
|
12 pages, 3 figures
| null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
We observe that incorporating a shared layer in a mixture-of-experts can lead
to performance degradation. This leads us to hypothesize that learning shared
features poses challenges in deep learning, potentially caused by the same
feature being learned as various different features. To address this issue, we
track each expert's usage frequency and merge the two most frequently selected
experts. We then update the least frequently selected expert using the
combination of experts. This approach, combined with the subsequent learning of
the router's expert selection, allows the model to determine if the most
frequently selected experts have learned the same feature differently. If they
have, the combined expert can be further trained to learn a more general
feature. Consequently, our algorithm enhances transfer learning and mitigates
catastrophic forgetting when applied to multi-domain task incremental learning.
|
[
{
"created": "Sun, 19 May 2024 11:55:48 GMT",
"version": "v1"
}
] |
2024-05-21
|
[
[
"Park",
"Sejik",
""
]
] |
We observe that incorporating a shared layer in a mixture-of-experts can lead to performance degradation. This leads us to hypothesize that learning shared features poses challenges in deep learning, potentially caused by the same feature being learned as various different features. To address this issue, we track each expert's usage frequency and merge the two most frequently selected experts. We then update the least frequently selected expert using the combination of experts. This approach, combined with the subsequent learning of the router's expert selection, allows the model to determine if the most frequently selected experts have learned the same feature differently. If they have, the combined expert can be further trained to learn a more general feature. Consequently, our algorithm enhances transfer learning and mitigates catastrophic forgetting when applied to multi-domain task incremental learning.
|
1701.01216
|
Tony T. Luo
|
T. Luo, S. S. Kanhere, H-P. Tan, F. Wu, and H. Wu
|
Crowdsourcing with Tullock contests: A new perspective
|
9 pages, 4 figures, 3 tables
|
Proc. IEEE INFOCOM, 2015, pp. 2515-2523
|
10.1109/INFOCOM.2015.7218641
| null |
cs.GT cs.HC cs.MA cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Incentive mechanisms for crowdsourcing have been extensively studied under
the framework of all-pay auctions. Along a distinct line, this paper proposes
to use Tullock contests as an alternative tool to design incentive mechanisms
for crowdsourcing. We are inspired by the conduciveness of Tullock contests to
attracting user entry (yet not necessarily a higher revenue) in other domains.
In this paper, we explore a new dimension in optimal Tullock contest design, by
superseding the contest prize---which is fixed in conventional Tullock
contests---with a prize function that is dependent on the (unknown) winner's
contribution, in order to maximize the crowdsourcer's utility. We show that
this approach leads to attractive practical advantages: (a) it is well-suited
for rapid prototyping in fully distributed web agents and smartphone apps; (b)
it overcomes the disincentive to participate caused by players' antagonism to
an increasing number of rivals. Furthermore, we optimize conventional,
fixed-prize Tullock contests to construct the most superior benchmark to
compare against our mechanism. Through extensive evaluations, we show that our
mechanism significantly outperforms the optimal benchmark, by over threefold
on the crowdsourcer's utility cum profit and up to ninefold on the players'
social welfare.
|
[
{
"created": "Thu, 5 Jan 2017 05:44:25 GMT",
"version": "v1"
}
] |
2017-01-06
|
[
[
"Luo",
"T.",
""
],
[
"Kanhere",
"S. S.",
""
],
[
"Tan",
"H-P.",
""
],
[
"Wu",
"F.",
""
],
[
"Wu",
"H.",
""
]
] |
Incentive mechanisms for crowdsourcing have been extensively studied under the framework of all-pay auctions. Along a distinct line, this paper proposes to use Tullock contests as an alternative tool to design incentive mechanisms for crowdsourcing. We are inspired by the conduciveness of Tullock contests to attracting user entry (yet not necessarily a higher revenue) in other domains. In this paper, we explore a new dimension in optimal Tullock contest design, by superseding the contest prize---which is fixed in conventional Tullock contests---with a prize function that is dependent on the (unknown) winner's contribution, in order to maximize the crowdsourcer's utility. We show that this approach leads to attractive practical advantages: (a) it is well-suited for rapid prototyping in fully distributed web agents and smartphone apps; (b) it overcomes the disincentive to participate caused by players' antagonism to an increasing number of rivals. Furthermore, we optimize conventional, fixed-prize Tullock contests to construct the most superior benchmark to compare against our mechanism. Through extensive evaluations, we show that our mechanism significantly outperforms the optimal benchmark, by over threefold on the crowdsourcer's utility cum profit and up to ninefold on the players' social welfare.
|
1704.00355
|
Neha Gupta
|
Moses Charikar, Neha Gupta, Roy Schwartz
|
Local Guarantees in Graph Cuts and Clustering
| null | null | null | null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Correlation Clustering is an elegant model that captures fundamental graph
cut problems such as Min $s-t$ Cut, Multiway Cut, and Multicut, extensively
studied in combinatorial optimization. Here, we are given a graph with edges
labeled $+$ or $-$ and the goal is to produce a clustering that agrees with the
labels as much as possible: $+$ edges within clusters and $-$ edges across
clusters. The classical approach towards Correlation Clustering (and other
graph cut problems) is to optimize a global objective. We depart from this and
study local objectives: minimizing the maximum number of disagreements for
edges incident on a single node, and the analogous max min agreements
objective. This naturally gives rise to a family of basic min-max graph cut
problems. A prototypical representative is Min Max $s-t$ Cut: find an $s-t$ cut
minimizing the largest number of cut edges incident on any node. We present the
following results: $(1)$ an $O(\sqrt{n})$-approximation for the problem of
minimizing the maximum total weight of disagreement edges incident on any node
(thus providing the first known approximation for the above family of min-max
graph cut problems), $(2)$ a remarkably simple $7$-approximation for minimizing
local disagreements in complete graphs (improving upon the previous best known
approximation of $48$), and $(3)$ a $1/(2+\varepsilon)$-approximation for
maximizing the minimum total weight of agreement edges incident on any node,
hence improving upon the $1/(4+\varepsilon)$-approximation that follows from
the study of approximate pure Nash equilibria in cut and party affiliation
games.
|
[
{
"created": "Sun, 2 Apr 2017 19:34:22 GMT",
"version": "v1"
}
] |
2017-04-04
|
[
[
"Charikar",
"Moses",
""
],
[
"Gupta",
"Neha",
""
],
[
"Schwartz",
"Roy",
""
]
] |
Correlation Clustering is an elegant model that captures fundamental graph cut problems such as Min $s-t$ Cut, Multiway Cut, and Multicut, extensively studied in combinatorial optimization. Here, we are given a graph with edges labeled $+$ or $-$ and the goal is to produce a clustering that agrees with the labels as much as possible: $+$ edges within clusters and $-$ edges across clusters. The classical approach towards Correlation Clustering (and other graph cut problems) is to optimize a global objective. We depart from this and study local objectives: minimizing the maximum number of disagreements for edges incident on a single node, and the analogous max min agreements objective. This naturally gives rise to a family of basic min-max graph cut problems. A prototypical representative is Min Max $s-t$ Cut: find an $s-t$ cut minimizing the largest number of cut edges incident on any node. We present the following results: $(1)$ an $O(\sqrt{n})$-approximation for the problem of minimizing the maximum total weight of disagreement edges incident on any node (thus providing the first known approximation for the above family of min-max graph cut problems), $(2)$ a remarkably simple $7$-approximation for minimizing local disagreements in complete graphs (improving upon the previous best known approximation of $48$), and $(3)$ a $1/(2+\varepsilon)$-approximation for maximizing the minimum total weight of agreement edges incident on any node, hence improving upon the $1/(4+\varepsilon)$-approximation that follows from the study of approximate pure Nash equilibria in cut and party affiliation games.
|
1502.03358
|
Farshad Naghibi
|
Farshad Naghibi, Somayeh Salimi, Mikael Skoglund
|
The CEO Problem with Secrecy Constraints
|
Accepted for publication in IEEE Transactions on Information
Forensics and Security, 17 pages, 4 figures
| null |
10.1109/TIFS.2015.2404134
| null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We study a lossy source coding problem with secrecy constraints in which a
remote information source should be transmitted to a single destination via
multiple agents in the presence of a passive eavesdropper. The agents observe
noisy versions of the source and independently encode and transmit their
observations to the destination via noiseless rate-limited links. The
destination should estimate the remote source based on the information received
from the agents within a certain mean distortion threshold. The eavesdropper,
with access to side information correlated to the source, is able to listen in
on one of the links from the agents to the destination in order to obtain as
much information as possible about the source. This problem can be viewed as
the so-called CEO problem with additional secrecy constraints. We establish
inner and outer bounds on the rate-distortion-equivocation region of this
problem. We also obtain the region in special cases where the bounds are tight.
Furthermore, we study the quadratic Gaussian case and provide the optimal
rate-distortion-equivocation region when the eavesdropper has no side
information and an achievable region for a more general setup with side
information at the eavesdropper.
|
[
{
"created": "Wed, 11 Feb 2015 16:19:21 GMT",
"version": "v1"
}
] |
2015-02-19
|
[
[
"Naghibi",
"Farshad",
""
],
[
"Salimi",
"Somayeh",
""
],
[
"Skoglund",
"Mikael",
""
]
] |
We study a lossy source coding problem with secrecy constraints in which a remote information source should be transmitted to a single destination via multiple agents in the presence of a passive eavesdropper. The agents observe noisy versions of the source and independently encode and transmit their observations to the destination via noiseless rate-limited links. The destination should estimate the remote source based on the information received from the agents within a certain mean distortion threshold. The eavesdropper, with access to side information correlated to the source, is able to listen in on one of the links from the agents to the destination in order to obtain as much information as possible about the source. This problem can be viewed as the so-called CEO problem with additional secrecy constraints. We establish inner and outer bounds on the rate-distortion-equivocation region of this problem. We also obtain the region in special cases where the bounds are tight. Furthermore, we study the quadratic Gaussian case and provide the optimal rate-distortion-equivocation region when the eavesdropper has no side information and an achievable region for a more general setup with side information at the eavesdropper.
|
2007.00236
|
Cunxiang Wang
|
Cunxiang Wang, Shuailong Liang, Yili Jin, Yilong Wang, Xiaodan Zhu and
Yue Zhang
|
SemEval-2020 Task 4: Commonsense Validation and Explanation
|
Task description paper of SemEval-2020 Task 4: Commonsense Validation
and Explanation
| null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we present SemEval-2020 Task 4, Commonsense Validation and
Explanation (ComVE), which includes three subtasks, aiming to evaluate whether
a system can distinguish a natural language statement that makes sense to
humans from one that does not, and provide the reasons. Specifically, in our
first subtask, the participating systems are required to choose, from two
natural language statements of similar wording, the one that makes sense and
the one that does not. The second subtask additionally asks a system to select,
from three options, the key reason why a given statement does not make sense.
In the third subtask, a participating system needs to generate the reason. We
ultimately attracted 39 teams participating in at least one of the three subtasks. For
Subtask A and Subtask B, the performances of top-ranked systems are close to
that of humans. However, for Subtask C, there is still a relatively large gap
between systems and human performance. The dataset used in our task can be
found at
https://github.com/wangcunxiang/SemEval2020-Task4-Commonsense-Validation-and-Explanation;
the leaderboard can be found at
https://competitions.codalab.org/competitions/21080#results.
|
[
{
"created": "Wed, 1 Jul 2020 04:41:05 GMT",
"version": "v1"
},
{
"created": "Mon, 3 Aug 2020 15:13:40 GMT",
"version": "v2"
}
] |
2020-08-04
|
[
[
"Wang",
"Cunxiang",
""
],
[
"Liang",
"Shuailong",
""
],
[
"Jin",
"Yili",
""
],
[
"Wang",
"Yilong",
""
],
[
"Zhu",
"Xiaodan",
""
],
[
"Zhang",
"Yue",
""
]
] |
In this paper, we present SemEval-2020 Task 4, Commonsense Validation and Explanation (ComVE), which includes three subtasks, aiming to evaluate whether a system can distinguish a natural language statement that makes sense to humans from one that does not, and provide the reasons. Specifically, in our first subtask, the participating systems are required to choose, from two natural language statements of similar wording, the one that makes sense and the one that does not. The second subtask additionally asks a system to select, from three options, the key reason why a given statement does not make sense. In the third subtask, a participating system needs to generate the reason. We ultimately attracted 39 teams participating in at least one of the three subtasks. For Subtask A and Subtask B, the performances of top-ranked systems are close to that of humans. However, for Subtask C, there is still a relatively large gap between systems and human performance. The dataset used in our task can be found at https://github.com/wangcunxiang/SemEval2020-Task4-Commonsense-Validation-and-Explanation; the leaderboard can be found at https://competitions.codalab.org/competitions/21080#results.
|
2304.13518
|
Yuqi Han
|
Yuqi Han and Tao Yu and Xiaohang Yu and Yuwang Wang and Qionghai Dai
|
Super-NeRF: View-consistent Detail Generation for NeRF super-resolution
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
The neural radiance field (NeRF) has achieved remarkable success in modeling 3D
scenes and synthesizing high-fidelity novel views. However, existing NeRF-based
methods focus on making full use of the input image resolution to generate
novel views, and pay less attention to generating details beyond the limited
input resolution. By analogy with the extensive use of image super-resolution,
NeRF super-resolution is an effective way to generate a high-resolution
implicit representation of 3D scenes and holds great potential for
applications. To date, this important topic remains under-explored. In this paper, we
propose a NeRF super-resolution method, named Super-NeRF, to generate
high-resolution NeRF from only low-resolution inputs. Given multi-view
low-resolution images, Super-NeRF constructs a consistency-controlling
super-resolution module to generate view-consistent high-resolution details for
NeRF. Specifically, an optimizable latent code is introduced for each
low-resolution input image to control the 2D super-resolution images to
converge to the view-consistent output. The latent codes of each low-resolution
image are optimized synergistically with the target Super-NeRF representation
to fully utilize the view consistency constraint inherent in NeRF construction.
We verify the effectiveness of Super-NeRF on synthetic, real-world, and
AI-generated NeRF datasets. Super-NeRF achieves state-of-the-art NeRF
super-resolution performance on high-resolution detail generation and
cross-view consistency.
|
[
{
"created": "Wed, 26 Apr 2023 12:54:40 GMT",
"version": "v1"
}
] |
2023-04-27
|
[
[
"Han",
"Yuqi",
""
],
[
"Yu",
"Tao",
""
],
[
"Yu",
"Xiaohang",
""
],
[
"Wang",
"Yuwang",
""
],
[
"Dai",
"Qionghai",
""
]
] |
The neural radiance field (NeRF) has achieved remarkable success in modeling 3D scenes and synthesizing high-fidelity novel views. However, existing NeRF-based methods focus on making full use of the input image resolution to generate novel views, and pay less attention to generating details beyond the limited input resolution. By analogy with the extensive use of image super-resolution, NeRF super-resolution is an effective way to generate a high-resolution implicit representation of 3D scenes and holds great potential for applications. To date, this important topic remains under-explored. In this paper, we propose a NeRF super-resolution method, named Super-NeRF, to generate a high-resolution NeRF from only low-resolution inputs. Given multi-view low-resolution images, Super-NeRF constructs a consistency-controlling super-resolution module to generate view-consistent high-resolution details for NeRF. Specifically, an optimizable latent code is introduced for each low-resolution input image to control the 2D super-resolution images to converge to the view-consistent output. The latent codes of each low-resolution image are optimized synergistically with the target Super-NeRF representation to fully utilize the view consistency constraint inherent in NeRF construction. We verify the effectiveness of Super-NeRF on synthetic, real-world, and AI-generated NeRF datasets. Super-NeRF achieves state-of-the-art NeRF super-resolution performance on high-resolution detail generation and cross-view consistency.
|
1809.05606
|
Yimin Yang
|
Yimin Yang, Q.M.Jonathan Wu, Xiexing Feng, Thangarajah Akilan
|
Non-iterative recomputation of dense layers for performance improvement
of DCNN
|
11
| null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
An iterative method of learning has become the paradigm for training deep
convolutional neural networks (DCNN). However, utilizing a non-iterative
learning strategy can accelerate the training process of the DCNN, and such an
approach has surprisingly been rarely explored by the deep learning (DL)
community. This motivates this paper to introduce a non-iterative learning
strategy that eliminates backpropagation (BP) at the top dense or fully
connected (FC) layers of a DCNN, resulting in lower training time and higher
performance. The proposed method exploits the Moore-Penrose inverse to pull
back the current residual error to each FC layer, generating well-generalized
features. Then, using the recomputed features, i.e., the new generalized
features, the weights of each FC layer are computed according to the
Moore-Penrose inverse. We evaluate the proposed approach on six widely accepted
object recognition benchmark datasets: Scene-15, CIFAR-10, CIFAR-100, SUN-397,
Places365, and ImageNet. The experimental results show that the proposed method
obtains significant improvements over 30 state-of-the-art methods.
Interestingly, they also indicate that any DCNN combined with the proposed
method can provide better performance than the same network with its original
training based on BP.
|
[
{
"created": "Fri, 14 Sep 2018 22:24:52 GMT",
"version": "v1"
}
] |
2018-09-18
|
[
[
"Yang",
"Yimin",
""
],
[
"Wu",
"Q. M. Jonathan",
""
],
[
"Feng",
"Xiexing",
""
],
[
"Akilan",
"Thangarajah",
""
]
] |
An iterative method of learning has become the paradigm for training deep convolutional neural networks (DCNN). However, utilizing a non-iterative learning strategy can accelerate the training process of the DCNN, and such an approach has surprisingly been rarely explored by the deep learning (DL) community. This motivates this paper to introduce a non-iterative learning strategy that eliminates backpropagation (BP) at the top dense or fully connected (FC) layers of a DCNN, resulting in lower training time and higher performance. The proposed method exploits the Moore-Penrose inverse to pull back the current residual error to each FC layer, generating well-generalized features. Then, using the recomputed features, i.e., the new generalized features, the weights of each FC layer are computed according to the Moore-Penrose inverse. We evaluate the proposed approach on six widely accepted object recognition benchmark datasets: Scene-15, CIFAR-10, CIFAR-100, SUN-397, Places365, and ImageNet. The experimental results show that the proposed method obtains significant improvements over 30 state-of-the-art methods. Interestingly, they also indicate that any DCNN combined with the proposed method can provide better performance than the same network with its original training based on BP.
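The core operation described in this abstract, solving for dense-layer weights in closed form via the Moore-Penrose inverse instead of BP, can be sketched as follows. This is a minimal single-layer NumPy illustration under our own assumptions (random features standing in for backbone outputs); it is not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical features from a frozen convolutional backbone:
# 256 samples, each a 64-dimensional feature vector.
H = rng.standard_normal((256, 64))
# One-hot targets for a 10-class problem.
T = np.eye(10)[rng.integers(0, 10, size=256)]

# Closed-form least-squares weights for the dense layer, computed in
# one shot with the Moore-Penrose pseudoinverse rather than by BP.
W = np.linalg.pinv(H) @ T          # shape (64, 10)

# The layer's predictions and the residual error that, in the paper's
# scheme, would be pulled back to earlier FC layers.
predictions = H @ W
residual = T - predictions
print(W.shape, residual.shape)     # (64, 10) (256, 10)
```

Because `W` is the least-squares solution, the residual norm can never exceed that of the targets themselves.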
|
1309.0871
|
EPTCS
|
Oded Maler (CNRS-VERIMAG, University of Grenoble), \'Ad\'am M.
Hal\'asz (Department of Mathematics, West Virginia University), Olivier
Lebeltel (CNRS-VERIMAG, University of Grenoble), Ouri Maler (Grenoble)
|
Exploring the Dynamics of Mass Action Systems
|
In Proceedings HSB 2013, arXiv:1308.5724
|
EPTCS 125, 2013, pp. 84-91
|
10.4204/EPTCS.125.6
| null |
cs.CE cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present the Populus toolkit for exploring the dynamics of mass action
systems under different assumptions.
|
[
{
"created": "Tue, 3 Sep 2013 23:41:36 GMT",
"version": "v1"
}
] |
2013-09-05
|
[
[
"Maler",
"Oded",
"",
"CNRS-VERIMAG, University of Grenoble"
],
[
"Halász",
"Ádám M.",
"",
      "Department of Mathematics, West Virginia University"
],
[
"Lebeltel",
"Olivier",
"",
"CNRS-VERIMAG, University of Grenoble"
],
[
"Maler",
"Ouri",
"",
"Grenoble"
]
] |
We present the Populus toolkit for exploring the dynamics of mass action systems under different assumptions.
|
2403.05193
|
Martina Benini
|
Martina Benini, Silvia Gallucci, Marta Bonato, Marta Parazzini,
Gabriella Tognola
|
Evaluation of Road User Radio-Frequency Exposure Levels in an Urban
Environment from Vehicular Antennas and the Infrastructure in ITS-G5 5.9 GHz
Communication
| null | null | null | null |
cs.NI
|
http://creativecommons.org/licenses/by/4.0/
|
This study aims to investigate the variability of exposure levels among road
users generated in a realistic urban scenario by Vehicle-to-Vehicle (V2V) and
Vehicle-to-Infrastructure (V2I) communication technologies operating at 5.9
GHz. The exposure levels were evaluated in terms of whole-body Specific
Absorption Rate (wbSAR) [W/kg] in three different human models, ranging from
children to adults. We calculated the electromagnetic field exposure level
generated by V2V and V2I using ray tracing, and we assessed the resulting
wbSAR in urban exposure scenarios with an increasing number of transmitting antennas.
Whole-body SAR was generally very low, on the order of 10^-4 W/kg. The maximum
wbSAR, of 4.9x10^-4 W/kg, was obtained in the worst-case exposure condition
comprising more than one transmitting vehicle and was found in the adult model
for a distance within 10 m from the transmitting cars. We found that the height
of the human model highly impacted the exposure level. Namely, the child (which
is the shortest human model) was generally much less exposed than adults. All
the wbSAR values found by varying the number of transmitting antennas, the
distance of the road user from the antennas, and the type of human model (adult
vs. child) were well below the limits set by the ICNIRP and IEEE
guidelines of 0.08 W/kg for human exposure in the 100 kHz - 300 GHz range.
|
[
{
"created": "Fri, 8 Mar 2024 10:14:40 GMT",
"version": "v1"
}
] |
2024-03-11
|
[
[
"Benini",
"Martina",
""
],
[
"Gallucci",
"Silvia",
""
],
[
"Bonato",
"Marta",
""
],
[
"Parazzini",
"Marta",
""
],
[
"Tognola",
"Gabriella",
""
]
] |
This study aims to investigate the variability of exposure levels among road users generated in a realistic urban scenario by Vehicle-to-Vehicle (V2V) and Vehicle-to-Infrastructure (V2I) communication technologies operating at 5.9 GHz. The exposure levels were evaluated in terms of whole-body Specific Absorption Rate (wbSAR) [W/kg] in three different human models, ranging from children to adults. We calculated the electromagnetic field exposure level generated by V2V and V2I using ray tracing, and we assessed the resulting wbSAR in urban exposure scenarios with an increasing number of transmitting antennas. Whole-body SAR was generally very low, on the order of 10^-4 W/kg. The maximum wbSAR, of 4.9x10^-4 W/kg, was obtained in the worst-case exposure condition comprising more than one transmitting vehicle and was found in the adult model for a distance within 10 m from the transmitting cars. We found that the height of the human model highly impacted the exposure level. Namely, the child (which is the shortest human model) was generally much less exposed than adults. All the wbSAR values found by varying the number of transmitting antennas, the distance of the road user from the antennas, and the type of human model (adult vs. child) were well below the limits set by the ICNIRP and IEEE guidelines of 0.08 W/kg for human exposure in the 100 kHz - 300 GHz range.
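As a quick sanity check on the figures reported in this abstract (our own arithmetic, not part of the study), the worst-case whole-body SAR can be compared directly against the cited ICNIRP/IEEE limit:

```python
# Reported worst-case whole-body SAR vs. the ICNIRP/IEEE whole-body limit.
wb_sar_worst = 4.9e-4   # W/kg, maximum value reported in the abstract
limit = 0.08            # W/kg, ICNIRP/IEEE limit for 100 kHz - 300 GHz

margin = limit / wb_sar_worst
print(f"Worst-case exposure is about {margin:.0f}x below the limit")  # about 163x
```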
|
2408.03945
|
Kristina Schaaff
|
Kristina Schaaff and Marc-Andr\'e Heidelmann
|
Impacts of Anthropomorphizing Large Language Models in Learning
Environments
|
Presented at Affective Computing Pre-Conference at ISRE 2024
| null | null | null |
cs.CL cs.AI cs.CY cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
Large Language Models (LLMs) are increasingly being used in learning
environments to support teaching, be it as learning companions or as tutors.
With our contribution, we aim to discuss the implications of the
anthropomorphization of LLMs in learning environments on educational theory to
build a foundation for more effective learning outcomes and understand their
emotional impact on learners. According to the media equation, people tend to
respond to media in the same way as they would respond to another person. A
study conducted by the Georgia Institute of Technology showed that chatbots can
be successfully implemented in learning environments. In this study, learners
in selected online courses were unable to distinguish the chatbot from a "real"
teacher. As LLM-based chatbots such as OpenAI's GPT series are increasingly
used in educational tools, it is important to understand how the attribution
processes to LLM-based chatbots in terms of anthropomorphization affect
learners' emotions.
|
[
{
"created": "Mon, 22 Jul 2024 06:28:54 GMT",
"version": "v1"
}
] |
2024-08-09
|
[
[
"Schaaff",
"Kristina",
""
],
[
"Heidelmann",
"Marc-André",
""
]
] |
Large Language Models (LLMs) are increasingly being used in learning environments to support teaching, be it as learning companions or as tutors. With our contribution, we aim to discuss the implications of the anthropomorphization of LLMs in learning environments on educational theory to build a foundation for more effective learning outcomes and understand their emotional impact on learners. According to the media equation, people tend to respond to media in the same way as they would respond to another person. A study conducted by the Georgia Institute of Technology showed that chatbots can be successfully implemented in learning environments. In this study, learners in selected online courses were unable to distinguish the chatbot from a "real" teacher. As LLM-based chatbots such as OpenAI's GPT series are increasingly used in educational tools, it is important to understand how the attribution processes to LLM-based chatbots in terms of anthropomorphization affect learners' emotions.
|
2310.04197
|
Angel Casanova
|
\'Angel Casanova Bienzobas, Alfonso S\'anchez-Maci\'an
|
Threat Trekker: An Approach to Cyber Threat Hunting
|
I am disseminating this outcome to all of you, despite the fact that
the results may appear somewhat idealistic, given that certain datasets
utilized for the training of the machine learning model comprise simulated
data
| null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Threat hunting is a proactive methodology for exploring, detecting and
mitigating cyberattacks within complex environments. As opposed to conventional
detection systems, threat hunting strategies assume adversaries have
infiltrated the system; as a result, they proactively search out any unusual
patterns or activities which might indicate intrusion attempts.
Historically, this endeavour has been pursued using three investigation
methodologies: (1) Hypothesis-Driven Investigations; (2) Indicator of
Compromise (IOC); and (3) High-level machine learning analysis-based
approaches. This paper therefore introduces a novel machine learning paradigm
known as Threat Trekker. The proposal utilizes connectors to feed data
directly into an event streaming channel for processing by the algorithm, and
to provide feedback back into its host network.
Conclusions drawn from these experiments clearly establish the efficacy of
employing machine learning for classifying more subtle attacks.
|
[
{
"created": "Fri, 6 Oct 2023 12:29:41 GMT",
"version": "v1"
}
] |
2023-10-09
|
[
[
"Bienzobas",
"Ángel Casanova",
""
],
[
"Sánchez-Macián",
"Alfonso",
""
]
] |
Threat hunting is a proactive methodology for exploring, detecting and mitigating cyberattacks within complex environments. As opposed to conventional detection systems, threat hunting strategies assume adversaries have infiltrated the system; as a result, they proactively search out any unusual patterns or activities which might indicate intrusion attempts. Historically, this endeavour has been pursued using three investigation methodologies: (1) Hypothesis-Driven Investigations; (2) Indicator of Compromise (IOC); and (3) High-level machine learning analysis-based approaches. This paper therefore introduces a novel machine learning paradigm known as Threat Trekker. The proposal utilizes connectors to feed data directly into an event streaming channel for processing by the algorithm, and to provide feedback back into its host network. Conclusions drawn from these experiments clearly establish the efficacy of employing machine learning for classifying more subtle attacks.
|
2205.06255
|
Qianqian Wang
|
Qianqian Wang, Zhengqi Li, David Salesin, Noah Snavely, Brian Curless,
Janne Kontkanen
|
3D Moments from Near-Duplicate Photos
|
CVPR 2022
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce 3D Moments, a new computational photography effect. As input we
take a pair of near-duplicate photos, i.e., photos of moving subjects from
similar viewpoints, common in people's photo collections. As output, we produce
a video that smoothly interpolates the scene motion from the first photo to the
second, while also producing camera motion with parallax that gives a
heightened sense of 3D. To achieve this effect, we represent the scene as a
pair of feature-based layered depth images augmented with scene flow. This
representation enables motion interpolation along with independent control of
the camera viewpoint. Our system produces photorealistic space-time videos with
motion parallax and scene dynamics, while plausibly recovering regions occluded
in the original views. We conduct extensive experiments demonstrating superior
performance over baselines on public datasets and in-the-wild photos. Project
page: https://3d-moments.github.io/
|
[
{
"created": "Thu, 12 May 2022 17:56:18 GMT",
"version": "v1"
}
] |
2022-05-13
|
[
[
"Wang",
"Qianqian",
""
],
[
"Li",
"Zhengqi",
""
],
[
"Salesin",
"David",
""
],
[
"Snavely",
"Noah",
""
],
[
"Curless",
"Brian",
""
],
[
"Kontkanen",
"Janne",
""
]
] |
We introduce 3D Moments, a new computational photography effect. As input we take a pair of near-duplicate photos, i.e., photos of moving subjects from similar viewpoints, common in people's photo collections. As output, we produce a video that smoothly interpolates the scene motion from the first photo to the second, while also producing camera motion with parallax that gives a heightened sense of 3D. To achieve this effect, we represent the scene as a pair of feature-based layered depth images augmented with scene flow. This representation enables motion interpolation along with independent control of the camera viewpoint. Our system produces photorealistic space-time videos with motion parallax and scene dynamics, while plausibly recovering regions occluded in the original views. We conduct extensive experiments demonstrating superior performance over baselines on public datasets and in-the-wild photos. Project page: https://3d-moments.github.io/
|
2407.15373
|
Dizhi Ma
|
Dizhi Ma, Xiyun Hu, Jingyu Shi, Mayank Patel, Rahul Jain, Ziyi Liu,
Zhengzhe Zhu and Karthik Ramani
|
avaTTAR: Table Tennis Stroke Training with On-body and Detached
Visualization in Augmented Reality
| null | null | null | null |
cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
Table tennis stroke training is a critical aspect of player development. We
designed a new augmented reality (AR) system, avaTTAR, for table tennis stroke
training. The system provides both "on-body" (first-person view) and "detached"
(third-person view) visual cues, enabling users to visualize target strokes and
correct their attempts effectively with this dual-perspective setup. By
employing a combination of pose estimation algorithms and IMU sensors, avaTTAR
captures and reconstructs the 3D body pose and paddle orientation of users
during practice, allowing real-time comparison with expert strokes. Through a
user study, we affirm avaTTAR's capacity to amplify player experience and
training results.
|
[
{
"created": "Mon, 22 Jul 2024 04:47:16 GMT",
"version": "v1"
},
{
"created": "Fri, 26 Jul 2024 15:13:46 GMT",
"version": "v2"
}
] |
2024-07-29
|
[
[
"Ma",
"Dizhi",
""
],
[
"Hu",
"Xiyun",
""
],
[
"Shi",
"Jingyu",
""
],
[
"Patel",
"Mayank",
""
],
[
"Jain",
"Rahul",
""
],
[
"Liu",
"Ziyi",
""
],
[
"Zhu",
"Zhengzhe",
""
],
[
"Ramani",
"Karthik",
""
]
] |
Table tennis stroke training is a critical aspect of player development. We designed a new augmented reality (AR) system, avaTTAR, for table tennis stroke training. The system provides both "on-body" (first-person view) and "detached" (third-person view) visual cues, enabling users to visualize target strokes and correct their attempts effectively with this dual-perspective setup. By employing a combination of pose estimation algorithms and IMU sensors, avaTTAR captures and reconstructs the 3D body pose and paddle orientation of users during practice, allowing real-time comparison with expert strokes. Through a user study, we affirm avaTTAR's capacity to amplify player experience and training results.
|
2005.13102
|
Bangalore Ravi Kiran
|
Leonardo Gigli, B Ravi Kiran, Thomas Paul, Andres Serna, Nagarjuna
Vemuri, Beatriz Marcotegui, Santiago Velasco-Forero
|
Road Segmentation on low resolution Lidar point clouds for autonomous
vehicles
|
ISPRS 2020
| null | null | null |
cs.CV stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Point cloud datasets for perception tasks in the context of autonomous
driving often rely on high resolution 64-layer Light Detection and Ranging
(LIDAR) scanners. They are expensive to deploy on real-world autonomous driving
sensor architectures which usually employ 16/32 layer LIDARs. We evaluate the
effect of subsampling image based representations of dense point clouds on the
accuracy of the road segmentation task. In our experiments the low resolution
16/32 layer LIDAR point clouds are simulated by subsampling the original 64
layer data, for subsequent transformation into a feature map in the
Bird-Eye-View (BEV) and SphericalView (SV) representations of the point cloud.
We introduce the usage of the local normal vector with the LIDAR's spherical
coordinates as an input channel to existing LoDNN architectures. We demonstrate
that this local normal feature in conjunction with classical features not only
improves performance for binary road segmentation on full resolution point
clouds, but it also reduces the negative impact on the accuracy when
subsampling dense point clouds as compared to the usage of classical features
alone. We assess our method with several experiments on two datasets: KITTI
Road-segmentation benchmark and the recently released Semantic KITTI dataset.
|
[
{
"created": "Wed, 27 May 2020 00:38:39 GMT",
"version": "v1"
}
] |
2020-05-28
|
[
[
"Gigli",
"Leonardo",
""
],
[
"Kiran",
"B Ravi",
""
],
[
"Paul",
"Thomas",
""
],
[
"Serna",
"Andres",
""
],
[
"Vemuri",
"Nagarjuna",
""
],
[
"Marcotegui",
"Beatriz",
""
],
[
"Velasco-Forero",
"Santiago",
""
]
] |
Point cloud datasets for perception tasks in the context of autonomous driving often rely on high resolution 64-layer Light Detection and Ranging (LIDAR) scanners. They are expensive to deploy on real-world autonomous driving sensor architectures which usually employ 16/32 layer LIDARs. We evaluate the effect of subsampling image based representations of dense point clouds on the accuracy of the road segmentation task. In our experiments the low resolution 16/32 layer LIDAR point clouds are simulated by subsampling the original 64 layer data, for subsequent transformation into a feature map in the Bird-Eye-View (BEV) and SphericalView (SV) representations of the point cloud. We introduce the usage of the local normal vector with the LIDAR's spherical coordinates as an input channel to existing LoDNN architectures. We demonstrate that this local normal feature in conjunction with classical features not only improves performance for binary road segmentation on full resolution point clouds, but it also reduces the negative impact on the accuracy when subsampling dense point clouds as compared to the usage of classical features alone. We assess our method with several experiments on two datasets: KITTI Road-segmentation benchmark and the recently released Semantic KITTI dataset.
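The subsampling idea in this abstract, simulating a 16/32-layer LIDAR from 64-layer data, can be sketched as keeping evenly spaced scan rings. This is our own minimal illustration (array shapes and the every-k-th-ring scheme are assumptions, not the authors' exact procedure):

```python
import numpy as np

def subsample_rings(scan: np.ndarray, target_beams: int) -> np.ndarray:
    """Simulate a lower-resolution LIDAR by keeping evenly spaced rings
    (rows) of a dense range image organized as one row per beam."""
    step = scan.shape[0] // target_beams
    return scan[::step][:target_beams]

# Hypothetical 64-beam scan: 64 rings x 512 azimuth bins of range values.
scan_64 = np.random.rand(64, 512)

scan_32 = subsample_rings(scan_64, 32)   # simulated 32-layer sensor
scan_16 = subsample_rings(scan_64, 16)   # simulated 16-layer sensor
print(scan_32.shape, scan_16.shape)      # (32, 512) (16, 512)
```

The subsampled arrays can then be projected into BEV or spherical-view feature maps exactly as the full-resolution data would be.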
|
2105.05026
|
Chongxuan Li
|
Guoqiang Wu, Chongxuan Li, Kun Xu, Jun Zhu
|
Rethinking and Reweighting the Univariate Losses for Multi-Label
Ranking: Consistency and Generalization
| null | null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
(Partial) ranking loss is a commonly used evaluation measure for multi-label
classification, which is usually optimized with convex surrogates for
computational efficiency. Prior theoretical work on multi-label ranking mainly
focuses on (Fisher) consistency analyses. However, there is a gap between
existing theory and practice -- some pairwise losses can lead to promising
performance but lack consistency, while some univariate losses are consistent
but usually have no clear superiority in practice. In this paper, we attempt to
fill this gap through a systematic study from two complementary perspectives of
consistency and generalization error bounds of learning algorithms. Our results
show that learning algorithms with the consistent univariate loss have an error
bound of $O(c)$ ($c$ is the number of labels), while algorithms with the
inconsistent pairwise loss depend on $O(\sqrt{c})$ as shown in prior work. This
explains that the latter can achieve better performance than the former in
practice. Moreover, we present an inconsistent reweighted univariate loss-based
learning algorithm that enjoys an error bound of $O(\sqrt{c})$ for promising
performance as well as the computational efficiency of univariate losses.
Finally, experimental results validate our theoretical analyses.
|
[
{
"created": "Mon, 10 May 2021 09:23:27 GMT",
"version": "v1"
}
] |
2021-05-12
|
[
[
"Wu",
"Guoqiang",
""
],
[
"Li",
"Chongxuan",
""
],
[
"Xu",
"Kun",
""
],
[
"Zhu",
"Jun",
""
]
] |
(Partial) ranking loss is a commonly used evaluation measure for multi-label classification, which is usually optimized with convex surrogates for computational efficiency. Prior theoretical work on multi-label ranking mainly focuses on (Fisher) consistency analyses. However, there is a gap between existing theory and practice -- some pairwise losses can lead to promising performance but lack consistency, while some univariate losses are consistent but usually have no clear superiority in practice. In this paper, we attempt to fill this gap through a systematic study from two complementary perspectives of consistency and generalization error bounds of learning algorithms. Our results show that learning algorithms with the consistent univariate loss have an error bound of $O(c)$ ($c$ is the number of labels), while algorithms with the inconsistent pairwise loss depend on $O(\sqrt{c})$ as shown in prior work. This explains that the latter can achieve better performance than the former in practice. Moreover, we present an inconsistent reweighted univariate loss-based learning algorithm that enjoys an error bound of $O(\sqrt{c})$ for promising performance as well as the computational efficiency of univariate losses. Finally, experimental results validate our theoretical analyses.
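The (partial) ranking loss that the surrogates in this abstract approximate counts mis-ordered (relevant, irrelevant) label pairs for each example. A minimal sketch of that evaluation measure (our own illustration, not the paper's code):

```python
import numpy as np

def ranking_loss(scores: np.ndarray, labels: np.ndarray) -> float:
    """Fraction of (relevant, irrelevant) label pairs that are mis-ordered,
    i.e. where the irrelevant label scores at least as high as the relevant one."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    if len(pos) == 0 or len(neg) == 0:
        return 0.0
    # Compare every relevant score against every irrelevant score.
    mis_ordered = (pos[:, None] <= neg[None, :]).sum()
    return mis_ordered / (len(pos) * len(neg))

scores = np.array([0.9, 0.2, 0.7, 0.4])   # predicted scores for c = 4 labels
labels = np.array([1, 0, 1, 0])           # ground-truth relevant labels
print(ranking_loss(scores, labels))       # 0.0: every relevant label outranks every irrelevant one
```

Since this 0-1 pairwise count is non-convex, it is what convex pairwise or univariate surrogates stand in for during training.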
|
2403.14623
|
Zhicong Tang
|
Zhicong Tang, Tiankai Hang, Shuyang Gu, Dong Chen, Baining Guo
|
Simplified Diffusion Schr\"odinger Bridge
| null | null | null | null |
cs.LG cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper introduces a novel theoretical simplification of the Diffusion
Schr\"odinger Bridge (DSB) that facilitates its unification with Score-based
Generative Models (SGMs), addressing the limitations of DSB in complex data
generation and enabling faster convergence and enhanced performance. By
employing SGMs as an initial solution for DSB, our approach capitalizes on the
strengths of both frameworks, ensuring a more efficient training process and
improving the performance of SGM. We also propose a reparameterization
technique that, despite theoretical approximations, practically improves the
network's fitting capabilities. Our extensive experimental evaluations confirm
the effectiveness of the simplified DSB, demonstrating its significant
improvements. We believe the contributions of this work pave the way for
advanced generative modeling.
|
[
{
"created": "Thu, 21 Mar 2024 17:59:41 GMT",
"version": "v1"
},
{
"created": "Wed, 27 Mar 2024 16:49:35 GMT",
"version": "v2"
},
{
"created": "Mon, 27 May 2024 04:44:22 GMT",
"version": "v3"
},
{
"created": "Tue, 13 Aug 2024 04:34:58 GMT",
"version": "v4"
}
] |
2024-08-14
|
[
[
"Tang",
"Zhicong",
""
],
[
"Hang",
"Tiankai",
""
],
[
"Gu",
"Shuyang",
""
],
[
"Chen",
"Dong",
""
],
[
"Guo",
"Baining",
""
]
] |
This paper introduces a novel theoretical simplification of the Diffusion Schr\"odinger Bridge (DSB) that facilitates its unification with Score-based Generative Models (SGMs), addressing the limitations of DSB in complex data generation and enabling faster convergence and enhanced performance. By employing SGMs as an initial solution for DSB, our approach capitalizes on the strengths of both frameworks, ensuring a more efficient training process and improving the performance of SGM. We also propose a reparameterization technique that, despite theoretical approximations, practically improves the network's fitting capabilities. Our extensive experimental evaluations confirm the effectiveness of the simplified DSB, demonstrating its significant improvements. We believe the contributions of this work pave the way for advanced generative modeling.
|
1511.04376
|
Martin Reisslein
|
Akhilesh Thyagaturu, Anu Mercian, Michael P. McGarry, Martin
Reisslein, Wolfgang Kellerer
|
Software Defined Optical Networks (SDONs): A Comprehensive Survey
| null |
IEEE Communications Surveys & Tutorials, vol. 18, no. 4, pp.
2738-2786, 4th Qu. 2016
|
10.1109/COMST.2016.2586999
| null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The emerging Software Defined Networking (SDN) paradigm separates the data
plane from the control plane and centralizes network control in an SDN
controller. Applications interact with controllers to implement network
services, such as network transport with Quality of Service (QoS). SDN
facilitates the virtualization of network functions so that multiple virtual
networks can operate over a given installed physical network infrastructure.
Due to the specific characteristics of optical (photonic) communication
components and the high optical transmission capacities, SDN based optical
networking poses particular challenges, but holds also great potential. In this
article, we comprehensively survey studies that examine the SDN paradigm in
optical networks; in brief, we survey the area of Software Defined Optical
Networks (SDONs). We mainly organize the SDON studies into studies focused on
the infrastructure layer, the control layer, and the application layer.
Moreover, we cover SDON studies focused on network virtualization, as well as
SDON studies focused on the orchestration of multilayer and multidomain
networking. Based on the survey, we identify open challenges for SDONs and
outline future directions.
|
[
{
"created": "Fri, 13 Nov 2015 17:31:10 GMT",
"version": "v1"
},
{
"created": "Thu, 26 May 2016 10:19:58 GMT",
"version": "v2"
},
{
"created": "Sun, 17 Jul 2016 07:46:47 GMT",
"version": "v3"
}
] |
2016-11-29
|
[
[
"Thyagaturu",
"Akhilesh",
""
],
[
"Mercian",
"Anu",
""
],
[
"McGarry",
"Michael P.",
""
],
[
"Reisslein",
"Martin",
""
],
[
"Kellerer",
"Wolfgang",
""
]
] |
The emerging Software Defined Networking (SDN) paradigm separates the data plane from the control plane and centralizes network control in an SDN controller. Applications interact with controllers to implement network services, such as network transport with Quality of Service (QoS). SDN facilitates the virtualization of network functions so that multiple virtual networks can operate over a given installed physical network infrastructure. Due to the specific characteristics of optical (photonic) communication components and the high optical transmission capacities, SDN based optical networking poses particular challenges, but holds also great potential. In this article, we comprehensively survey studies that examine the SDN paradigm in optical networks; in brief, we survey the area of Software Defined Optical Networks (SDONs). We mainly organize the SDON studies into studies focused on the infrastructure layer, the control layer, and the application layer. Moreover, we cover SDON studies focused on network virtualization, as well as SDON studies focused on the orchestration of multilayer and multidomain networking. Based on the survey, we identify open challenges for SDONs and outline future directions.
|
2306.02346
|
Shuo Ye
|
Shuo Ye and Yufeng Shi and Ruxin Wang and Yu Wang and Jiamiao Xu and
Chuanwu Yang and Xinge You
|
CDLT: A Dataset with Concept Drift and Long-Tailed Distribution for
Fine-Grained Visual Categorization
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Data is the foundation for the development of computer vision, and the
establishment of datasets plays an important role in advancing the techniques
of fine-grained visual categorization~(FGVC). In the existing FGVC datasets
used in computer vision, it is generally assumed that each collected instance
has fixed characteristics and the distribution of different categories is
relatively balanced. In contrast, the real-world scenario reveals that
the characteristics of instances tend to vary with time and exhibit a
long-tailed distribution. Hence, the collected datasets may mislead the
optimization of the fine-grained classifiers, resulting in unpleasant
performance in real applications. Starting from the real-world conditions and
to promote the practical progress of fine-grained visual categorization, we
present a Concept Drift and Long-Tailed Distribution dataset. Specifically, the
dataset is collected by gathering 11195 images of 250 instances in different
species for 47 consecutive months in their natural contexts. The collection
process involves dozens of crowd workers for photographing and domain experts
for labelling. Extensive baseline experiments using the state-of-the-art
fine-grained classification models demonstrate the issues of concept drift and
long-tailed distribution that exist in the dataset, which require the attention
of future research.
|
[
{
"created": "Sun, 4 Jun 2023 12:42:45 GMT",
"version": "v1"
}
] |
2023-06-06
|
[
[
"Ye",
"Shuo",
""
],
[
"Shi",
"Yufeng",
""
],
[
"Wang",
"Ruxin",
""
],
[
"Wang",
"Yu",
""
],
[
"Xu",
"Jiamiao",
""
],
[
"Yang",
"Chuanwu",
""
],
[
"You",
"Xinge",
""
]
] |
Data is the foundation for the development of computer vision, and the establishment of datasets plays an important role in advancing the techniques of fine-grained visual categorization~(FGVC). In the existing FGVC datasets used in computer vision, it is generally assumed that each collected instance has fixed characteristics and the distribution of different categories is relatively balanced. In contrast, the real-world scenario reveals that the characteristics of instances tend to vary with time and exhibit a long-tailed distribution. Hence, the collected datasets may mislead the optimization of the fine-grained classifiers, resulting in unpleasant performance in real applications. Starting from the real-world conditions and to promote the practical progress of fine-grained visual categorization, we present a Concept Drift and Long-Tailed Distribution dataset. Specifically, the dataset is collected by gathering 11195 images of 250 instances in different species for 47 consecutive months in their natural contexts. The collection process involves dozens of crowd workers for photographing and domain experts for labelling. Extensive baseline experiments using the state-of-the-art fine-grained classification models demonstrate the issues of concept drift and long-tailed distribution that exist in the dataset, which require the attention of future research.
|
2303.06545
|
Hao Zhou
|
Hao Zhou, Chongyang Zhang, Yanjun Chen, Chuanping Hu
|
Towards Diverse Temporal Grounding under Single Positive Labels
|
The source codes are available at
https://github.com/zhouhaocv/DTG-SPL
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Temporal grounding aims to retrieve moments of the described event within an
untrimmed video by a language query. Typically, existing methods assume
annotations are precise and unique, yet one query may describe multiple moments
in many cases. Hence, simply taking it as a one-vs-one mapping task and
striving to match single-label annotations will inevitably introduce false
negatives during optimization. In this study, we reformulate this task as a
one-vs-many optimization problem under the condition of single positive labels.
The unlabeled moments are considered unobserved rather than negative, and we
explore mining potential positive moments to assist in multiple moment
retrieval. In this setting, we propose a novel Diverse Temporal Grounding
framework, termed DTG-SPL, which mainly consists of a positive moment
estimation (PME) module and a diverse moment regression (DMR) module. PME
leverages semantic reconstruction information and an expected positive
regularization to uncover potential positive moments in an online fashion.
Under the supervision of these pseudo positives, DMR is able to localize
diverse moments in parallel that meet different users. The entire framework
allows for end-to-end optimization as well as fast inference. Extensive
experiments on Charades-STA and ActivityNet Captions show that our method
achieves superior performance in terms of both single-label and multi-label
metrics.
|
[
{
"created": "Sun, 12 Mar 2023 02:54:18 GMT",
"version": "v1"
}
] |
2023-03-14
|
[
[
"Zhou",
"Hao",
""
],
[
"Zhang",
"Chongyang",
""
],
[
"Chen",
"Yanjun",
""
],
[
"Hu",
"Chuanping",
""
]
] |
Temporal grounding aims to retrieve moments of the described event within an untrimmed video by a language query. Typically, existing methods assume annotations are precise and unique, yet one query may describe multiple moments in many cases. Hence, simply taking it as a one-vs-one mapping task and striving to match single-label annotations will inevitably introduce false negatives during optimization. In this study, we reformulate this task as a one-vs-many optimization problem under the condition of single positive labels. The unlabeled moments are considered unobserved rather than negative, and we explore mining potential positive moments to assist in multiple moment retrieval. In this setting, we propose a novel Diverse Temporal Grounding framework, termed DTG-SPL, which mainly consists of a positive moment estimation (PME) module and a diverse moment regression (DMR) module. PME leverages semantic reconstruction information and an expected positive regularization to uncover potential positive moments in an online fashion. Under the supervision of these pseudo positives, DMR is able to localize diverse moments in parallel that meet different users. The entire framework allows for end-to-end optimization as well as fast inference. Extensive experiments on Charades-STA and ActivityNet Captions show that our method achieves superior performance in terms of both single-label and multi-label metrics.
|
2110.15919
|
Pranay Bhardwaj
|
Vinay U. Pai, Pranay Bhardwaj, and S. M. Zafaruddin
|
Performance Analysis of Dual-Hop THz Wireless Transmission for Backhaul
Applications
|
This paper has been accepted for presentation in 2021 IEEE
International Conference on Advanced Networks and Telecommunications Systems
(ANTS), Hyderabad, India
| null | null | null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
THz transmissions suffer from pointing errors due to antenna misalignment and
incur higher path loss from the molecular absorption in addition to the channel
fading. In this paper, we employ an amplify-and-forward (AF) dual-hop relaying
to mitigate the effect of pointing errors and extend the range of the THz
wireless system for backhaul connectivity. We provide statistical analysis on
the performance of the considered system by deriving analytical expressions for
the outage probability, average bit-error-rate (BER), average signal-to-noise
ratio (SNR), and a lower bound on the ergodic capacity over independent and
identically distributed (i.i.d.) $\alpha$-$\mu$ fading combined with the
statistical effect of
pointing errors. Using computer simulations, we validate the derived analysis
of the relay-assisted system. We also demonstrate the effect of the system
parameters on outage probability and average BER with the help of diversity
order. We show that data rates up to several \mbox{Gbps} can be achieved using
THz transmissions, which is desirable for next-generation wireless systems,
especially for backhaul applications.
|
[
{
"created": "Fri, 29 Oct 2021 17:15:38 GMT",
"version": "v1"
},
{
"created": "Sun, 21 Nov 2021 19:24:23 GMT",
"version": "v2"
}
] |
2021-11-23
|
[
[
"Pai",
"Vinay U.",
""
],
[
"Bhardwaj",
"Pranay",
""
],
[
"Zafaruddin",
"S. M.",
""
]
] |
THz transmissions suffer from pointing errors due to antenna misalignment and incur higher path loss from the molecular absorption in addition to the channel fading. In this paper, we employ an amplify-and-forward (AF) dual-hop relaying to mitigate the effect of pointing errors and extend the range of the THz wireless system for backhaul connectivity. We provide statistical analysis on the performance of the considered system by deriving analytical expressions for the outage probability, average bit-error-rate (BER), average signal-to-noise ratio (SNR), and a lower bound on the ergodic capacity over independent and identically distributed (i.i.d.) $\alpha$-$\mu$ fading combined with the statistical effect of pointing errors. Using computer simulations, we validate the derived analysis of the relay-assisted system. We also demonstrate the effect of the system parameters on outage probability and average BER with the help of diversity order. We show that data rates up to several \mbox{Gbps} can be achieved using THz transmissions, which is desirable for next-generation wireless systems, especially for backhaul applications.
|
2202.02071
|
Henrique Moniz
|
Afonso Oliveira, Henrique Moniz, Rodrigo Rodrigues
|
Alea-BFT: Practical Asynchronous Byzantine Fault Tolerance
| null | null | null | null |
cs.DC
|
http://creativecommons.org/licenses/by/4.0/
|
Traditional Byzantine Fault Tolerance (BFT) state machine replication
protocols assume a partial synchrony model, leading to a design where a leader
replica drives the protocol and is replaced after a timeout. Recently, we
witnessed a surge of asynchronous BFT protocols that use randomization to
remove the assumptions of bounds on message delivery times, making them more
resilient to adverse network conditions. However, these protocols still fall
short of being practical across a broad range of scenarios due to their cubic
communication costs, use of expensive primitives, and overall protocol
complexity. In this paper, we present Alea-BFT, the first asynchronous BFT
protocol to achieve quadratic communication complexity, allowing it to scale to
large networks. Alea-BFT brings the key design insight from classical protocols
of concentrating part of the work on a single designated replica, and
incorporates this principle in a two stage pipelined design, with an efficient
broadcast led by the designated replica followed by an inexpensive binary
agreement. We evaluated our prototype implementation across 10 sites in 4
continents, and our results show significant scalability gains from the
proposed design.
|
[
{
"created": "Fri, 4 Feb 2022 10:53:37 GMT",
"version": "v1"
}
] |
2022-02-07
|
[
[
"Oliveira",
"Afonso",
""
],
[
"Moniz",
"Henrique",
""
],
[
"Rodrigues",
"Rodrigo",
""
]
] |
Traditional Byzantine Fault Tolerance (BFT) state machine replication protocols assume a partial synchrony model, leading to a design where a leader replica drives the protocol and is replaced after a timeout. Recently, we witnessed a surge of asynchronous BFT protocols that use randomization to remove the assumptions of bounds on message delivery times, making them more resilient to adverse network conditions. However, these protocols still fall short of being practical across a broad range of scenarios due to their cubic communication costs, use of expensive primitives, and overall protocol complexity. In this paper, we present Alea-BFT, the first asynchronous BFT protocol to achieve quadratic communication complexity, allowing it to scale to large networks. Alea-BFT brings the key design insight from classical protocols of concentrating part of the work on a single designated replica, and incorporates this principle in a two stage pipelined design, with an efficient broadcast led by the designated replica followed by an inexpensive binary agreement. We evaluated our prototype implementation across 10 sites in 4 continents, and our results show significant scalability gains from the proposed design.
|
1405.2066
|
Jehad Al Dallal Prof.
|
Jehad Al Dallal
|
How and When to Flatten Java Classes?
| null | null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Improving modularity and reusability are two key objectives in
object-oriented programming. These objectives are achieved by applying several
key concepts, such as data encapsulation and inheritance. A class in an
object-oriented system is the basic unit of design. Assessing the quality of an
object-oriented class may require flattening the class and representing it as
it really is, including all accessible inherited class members. Thus, class
flattening helps in exploring the impact of inheritance on improving code
quality. This paper explains how to flatten Java classes and discusses the
relationship between class flattening and some applications of interest to
software practitioners, such as refactoring and indicating external quality
attributes.
|
[
{
"created": "Thu, 8 May 2014 19:48:33 GMT",
"version": "v1"
}
] |
2014-05-09
|
[
[
"Dallal",
"Jehad Al",
""
]
] |
Improving modularity and reusability are two key objectives in object-oriented programming. These objectives are achieved by applying several key concepts, such as data encapsulation and inheritance. A class in an object-oriented system is the basic unit of design. Assessing the quality of an object-oriented class may require flattening the class and representing it as it really is, including all accessible inherited class members. Thus, class flattening helps in exploring the impact of inheritance on improving code quality. This paper explains how to flatten Java classes and discusses the relationship between class flattening and some applications of interest to software practitioners, such as refactoring and indicating external quality attributes.
|
1406.3583
|
Aaron D. Jaggard
|
Aaron D. Jaggard, Aaron Johnson, Paul Syverson, and Joan Feigenbaum
|
Representing Network Trust and Using It to Improve Anonymous
Communication
|
24 pages; talk to be presented at HotPETs 2014
| null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Motivated by the effectiveness of correlation attacks against Tor, the
censorship arms race, and observations of malicious relays in Tor, we propose
that Tor users capture their trust in network elements using probability
distributions over the sets of elements observed by network adversaries. We
present a modular system that allows users to efficiently and conveniently
create such distributions and use them to improve their security. The major
components of this system are (i) an ontology of network-element types that
represents the main threats to and vulnerabilities of anonymous communication
over Tor, (ii) a formal language that allows users to naturally express trust
beliefs about network elements, and (iii) a conversion procedure that takes the
ontology, public information about the network, and user beliefs written in the
trust language and produces a Bayesian Belief Network that represents the
probability distribution in a way that is concise and easily sampleable. We
also present preliminary experimental results that show the distribution
produced by our system can improve security when employed by users; further
improvement is seen when the system is employed by both users and services.
|
[
{
"created": "Fri, 13 Jun 2014 16:23:12 GMT",
"version": "v1"
}
] |
2014-06-16
|
[
[
"Jaggard",
"Aaron D.",
""
],
[
"Johnson",
"Aaron",
""
],
[
"Syverson",
"Paul",
""
],
[
"Feigenbaum",
"Joan",
""
]
] |
Motivated by the effectiveness of correlation attacks against Tor, the censorship arms race, and observations of malicious relays in Tor, we propose that Tor users capture their trust in network elements using probability distributions over the sets of elements observed by network adversaries. We present a modular system that allows users to efficiently and conveniently create such distributions and use them to improve their security. The major components of this system are (i) an ontology of network-element types that represents the main threats to and vulnerabilities of anonymous communication over Tor, (ii) a formal language that allows users to naturally express trust beliefs about network elements, and (iii) a conversion procedure that takes the ontology, public information about the network, and user beliefs written in the trust language and produces a Bayesian Belief Network that represents the probability distribution in a way that is concise and easily sampleable. We also present preliminary experimental results that show the distribution produced by our system can improve security when employed by users; further improvement is seen when the system is employed by both users and services.
|
1702.06235
|
Will Radford
|
Andrew Chisholm, Will Radford, Ben Hachey
|
Learning to generate one-sentence biographies from Wikidata
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
We investigate the generation of one-sentence Wikipedia biographies from
facts derived from Wikidata slot-value pairs. We train a recurrent neural
network sequence-to-sequence model with attention to select facts and generate
textual summaries. Our model incorporates a novel secondary objective that
helps ensure it generates sentences that contain the input facts. The model
achieves a BLEU score of 41, improving significantly upon the vanilla
sequence-to-sequence model and scoring roughly twice that of a simple template
baseline. Human preference evaluation suggests the model is nearly as good as
the Wikipedia reference. Manual analysis explores content selection, suggesting
the model can trade the ability to infer knowledge against the risk of
hallucinating incorrect information.
|
[
{
"created": "Tue, 21 Feb 2017 01:30:59 GMT",
"version": "v1"
}
] |
2017-02-22
|
[
[
"Chisholm",
"Andrew",
""
],
[
"Radford",
"Will",
""
],
[
"Hachey",
"Ben",
""
]
] |
We investigate the generation of one-sentence Wikipedia biographies from facts derived from Wikidata slot-value pairs. We train a recurrent neural network sequence-to-sequence model with attention to select facts and generate textual summaries. Our model incorporates a novel secondary objective that helps ensure it generates sentences that contain the input facts. The model achieves a BLEU score of 41, improving significantly upon the vanilla sequence-to-sequence model and scoring roughly twice that of a simple template baseline. Human preference evaluation suggests the model is nearly as good as the Wikipedia reference. Manual analysis explores content selection, suggesting the model can trade the ability to infer knowledge against the risk of hallucinating incorrect information.
|
2402.11173
|
Andrew Lowy
|
Andrew Lowy, Jonathan Ullman, Stephen J. Wright
|
How to Make the Gradients Small Privately: Improved Rates for
Differentially Private Non-Convex Optimization
| null | null | null | null |
cs.LG cs.CR math.OC
|
http://creativecommons.org/licenses/by/4.0/
|
We provide a simple and flexible framework for designing differentially
private algorithms to find approximate stationary points of non-convex loss
functions. Our framework is based on using a private approximate risk minimizer
to "warm start" another private algorithm for finding stationary points. We use
this framework to obtain improved, and sometimes optimal, rates for several
classes of non-convex loss functions. First, we obtain improved rates for
finding stationary points of smooth non-convex empirical loss functions.
Second, we specialize to quasar-convex functions, which generalize star-convex
functions and arise in learning dynamical systems and training some neural
nets. We achieve the optimal rate for this class. Third, we give an optimal
algorithm for finding stationary points of functions satisfying the
Kurdyka-Lojasiewicz (KL) condition. For example, over-parameterized neural
networks often satisfy this condition. Fourth, we provide new state-of-the-art
rates for stationary points of non-convex population loss functions. Fifth, we
obtain improved rates for non-convex generalized linear models. A modification
of our algorithm achieves nearly the same rates for second-order stationary
points of functions with Lipschitz Hessian, improving over the previous
state-of-the-art for each of the above problems.
|
[
{
"created": "Sat, 17 Feb 2024 02:42:56 GMT",
"version": "v1"
}
] |
2024-02-20
|
[
[
"Lowy",
"Andrew",
""
],
[
"Ullman",
"Jonathan",
""
],
[
"Wright",
"Stephen J.",
""
]
] |
We provide a simple and flexible framework for designing differentially private algorithms to find approximate stationary points of non-convex loss functions. Our framework is based on using a private approximate risk minimizer to "warm start" another private algorithm for finding stationary points. We use this framework to obtain improved, and sometimes optimal, rates for several classes of non-convex loss functions. First, we obtain improved rates for finding stationary points of smooth non-convex empirical loss functions. Second, we specialize to quasar-convex functions, which generalize star-convex functions and arise in learning dynamical systems and training some neural nets. We achieve the optimal rate for this class. Third, we give an optimal algorithm for finding stationary points of functions satisfying the Kurdyka-Lojasiewicz (KL) condition. For example, over-parameterized neural networks often satisfy this condition. Fourth, we provide new state-of-the-art rates for stationary points of non-convex population loss functions. Fifth, we obtain improved rates for non-convex generalized linear models. A modification of our algorithm achieves nearly the same rates for second-order stationary points of functions with Lipschitz Hessian, improving over the previous state-of-the-art for each of the above problems.
|
2401.11946
|
Jiajun Liu
|
Jiajun Liu, Lina Tan, Zhili Zhou, Yi Li, Peng Chen
|
A Dynamic YOLO-Based Sequence-Matching Model for Efficient Coverless
Image Steganography
| null | null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Many existing coverless steganography methods establish a mapping
relationship between cover images and hidden data. There exists an issue that
the number of images stored in the database grows exponentially as the
steganographic capacity rises. The need for a high steganographic capacity
makes it challenging to build an image database. To improve the image library
utilization and anti-attack capability of the steganography system, we present
an efficient coverless scheme based on dynamically matched substrings. YOLO is
employed for selecting optimal objects, and a mapping dictionary is established
between these objects and scrambling factors. With the aid of this dictionary,
each image is effectively assigned to a specific scrambling factor, which is
used to scramble the receiver's sequence key. To achieve sufficient
steganography capability based on a limited image library, all substrings of
the scrambled sequences hold the potential to hide data. After completing the
secret information matching, the ideal number of stego images will be obtained
from the database. According to experimental results, this technology
outperforms most previous works on data load, transmission security, and hiding
capacity. Under typical geometric attacks, it can recover 79.85\% of secret
information on average. Furthermore, only approximately 200 random images are
needed to meet a capacity of 19 bits per image.
|
[
{
"created": "Mon, 22 Jan 2024 13:35:27 GMT",
"version": "v1"
}
] |
2024-01-23
|
[
[
"Liu",
"Jiajun",
""
],
[
"Tan",
"Lina",
""
],
[
"Zhou",
"Zhili",
""
],
[
"Li",
"Yi",
""
],
[
"Chen",
"Peng",
""
]
] |
Many existing coverless steganography methods establish a mapping relationship between cover images and hidden data. There exists an issue that the number of images stored in the database grows exponentially as the steganographic capacity rises. The need for a high steganographic capacity makes it challenging to build an image database. To improve the image library utilization and anti-attack capability of the steganography system, we present an efficient coverless scheme based on dynamically matched substrings. YOLO is employed for selecting optimal objects, and a mapping dictionary is established between these objects and scrambling factors. With the aid of this dictionary, each image is effectively assigned to a specific scrambling factor, which is used to scramble the receiver's sequence key. To achieve sufficient steganography capability based on a limited image library, all substrings of the scrambled sequences hold the potential to hide data. After completing the secret information matching, the ideal number of stego images will be obtained from the database. According to experimental results, this technology outperforms most previous works on data load, transmission security, and hiding capacity. Under typical geometric attacks, it can recover 79.85\% of secret information on average. Furthermore, only approximately 200 random images are needed to meet a capacity of 19 bits per image.
|
2203.10837
|
Marcos Faundez-Zanuy
|
K. L\'opez-de-Ipi\~na, Marcos Faundez-Zanuy, Jordi Sol\'e-Casals,
Fernando Zelarin, Pilar Calvo
|
Multi-class versus One-class classifier in spontaneous speech analysis
oriented to Alzheimer Disease diagnosis
|
10 pages, published in International Conference on NONLINEAR SPEECH
PROCESSING, NOLISP 2015 jointly organized with the 25th Italian Workshop on
Neural Networks, WIRN 2015, held at May 2015, Vietri sul Mare, Salerno, Italy
|
Recent Advances in Nonlinear Speech Processing. Smart Innovation,
Systems and Technologies, vol 48. Springer, Cham 2015
|
10.1007/978-3-319-28109-4_7
| null |
cs.SD cs.LG eess.AS
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Most medical developments require the ability to identify samples that are
anomalous with respect to a target group or control group, in the sense that
they could belong to a new, previously unseen class or are not class data. When
there are not enough data to train a two-class model, One-class classification
appears to be an available solution. On the other hand, non-linear approaches
could give very useful information. The aim of our project is to contribute to
the earlier diagnosis of AD and better estimates of its severity by using
automatic analysis performed through new biomarkers extracted from the speech
signal. The methods selected in this case are speech biomarkers oriented to
Spontaneous Speech and Emotional Response Analysis. In this approach, both
One-class and two-class classifiers are analyzed. The use of information about
outliers and Fractal Dimension features improves the system performance.
|
[
{
"created": "Mon, 21 Mar 2022 09:57:20 GMT",
"version": "v1"
}
] |
2022-03-22
|
[
[
"López-de-Ipiña",
"K.",
""
],
[
"Faundez-Zanuy",
"Marcos",
""
],
[
"Solé-Casals",
"Jordi",
""
],
[
"Zelarin",
"Fernando",
""
],
[
"Calvo",
"Pilar",
""
]
] |
Most medical developments require the ability to identify samples that are anomalous with respect to a target group or control group, in the sense that they could belong to a new, previously unseen class or are not class data. In such cases, when there are not enough data to train a two-class classifier, one-class classification appears to be an available solution. On the other hand, non-linear approaches could give very useful information. The aim of our project is to contribute to earlier diagnosis of Alzheimer's Disease (AD) and better estimates of its severity by using automatic analysis performed through new biomarkers extracted from the speech signal. The methods selected in this case are speech biomarkers oriented to Spontaneous Speech and Emotional Response Analysis. In this approach, one-class classifiers and two-class classifiers are analyzed. The use of information about outliers and Fractal Dimension features improves the system performance.
|
2408.06047
|
Xuanpu Zhang
|
Xuanpu Zhang and Dan Song and Pengxin Zhan and Qingguo Chen and Zhao
Xu and Weihua Luo and Kaifu Zhang and Anan Liu
|
BooW-VTON: Boosting In-the-Wild Virtual Try-On via Mask-Free Pseudo Data
Training
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Image-based virtual try-on is an increasingly popular and important task that
generates realistic try-on images of a specific person. Existing methods always
employ an accurate mask to remove the original garment in the source image,
thus achieving realistic synthesized images in simple and conventional try-on
scenarios based on powerful diffusion models. Therefore, acquiring a suitable
mask is vital to the try-on performance of these methods. However, obtaining
precise inpainting masks, especially for complex wild try-on data containing
diverse foreground occlusions and person poses, is not easy, as Figure 1-Top
shows. This difficulty often results in poor performance in more practical and
challenging real-life scenarios, such as the selfie scene shown in Figure
1-Bottom. To this end, we propose a novel training paradigm combined with an
efficient data augmentation method to acquire large-scale unpaired training
data from wild scenarios, thereby significantly improving the try-on
performance of our model without the need for additional inpainting masks.
Besides, a try-on localization loss is designed to localize a more accurate
try-on area and obtain more reasonable try-on results. Notably, our method
needs only the reference cloth image, source pose image, and source person
image as input, which is more cost-effective and user-friendly than existing
methods. Extensive qualitative and quantitative experiments have demonstrated
superior performance in wild scenarios with such low-demand input.
|
[
{
"created": "Mon, 12 Aug 2024 10:39:59 GMT",
"version": "v1"
}
] |
2024-08-13
|
[
[
"Zhang",
"Xuanpu",
""
],
[
"Song",
"Dan",
""
],
[
"Zhan",
"Pengxin",
""
],
[
"Chen",
"Qingguo",
""
],
[
"Xu",
"Zhao",
""
],
[
"Luo",
"Weihua",
""
],
[
"Zhang",
"Kaifu",
""
],
[
"Liu",
"Anan",
""
]
] |
Image-based virtual try-on is an increasingly popular and important task that generates realistic try-on images of a specific person. Existing methods always employ an accurate mask to remove the original garment in the source image, thus achieving realistic synthesized images in simple and conventional try-on scenarios based on powerful diffusion models. Therefore, acquiring a suitable mask is vital to the try-on performance of these methods. However, obtaining precise inpainting masks, especially for complex wild try-on data containing diverse foreground occlusions and person poses, is not easy, as Figure 1-Top shows. This difficulty often results in poor performance in more practical and challenging real-life scenarios, such as the selfie scene shown in Figure 1-Bottom. To this end, we propose a novel training paradigm combined with an efficient data augmentation method to acquire large-scale unpaired training data from wild scenarios, thereby significantly improving the try-on performance of our model without the need for additional inpainting masks. Besides, a try-on localization loss is designed to localize a more accurate try-on area and obtain more reasonable try-on results. Notably, our method needs only the reference cloth image, source pose image, and source person image as input, which is more cost-effective and user-friendly than existing methods. Extensive qualitative and quantitative experiments have demonstrated superior performance in wild scenarios with such low-demand input.
|
1604.01186
|
EPTCS
|
Denis Firsov, Tarmo Uustalu, Niccol\`o Veltri
|
Variations on Noetherianness
|
In Proceedings MSFP 2016, arXiv:1604.00384
|
EPTCS 207, 2016, pp. 76-88
|
10.4204/EPTCS.207.4
| null |
cs.LO cs.PL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In constructive mathematics, several nonequivalent notions of finiteness
exist. In this paper, we continue the study of Noetherian sets in the
dependently typed setting of the Agda programming language. We want to say that
a set is Noetherian, if, when we are shown elements from it one after another,
we will sooner or later have seen some element twice. This idea can be made
precise in a number of ways. We explore the properties and connections of some
of the possible encodings. In particular, we show that certain implementations
imply decidable equality while others do not, and we construct counterexamples
in the latter case. Additionally, we explore the relation between
Noetherianness and other notions of finiteness.
|
[
{
"created": "Tue, 5 Apr 2016 09:04:13 GMT",
"version": "v1"
}
] |
2016-04-06
|
[
[
"Firsov",
"Denis",
""
],
[
"Uustalu",
"Tarmo",
""
],
[
"Veltri",
"Niccolò",
""
]
] |
In constructive mathematics, several nonequivalent notions of finiteness exist. In this paper, we continue the study of Noetherian sets in the dependently typed setting of the Agda programming language. We want to say that a set is Noetherian, if, when we are shown elements from it one after another, we will sooner or later have seen some element twice. This idea can be made precise in a number of ways. We explore the properties and connections of some of the possible encodings. In particular, we show that certain implementations imply decidable equality while others do not, and we construct counterexamples in the latter case. Additionally, we explore the relation between Noetherianness and other notions of finiteness.
|
2109.06612
|
Johanna Schmidt
|
Raphael Sahann, Torsten M\"oller, Johanna Schmidt
|
Histogram binning revisited with a focus on human perception
|
Accepted as short paper at VIS 2021. Supplemental material can be
found at https://github.com/johanna-schmidt/histogram-binning-revisited
|
Proceedings of VIS short papers 2021
| null | null |
cs.HC
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
This paper presents a quantitative user study to evaluate how well users can
visually perceive the underlying data distribution from a histogram
representation. We used different sample and bin sizes and four different
distributions (uniform, normal, bimodal, and gamma). The study results confirm
that, in general, more bins correlate with fewer errors by the viewers.
However, beyond a certain number of bins, the error rate cannot be improved by
adding more bins. By comparing our study results with the outcomes of existing
mathematical models for histogram binning (e.g., Sturges' formula, Scott's
normal reference rule, the Rice Rule, or Freedman-Diaconis' choice), we can see
that most of them overestimate the number of bins necessary to make the
distribution visible to a human viewer.
|
[
{
"created": "Tue, 14 Sep 2021 12:08:27 GMT",
"version": "v1"
}
] |
2021-09-15
|
[
[
"Sahann",
"Raphael",
""
],
[
"Möller",
"Torsten",
""
],
[
"Schmidt",
"Johanna",
""
]
] |
This paper presents a quantitative user study to evaluate how well users can visually perceive the underlying data distribution from a histogram representation. We used different sample and bin sizes and four different distributions (uniform, normal, bimodal, and gamma). The study results confirm that, in general, more bins correlate with fewer errors by the viewers. However, beyond a certain number of bins, the error rate cannot be improved by adding more bins. By comparing our study results with the outcomes of existing mathematical models for histogram binning (e.g., Sturges' formula, Scott's normal reference rule, the Rice Rule, or Freedman-Diaconis' choice), we can see that most of them overestimate the number of bins necessary to make the distribution visible to a human viewer.
|
1802.09657
|
Dipankar Maity
|
Dipankar Maity, John S. Baras
|
Event-Triggered Controller Synthesis for Dynamical Systems with Temporal
Logic Constraints
| null | null | null | null |
cs.RO math.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this work, we propose an event-triggered control framework for dynamical
systems with temporal logical constraints. Event-triggered control
methodologies have proven to be very efficient in reducing sensing,
communication, and computation costs. When a continuous feedback control is
replaced with an event-triggered strategy, the corresponding state trajectories
also differ. In a system with logical constraints, such a small deviation in
the trajectory might lead to unsatisfiability of the logical constraints. In
this work, we develop an approach that ensures that the event-triggered state
trajectory is confined within a tube around the ideal trajectory associated
with the continuous state feedback. At the same time, we ensure satisfiability
of the logical constraints as well. Furthermore, we show that the proposed
method works for delayed systems as long as the delay is bounded by a certain
quantity.
|
[
{
"created": "Tue, 27 Feb 2018 00:17:41 GMT",
"version": "v1"
}
] |
2018-02-28
|
[
[
"Maity",
"Dipankar",
""
],
[
"Baras",
"John S.",
""
]
] |
In this work, we propose an event-triggered control framework for dynamical systems with temporal logical constraints. Event-triggered control methodologies have proven to be very efficient in reducing sensing, communication, and computation costs. When a continuous feedback control is replaced with an event-triggered strategy, the corresponding state trajectories also differ. In a system with logical constraints, such a small deviation in the trajectory might lead to unsatisfiability of the logical constraints. In this work, we develop an approach that ensures that the event-triggered state trajectory is confined within a tube around the ideal trajectory associated with the continuous state feedback. At the same time, we ensure satisfiability of the logical constraints as well. Furthermore, we show that the proposed method works for delayed systems as long as the delay is bounded by a certain quantity.
|
1911.03059
|
Chowdhury Rahman
|
Afra Anika, Md. Hasibur Rahman, Salekul Islam, Abu Shafin Mohammad
Mahdee Jameel and Chowdhury Rafeed Rahman
|
A Comprehensive Comparison of Machine Learning Based Methods Used in
Bengali Question Classification
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A QA classification system maps questions asked by humans to an appropriate
answer category. A sound question classification (QC) system model is the
prerequisite of a sound QA system. This work demonstrates the phases of
assembling a QA type classification model. We present a comprehensive
comparison (performance and computational complexity) among some machine
learning based approaches used in QC for the Bengali language.
|
[
{
"created": "Fri, 8 Nov 2019 05:30:33 GMT",
"version": "v1"
},
{
"created": "Tue, 19 Nov 2019 16:37:41 GMT",
"version": "v2"
}
] |
2019-11-20
|
[
[
"Anika",
"Afra",
""
],
[
"Rahman",
"Md. Hasibur",
""
],
[
"Islam",
"Salekul",
""
],
[
"Jameel",
"Abu Shafin Mohammad Mahdee",
""
],
[
"Rahman",
"Chowdhury Rafeed",
""
]
] |
A QA classification system maps questions asked by humans to an appropriate answer category. A sound question classification (QC) system model is the prerequisite of a sound QA system. This work demonstrates the phases of assembling a QA type classification model. We present a comprehensive comparison (performance and computational complexity) among some machine learning based approaches used in QC for the Bengali language.
|
2212.09448
|
Senem Tanberk PhD
|
Senem Tanberk, Mustafa Can
|
Smart Journey in Istanbul: A Mobile Application in Smart Cities for
Traffic Estimation by Harnessing Time Series
| null | null |
10.1109/ASYU58738.2023.10296669
| null |
cs.AI cs.IR
|
http://creativecommons.org/licenses/by/4.0/
|
In recent decades, mobile applications (apps) have gained enormous
popularity, and smart services for smart cities increasingly gain attention.
The main goal of the proposed research is to present a new AI-powered mobile
application for forecasting Istanbul's traffic congestion using traffic density
data. It addresses the research question by applying time series approaches
(LSTM, Transformer, and XGBoost) to past data over the traffic load dataset
combined with meteorological conditions. The simulation results of the
predictive models are discussed according to performance indicators such as
MAPE, MAE, and RMSE. It was observed that the Transformer model made the most
accurate traffic prediction. The developed traffic forecasting prototype is
expected to be a starting point for future products for a mobile application
suitable for citizens' daily use.
|
[
{
"created": "Tue, 13 Dec 2022 12:10:52 GMT",
"version": "v1"
}
] |
2023-11-09
|
[
[
"Tanberk",
"Senem",
""
],
[
"Can",
"Mustafa",
""
]
] |
In recent decades, mobile applications (apps) have gained enormous popularity, and smart services for smart cities increasingly gain attention. The main goal of the proposed research is to present a new AI-powered mobile application for forecasting Istanbul's traffic congestion using traffic density data. It addresses the research question by applying time series approaches (LSTM, Transformer, and XGBoost) to past data over the traffic load dataset combined with meteorological conditions. The simulation results of the predictive models are discussed according to performance indicators such as MAPE, MAE, and RMSE. It was observed that the Transformer model made the most accurate traffic prediction. The developed traffic forecasting prototype is expected to be a starting point for future products for a mobile application suitable for citizens' daily use.
|
2401.11150
|
Junxiao Shen Mr
|
Junxiao Shen, Xuhai Xu, Ran Tan, Amy Karlson, Evan Strasnick
|
Simultaneous Gesture Classification and Localization with an Automatic
Gesture Annotation Model
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Training a real-time gesture recognition model heavily relies on annotated
data. However, manual data annotation is costly and demands substantial human
effort. In order to address this challenge, we propose a novel annotation model
that can automatically annotate gesture classes and identify their temporal
ranges. Our ablation study demonstrates that our annotation model design
surpasses the baseline in terms of both gesture classification accuracy (3-4\%
improvement) and localization accuracy (71-75\% improvement). We believe that
this annotation model has immense potential to improve the training of
downstream gesture recognition models using unlabeled datasets.
|
[
{
"created": "Sat, 20 Jan 2024 07:11:03 GMT",
"version": "v1"
}
] |
2024-01-23
|
[
[
"Shen",
"Junxiao",
""
],
[
"Xu",
"Xuhai",
""
],
[
"Tan",
"Ran",
""
],
[
"Karlson",
"Amy",
""
],
[
"Strasnick",
"Evan",
""
]
] |
Training a real-time gesture recognition model heavily relies on annotated data. However, manual data annotation is costly and demands substantial human effort. In order to address this challenge, we propose a novel annotation model that can automatically annotate gesture classes and identify their temporal ranges. Our ablation study demonstrates that our annotation model design surpasses the baseline in terms of both gesture classification accuracy (3-4\% improvement) and localization accuracy (71-75\% improvement). We believe that this annotation model has immense potential to improve the training of downstream gesture recognition models using unlabeled datasets.
|
1005.0907
|
Rdv Ijcsis
|
Yasser M. Alginaih, Abdul Ahad Siddiqi
|
Multistage Hybrid Arabic/Indian Numeral OCR System
|
IEEE Publication format, International Journal of Computer Science
and Information Security, IJCSIS, Vol. 8 No. 1, April 2010, USA. ISSN 1947
5500, http://sites.google.com/site/ijcsis/
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/3.0/
|
The use of OCR in postal services is not yet universal and there are still
many countries that sort mail manually. Automated Arabic/Indian numeral
Optical Character Recognition (OCR) systems for postal services are being used
in some countries, but there are still errors during the mail sorting process,
thus causing a reduction in efficiency. The need to investigate fast and
efficient recognition algorithms/systems is important so as to correctly read
the postal codes from mail addresses and to eliminate any errors during the
mail sorting stage. The objective of this study is to recognize printed
numerical postal codes from mail addresses. The proposed system is a multistage
hybrid system which consists of three different feature extraction methods,
i.e., binary, zoning, and fuzzy features, and three different classifiers,
i.e., Hamming Nets, Euclidean Distance, and Fuzzy Neural Network Classifiers.
The proposed system systematically compares the performance of each of these
methods, and ensures that the numerals are recognized correctly. Comprehensive
results provide a very high recognition rate, outperforming the other known
methods developed in the literature.
|
[
{
"created": "Thu, 6 May 2010 07:25:23 GMT",
"version": "v1"
}
] |
2010-05-07
|
[
[
"Alginaih",
"Yasser M.",
""
],
[
"Siddiqi",
"Abdul Ahad",
""
]
] |
The use of OCR in postal services is not yet universal and there are still many countries that sort mail manually. Automated Arabic/Indian numeral Optical Character Recognition (OCR) systems for postal services are being used in some countries, but there are still errors during the mail sorting process, thus causing a reduction in efficiency. The need to investigate fast and efficient recognition algorithms/systems is important so as to correctly read the postal codes from mail addresses and to eliminate any errors during the mail sorting stage. The objective of this study is to recognize printed numerical postal codes from mail addresses. The proposed system is a multistage hybrid system which consists of three different feature extraction methods, i.e., binary, zoning, and fuzzy features, and three different classifiers, i.e., Hamming Nets, Euclidean Distance, and Fuzzy Neural Network Classifiers. The proposed system systematically compares the performance of each of these methods, and ensures that the numerals are recognized correctly. Comprehensive results provide a very high recognition rate, outperforming the other known methods developed in the literature.
|
2405.13937
|
Xingtong Yu
|
Xingtong Yu, Zhenghao Liu, Yuan Fang, Xinming Zhang
|
DyGPrompt: Learning Feature and Time Prompts on Dynamic Graphs
|
Under review
| null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Dynamic graphs are pervasive in the real world, modeling dynamic relations
between objects across various fields. For dynamic graph modeling, dynamic
graph neural networks (DGNNs) have emerged as a mainstream technique, which are
generally pre-trained on the link prediction task, leaving a significant gap
from the objectives of downstream tasks such as node classification. To bridge
the gap, prompt-based learning has gained traction on graphs. However, existing
efforts focus on static graphs, neglecting the evolution of dynamic graphs. In
this paper, we propose DyGPrompt, a novel pre-training and prompting framework
for dynamic graph modeling. First, we design dual prompts to address the gap in
both task objectives and dynamic variations across pre-training and downstream
tasks. Second, we recognize that node and time features mutually characterize
each other, and propose dual condition-nets to model the evolving node-time
patterns in downstream tasks. Finally, we thoroughly evaluate and analyze
DyGPrompt through extensive experiments on three public datasets.
|
[
{
"created": "Wed, 22 May 2024 19:10:24 GMT",
"version": "v1"
},
{
"created": "Sun, 26 May 2024 01:46:11 GMT",
"version": "v2"
},
{
"created": "Tue, 28 May 2024 10:07:29 GMT",
"version": "v3"
},
{
"created": "Tue, 2 Jul 2024 05:14:10 GMT",
"version": "v4"
},
{
"created": "Wed, 3 Jul 2024 02:06:07 GMT",
"version": "v5"
}
] |
2024-07-04
|
[
[
"Yu",
"Xingtong",
""
],
[
"Liu",
"Zhenghao",
""
],
[
"Fang",
"Yuan",
""
],
[
"Zhang",
"Xinming",
""
]
] |
Dynamic graphs are pervasive in the real world, modeling dynamic relations between objects across various fields. For dynamic graph modeling, dynamic graph neural networks (DGNNs) have emerged as a mainstream technique, which are generally pre-trained on the link prediction task, leaving a significant gap from the objectives of downstream tasks such as node classification. To bridge the gap, prompt-based learning has gained traction on graphs. However, existing efforts focus on static graphs, neglecting the evolution of dynamic graphs. In this paper, we propose DyGPrompt, a novel pre-training and prompting framework for dynamic graph modeling. First, we design dual prompts to address the gap in both task objectives and dynamic variations across pre-training and downstream tasks. Second, we recognize that node and time features mutually characterize each other, and propose dual condition-nets to model the evolving node-time patterns in downstream tasks. Finally, we thoroughly evaluate and analyze DyGPrompt through extensive experiments on three public datasets.
|
2208.02884
|
Nicolaas Kaashoek
|
Nicolaas Kaashoek and Robert Morris
|
CheckSync: Using Runtime-Integrated Checkpoints to Achieve High
Availability
|
14 pages, 6 figures
| null | null | null |
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
CheckSync provides applications with high availability via runtime-integrated
checkpointing. This allows CheckSync to take checkpoints of a process running
in a memory-managed language (Go, for now), which can be resumed on another
machine after a failure. CheckSync uses the runtime to checkpoint only the
process' live memory, without requiring significant changes to applications.
CheckSync maintains the ease of use provided by virtual machines for the
applications it supports without requiring that an entire virtual machine image
be snapshotted. Because CheckSync captures only the memory used by an
application, it produces checkpoints that are smaller (by an order of
magnitude) than virtual machine snapshots if the memory footprint of the
application is relatively small compared to the state of the rest of the
operating system. Additionally, when running go-cache, a popular in-memory
key/value store, CheckSync reduces throughput by only 12% compared to the 78%
throughput loss when using go-cache's snapshot functionality, the 45% loss when
using CRIU, and the 68% loss when using virtual machine live migration.
|
[
{
"created": "Thu, 4 Aug 2022 20:53:50 GMT",
"version": "v1"
}
] |
2022-08-08
|
[
[
"Kaashoek",
"Nicolaas",
""
],
[
"Morris",
"Robert",
""
]
] |
CheckSync provides applications with high availability via runtime-integrated checkpointing. This allows CheckSync to take checkpoints of a process running in a memory-managed language (Go, for now), which can be resumed on another machine after a failure. CheckSync uses the runtime to checkpoint only the process' live memory, without requiring significant changes to applications. CheckSync maintains the ease of use provided by virtual machines for the applications it supports without requiring that an entire virtual machine image be snapshotted. Because CheckSync captures only the memory used by an application, it produces checkpoints that are smaller (by an order of magnitude) than virtual machine snapshots if the memory footprint of the application is relatively small compared to the state of the rest of the operating system. Additionally, when running go-cache, a popular in-memory key/value store, CheckSync reduces throughput by only 12% compared to the 78% throughput loss when using go-cache's snapshot functionality, the 45% loss when using CRIU, and the 68% loss when using virtual machine live migration.
|
2303.13299
|
Avi Schwarzschild
|
Avi Schwarzschild, Max Cembalest, Karthik Rao, Keegan Hines, John
Dickerson
|
Reckoning with the Disagreement Problem: Explanation Consensus as a
Training Objective
| null | null | null | null |
cs.LG cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
As neural networks increasingly make critical decisions in high-stakes
settings, monitoring and explaining their behavior in an understandable and
trustworthy manner is a necessity. One commonly used type of explainer is post
hoc feature attribution, a family of methods for giving each feature in an
input a score corresponding to its influence on a model's output. A major
limitation of this family of explainers in practice is that they can disagree
on which features are more important than others. Our contribution in this
paper is a method of training models with this disagreement problem in mind. We
do this by introducing a Post hoc Explainer Agreement Regularization (PEAR)
loss term alongside the standard term corresponding to accuracy, an additional
term that measures the difference in feature attribution between a pair of
explainers. We observe on three datasets that we can train a model with this
loss term to improve explanation consensus on unseen data, and see improved
consensus between explainers other than those used in the loss term. We examine
the trade-off between improved consensus and model performance. And finally, we
study the influence our method has on feature attribution explanations.
|
[
{
"created": "Thu, 23 Mar 2023 14:35:37 GMT",
"version": "v1"
}
] |
2023-03-24
|
[
[
"Schwarzschild",
"Avi",
""
],
[
"Cembalest",
"Max",
""
],
[
"Rao",
"Karthik",
""
],
[
"Hines",
"Keegan",
""
],
[
"Dickerson",
"John",
""
]
] |
As neural networks increasingly make critical decisions in high-stakes settings, monitoring and explaining their behavior in an understandable and trustworthy manner is a necessity. One commonly used type of explainer is post hoc feature attribution, a family of methods for giving each feature in an input a score corresponding to its influence on a model's output. A major limitation of this family of explainers in practice is that they can disagree on which features are more important than others. Our contribution in this paper is a method of training models with this disagreement problem in mind. We do this by introducing a Post hoc Explainer Agreement Regularization (PEAR) loss term alongside the standard term corresponding to accuracy, an additional term that measures the difference in feature attribution between a pair of explainers. We observe on three datasets that we can train a model with this loss term to improve explanation consensus on unseen data, and see improved consensus between explainers other than those used in the loss term. We examine the trade-off between improved consensus and model performance. And finally, we study the influence our method has on feature attribution explanations.
|
2311.11596
|
Yining Miao
|
Yining Miao, Nanlin Shi, Changxing Huang, Yonghao Song, Xiaogang Chen,
Yijun Wang, Xiaorong Gao
|
High-performance cVEP-BCI under minimal calibration
|
35 pages, 5 figures
| null | null | null |
cs.HC cs.IT eess.SP math.IT q-bio.NC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The ultimate goal of brain-computer interfaces (BCIs) based on visual
modulation paradigms is to achieve high-speed performance without the burden of
extensive calibration. Code-modulated visual evoked potential-based BCIs
(cVEP-BCIs) modulated by broadband white noise (WN) offer various advantages,
including increased communication speed, expanded encoding target capabilities,
and enhanced coding flexibility. However, the complexity of the
spatial-temporal patterns under broadband stimuli necessitates extensive
calibration for effective target identification in cVEP-BCIs. Consequently, the
information transfer rate (ITR) of cVEP-BCI under limited calibration usually
stays around 100 bits per minute (bpm), significantly lagging behind
state-of-the-art steady-state visual evoked potential-based BCIs (SSVEP-BCIs),
which achieve rates above 200 bpm. To enhance the performance of cVEP-BCIs with
minimal calibration, we devised an efficient calibration stage involving a
brief single-target flickering, lasting less than a minute, to extract
generalizable spatial-temporal patterns. Leveraging the calibration data, we
developed two complementary methods to construct cVEP temporal patterns: the
linear modeling method based on the stimulus sequence and the transfer learning
techniques using cross-subject data. As a result, we achieved the highest ITR
of 250 bpm under a minute of calibration, which has been shown to be comparable
to the state-of-the-art SSVEP paradigms. In summary, our work significantly
improved the cVEP performance under few-shot learning, which is expected to
expand the practicality and usability of cVEP-BCIs.
|
[
{
"created": "Mon, 20 Nov 2023 08:20:51 GMT",
"version": "v1"
}
] |
2023-11-21
|
[
[
"Miao",
"Yining",
""
],
[
"Shi",
"Nanlin",
""
],
[
"Huang",
"Changxing",
""
],
[
"Song",
"Yonghao",
""
],
[
"Chen",
"Xiaogang",
""
],
[
"Wang",
"Yijun",
""
],
[
"Gao",
"Xiaorong",
""
]
] |
The ultimate goal of brain-computer interfaces (BCIs) based on visual modulation paradigms is to achieve high-speed performance without the burden of extensive calibration. Code-modulated visual evoked potential-based BCIs (cVEP-BCIs) modulated by broadband white noise (WN) offer various advantages, including increased communication speed, expanded encoding target capabilities, and enhanced coding flexibility. However, the complexity of the spatial-temporal patterns under broadband stimuli necessitates extensive calibration for effective target identification in cVEP-BCIs. Consequently, the information transfer rate (ITR) of cVEP-BCI under limited calibration usually stays around 100 bits per minute (bpm), significantly lagging behind state-of-the-art steady-state visual evoked potential-based BCIs (SSVEP-BCIs), which achieve rates above 200 bpm. To enhance the performance of cVEP-BCIs with minimal calibration, we devised an efficient calibration stage involving a brief single-target flickering, lasting less than a minute, to extract generalizable spatial-temporal patterns. Leveraging the calibration data, we developed two complementary methods to construct cVEP temporal patterns: the linear modeling method based on the stimulus sequence and the transfer learning techniques using cross-subject data. As a result, we achieved the highest ITR of 250 bpm under a minute of calibration, which has been shown to be comparable to the state-of-the-art SSVEP paradigms. In summary, our work significantly improved the cVEP performance under few-shot learning, which is expected to expand the practicality and usability of cVEP-BCIs.
|
2204.08970
|
Zhihao Li
|
Zhihao Li, Si Yi, Zhan Ma
|
Rendering Nighttime Image Via Cascaded Color and Brightness Compensation
|
Accepted by NTIRE 2022 (CVPR Workshop)
| null | null | null |
cs.CV eess.IV
|
http://creativecommons.org/licenses/by/4.0/
|
Image signal processing (ISP) is crucial for camera imaging, and neural
networks (NN) solutions are extensively deployed for daytime scenes. The lack
of sufficient nighttime image dataset and insights on nighttime illumination
characteristics poses a great challenge for high-quality rendering using
existing NN ISPs. To tackle it, we first built a high-resolution nighttime
RAW-RGB (NR2R) dataset with white balance and tone mapping annotated by expert
professionals. Meanwhile, to best capture the characteristics of nighttime
illumination light sources, we develop the CBUnet, a two-stage NN ISP to
cascade the compensation of color and brightness attributes. Experiments show
that our method has better visual quality compared to the traditional ISP
pipeline, and is ranked second in the NTIRE 2022 Night Photography Rendering
Challenge for two tracks by respective People's and Professional Photographer's
choices. The code and relevant materials are available on our website:
https://njuvision.github.io/CBUnet.
|
[
{
"created": "Tue, 19 Apr 2022 16:15:31 GMT",
"version": "v1"
},
{
"created": "Thu, 21 Apr 2022 17:23:11 GMT",
"version": "v2"
}
] |
2022-04-22
|
[
[
"Li",
"Zhihao",
""
],
[
"Yi",
"Si",
""
],
[
"Ma",
"Zhan",
""
]
] |
Image signal processing (ISP) is crucial for camera imaging, and neural networks (NN) solutions are extensively deployed for daytime scenes. The lack of sufficient nighttime image dataset and insights on nighttime illumination characteristics poses a great challenge for high-quality rendering using existing NN ISPs. To tackle it, we first built a high-resolution nighttime RAW-RGB (NR2R) dataset with white balance and tone mapping annotated by expert professionals. Meanwhile, to best capture the characteristics of nighttime illumination light sources, we develop the CBUnet, a two-stage NN ISP to cascade the compensation of color and brightness attributes. Experiments show that our method has better visual quality compared to the traditional ISP pipeline, and is ranked second in the NTIRE 2022 Night Photography Rendering Challenge for two tracks by respective People's and Professional Photographer's choices. The code and relevant materials are available on our website: https://njuvision.github.io/CBUnet.
|
1702.05724
|
Daniel M\'endez Fern\'andez
|
Marco Kuhrmann and Daniel M\'endez Fern\'andez and Thomas Ternit\'e
|
On the Use of Variability Operations in the V-Modell XT Software Process
Line
|
Journal of Software: Evolution and Process, 2015
| null |
10.1002/smr.1751
| null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Software process lines provide a systematic approach to develop and manage
software processes. A software process line defines a reference process containing general process
assets, whereas a well-defined customization approach allows process engineers
to create new process variants, e.g., by extending or modifying process assets.
Variability operations are an instrument to realize flexibility by explicitly
declaring required modifications, which are applied to create a procedurally
generated company-specific process. However, little is known about which
variability operations are suitable in practice. In this article, we present a
study on the feasibility of variability operations to support the development
of software process lines in the context of the V-Modell XT. We analyze which
variability operations are defined and practically used. We provide an initial
catalog of variability operations as an improvement proposal for other process
models. Our findings show that 69 variability operation types are defined
across several metamodel versions of which, however, 25 remain unused. The
found variability operations allow for systematically modifying the content of
process model elements and the process documentation, and they allow for
altering the structure of a process model and its description. Furthermore, we
also find that variability operations can help process engineers to compensate
for process metamodel evolution.
|
[
{
"created": "Sun, 19 Feb 2017 08:58:12 GMT",
"version": "v1"
}
] |
2017-02-21
|
[
[
"Kuhrmann",
"Marco",
""
],
[
"Fernández",
"Daniel Méndez",
""
],
[
"Ternité",
"Thomas",
""
]
] |
Software process lines provide a systematic approach to develop and manage software processes. A software process line defines a reference process containing general process assets, whereas a well-defined customization approach allows process engineers to create new process variants, e.g., by extending or modifying process assets. Variability operations are an instrument to realize flexibility by explicitly declaring required modifications, which are applied to create a procedurally generated company-specific process. However, little is known about which variability operations are suitable in practice. In this article, we present a study on the feasibility of variability operations to support the development of software process lines in the context of the V-Modell XT. We analyze which variability operations are defined and practically used. We provide an initial catalog of variability operations as an improvement proposal for other process models. Our findings show that 69 variability operation types are defined across several metamodel versions of which, however, 25 remain unused. The found variability operations allow for systematically modifying the content of process model elements and the process documentation, and they allow for altering the structure of a process model and its description. Furthermore, we also find that variability operations can help process engineers to compensate for process metamodel evolution.
|
1711.11017
|
Ethan Perez
|
Simon Brodeur, Ethan Perez, Ankesh Anand, Florian Golemo, Luca
Celotti, Florian Strub, Jean Rouat, Hugo Larochelle, Aaron Courville
|
HoME: a Household Multimodal Environment
|
Presented at NIPS 2017's Visually-Grounded Interaction and Language
Workshop
| null | null | null |
cs.AI cs.CL cs.CV cs.RO cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce HoME: a Household Multimodal Environment for artificial agents
to learn from vision, audio, semantics, physics, and interaction with objects
and other agents, all within a realistic context. HoME integrates over 45,000
diverse 3D house layouts based on the SUNCG dataset, a scale which may
facilitate learning, generalization, and transfer. HoME is an open-source,
OpenAI Gym-compatible platform extensible to tasks in reinforcement learning,
language grounding, sound-based navigation, robotics, multi-agent learning, and
more. We hope HoME better enables artificial agents to learn as humans do: in
an interactive, multimodal, and richly contextualized setting.
|
[
{
"created": "Wed, 29 Nov 2017 18:45:59 GMT",
"version": "v1"
}
] |
2017-11-30
|
[
[
"Brodeur",
"Simon",
""
],
[
"Perez",
"Ethan",
""
],
[
"Anand",
"Ankesh",
""
],
[
"Golemo",
"Florian",
""
],
[
"Celotti",
"Luca",
""
],
[
"Strub",
"Florian",
""
],
[
"Rouat",
"Jean",
""
],
[
"Larochelle",
"Hugo",
""
],
[
"Courville",
"Aaron",
""
]
] |
We introduce HoME: a Household Multimodal Environment for artificial agents to learn from vision, audio, semantics, physics, and interaction with objects and other agents, all within a realistic context. HoME integrates over 45,000 diverse 3D house layouts based on the SUNCG dataset, a scale which may facilitate learning, generalization, and transfer. HoME is an open-source, OpenAI Gym-compatible platform extensible to tasks in reinforcement learning, language grounding, sound-based navigation, robotics, multi-agent learning, and more. We hope HoME better enables artificial agents to learn as humans do: in an interactive, multimodal, and richly contextualized setting.
|
1709.03421
|
Riccardo Sven Risuleo
|
Riccardo Sven Risuleo, Giulio Bottegal, H{\aa}kan Hjalmarsson
|
Modeling and identification of uncertain-input systems
|
27 Pages, submitted to Automatica
| null | null | null |
cs.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this work, we present a new class of models, called uncertain-input
models, that allows us to treat system-identification problems in which a
linear system is subject to a partially unknown input signal. To encode prior
information about the input or the linear system, we use Gaussian-process
models. We estimate the model from data using the empirical Bayes approach: the
input and the impulse responses of the linear system are estimated using the
posterior means of the Gaussian-process models given the data, and the
hyperparameters that characterize the Gaussian-process models are estimated
from the marginal likelihood of the data. We propose an iterative algorithm to
find the hyperparameters that relies on the EM method and results in simple
update steps. In the most general formulation, neither the marginal likelihood
nor the posterior distribution of the unknowns is tractable. Therefore, we
propose two approximation approaches, one based on Markov-chain Monte Carlo
techniques and one based on variational Bayes approximation. We also show
special model structures for which the distributions can be treated exactly.
Through numerical simulations, we study the application of the uncertain-input
model to the identification of Hammerstein systems and cascaded linear systems.
As part of the contribution of the paper, we show that this model structure
encompasses many classical problems in system identification such as classical
PEM, Hammerstein models, errors-in-variables problems, blind system
identification, and cascaded linear systems. This allows us to build a
systematic procedure to apply the algorithms proposed in this work to a wide
class of classical problems.
|
[
{
"created": "Mon, 11 Sep 2017 14:53:38 GMT",
"version": "v1"
}
] |
2017-09-12
|
[
[
"Risuleo",
"Riccardo Sven",
""
],
[
"Bottegal",
"Giulio",
""
],
[
"Hjalmarsson",
"Håkan",
""
]
] |
In this work, we present a new class of models, called uncertain-input models, that allows us to treat system-identification problems in which a linear system is subject to a partially unknown input signal. To encode prior information about the input or the linear system, we use Gaussian-process models. We estimate the model from data using the empirical Bayes approach: the input and the impulse responses of the linear system are estimated using the posterior means of the Gaussian-process models given the data, and the hyperparameters that characterize the Gaussian-process models are estimated from the marginal likelihood of the data. We propose an iterative algorithm to find the hyperparameters that relies on the EM method and results in simple update steps. In the most general formulation, neither the marginal likelihood nor the posterior distribution of the unknowns is tractable. Therefore, we propose two approximation approaches, one based on Markov-chain Monte Carlo techniques and one based on variational Bayes approximation. We also show special model structures for which the distributions can be treated exactly. Through numerical simulations, we study the application of the uncertain-input model to the identification of Hammerstein systems and cascaded linear systems. As part of the contribution of the paper, we show that this model structure encompasses many classical problems in system identification such as classical PEM, Hammerstein models, errors-in-variables problems, blind system identification, and cascaded linear systems. This allows us to build a systematic procedure to apply the algorithms proposed in this work to a wide class of classical problems.
|
2110.04678
|
Ankit Parag Shah
|
Rita Singh, Ankit Shah, Hira Dhamyal
|
An Overview of Techniques for Biomarker Discovery in Voice Signal
|
Last two authors contributed equally to the paper
| null | null | null |
cs.SD cs.AI eess.AS
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
This paper reflects on the effect of several categories of medical conditions
on human voice, focusing on those that may be hypothesized to have effects on
voice, but for which the changes themselves may be subtle enough to have eluded
observation in standard analytical examinations of the voice signal. It
presents three categories of techniques that can potentially uncover such
elusive biomarkers and allow them to be measured and used for predictive and
diagnostic purposes. These approaches include proxy techniques, model-based
analytical techniques and data-driven AI techniques.
|
[
{
"created": "Sun, 10 Oct 2021 01:39:28 GMT",
"version": "v1"
}
] |
2021-10-12
|
[
[
"Singh",
"Rita",
""
],
[
"Shah",
"Ankit",
""
],
[
"Dhamyal",
"Hira",
""
]
] |
This paper reflects on the effect of several categories of medical conditions on human voice, focusing on those that may be hypothesized to have effects on voice, but for which the changes themselves may be subtle enough to have eluded observation in standard analytical examinations of the voice signal. It presents three categories of techniques that can potentially uncover such elusive biomarkers and allow them to be measured and used for predictive and diagnostic purposes. These approaches include proxy techniques, model-based analytical techniques and data-driven AI techniques.
|
2308.01286
|
Diptapriyo Majumdar
|
Diptapriyo Majumdar
|
Enumeration Kernels of Polynomial Size for Cuts of Bounded Degree
|
There have been major revision in the technicalities and proofs of
the paper
| null | null | null |
cs.DS cs.DM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Enumeration kernelization was first proposed by Creignou et al. [TOCS 2017]
and was later refined by Golovach et al. [JCSS 2022] into two different
variants: fully-polynomial enumeration kernelization and polynomial-delay
enumeration kernelization. In this paper, we consider the DEGREE-d-CUT problem
from the perspective of (polynomial-delay) enumeration kernelization. Given an
undirected graph G = (V, E), a cut F = (A, B) is a degree-d-cut of G if every
$u \in A$ has at most d neighbors in B and every $v \in B$ has at most d
neighbors in A. Checking the existence of a degree-d-cut in a graph is a
well-known NP-hard problem and is well-studied in parameterized complexity
[Algorithmica 2021, IWOCA 2021]. This problem also generalizes a well-studied
problem MATCHING CUT (set d = 1) that has been a central problem in the
literature of polynomial-delay enumeration kernelization. In this paper, we
study three different enumeration variants of this problem, ENUM DEGREE-d-CUT,
ENUM MIN-DEGREE-d-CUT and ENUM MAX-DEGREE-d-CUT that intend to enumerate all
the degree-d-cuts, all the minimal degree-d-cuts, and all the maximal
degree-d-cuts, respectively. We consider various structural parameters of the input and for
every fixed $d \geq 1$, we provide polynomial-delay enumeration kernelizations
of polynomial size for ENUM DEGREE-d-CUT and ENUM MAX-DEGREE-d-CUT and
fully-polynomial enumeration kernels of polynomial size for ENUM
MIN-DEGREE-d-CUT.
|
[
{
"created": "Wed, 2 Aug 2023 17:18:19 GMT",
"version": "v1"
},
{
"created": "Thu, 30 Nov 2023 09:14:39 GMT",
"version": "v2"
},
{
"created": "Fri, 1 Dec 2023 07:32:28 GMT",
"version": "v3"
},
{
"created": "Fri, 2 Feb 2024 18:15:47 GMT",
"version": "v4"
},
{
"created": "Sat, 27 Apr 2024 17:42:26 GMT",
"version": "v5"
}
] |
2024-04-30
|
[
[
"Majumdar",
"Diptapriyo",
""
]
] |
Enumeration kernelization was first proposed by Creignou et al. [TOCS 2017] and was later refined by Golovach et al. [JCSS 2022] into two different variants: fully-polynomial enumeration kernelization and polynomial-delay enumeration kernelization. In this paper, we consider the DEGREE-d-CUT problem from the perspective of (polynomial-delay) enumeration kernelization. Given an undirected graph G = (V, E), a cut F = (A, B) is a degree-d-cut of G if every $u \in A$ has at most d neighbors in B and every $v \in B$ has at most d neighbors in A. Checking the existence of a degree-d-cut in a graph is a well-known NP-hard problem and is well-studied in parameterized complexity [Algorithmica 2021, IWOCA 2021]. This problem also generalizes a well-studied problem MATCHING CUT (set d = 1) that has been a central problem in the literature of polynomial-delay enumeration kernelization. In this paper, we study three different enumeration variants of this problem, ENUM DEGREE-d-CUT, ENUM MIN-DEGREE-d-CUT and ENUM MAX-DEGREE-d-CUT that intend to enumerate all the degree-d-cuts, all the minimal degree-d-cuts, and all the maximal degree-d-cuts, respectively. We consider various structural parameters of the input and for every fixed $d \geq 1$, we provide polynomial-delay enumeration kernelizations of polynomial size for ENUM DEGREE-d-CUT and ENUM MAX-DEGREE-d-CUT and fully-polynomial enumeration kernels of polynomial size for ENUM MIN-DEGREE-d-CUT.
|
0810.2529
|
Alireza Bayesteh
|
Jamshid Abouei, Alireza Bayesteh, Masoud Ebrahimi, and Amir K.
Khandani
|
On the Throughput Maximization in Decentralized Wireless Networks
|
Submitted to IEEE Transactions on Information Theory
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A distributed single-hop wireless network with $K$ links is considered, where
the links are partitioned into a fixed number ($M$) of clusters each operating
in a subchannel with bandwidth $\frac{W}{M}$. The subchannels are assumed to be
orthogonal to each other. A general shadow-fading model, described by
parameters $(\alpha,\varpi)$, is considered where $\alpha$ denotes the
probability of shadowing and $\varpi$ ($\varpi \leq 1$) represents the average
cross-link gains. The main goal of this paper is to find the maximum network
throughput in the asymptotic regime of $K \to \infty$, which is achieved by: i)
proposing a distributed and non-iterative power allocation strategy, where the
objective of each user is to maximize its best estimate (based on its local
information, i.e., direct channel gain) of the average network throughput, and
ii) choosing the optimum value for $M$. In the first part of the paper, the
network throughput is defined as the \textit{average sum-rate} of the network,
which is shown to scale as $\Theta (\log K)$. Moreover, it is proved that in
the strong interference scenario, the optimum power allocation strategy for
each user is a threshold-based on-off scheme. In the second part, the network
throughput is defined as the \textit{guaranteed sum-rate}, when the outage
probability approaches zero. In this scenario, it is demonstrated that the
on-off power allocation scheme maximizes the throughput, which scales as
$\frac{W}{\alpha \varpi} \log K$. Moreover, the optimum spectrum sharing for
maximizing the average sum-rate and the guaranteed sum-rate is achieved at M=1.
|
[
{
"created": "Tue, 14 Oct 2008 19:40:22 GMT",
"version": "v1"
}
] |
2008-10-15
|
[
[
"Abouei",
"Jamshid",
""
],
[
"Bayesteh",
"Alireza",
""
],
[
"Ebrahimi",
"Masoud",
""
],
[
"Khandani",
"Amir K.",
""
]
] |
A distributed single-hop wireless network with $K$ links is considered, where the links are partitioned into a fixed number ($M$) of clusters each operating in a subchannel with bandwidth $\frac{W}{M}$. The subchannels are assumed to be orthogonal to each other. A general shadow-fading model, described by parameters $(\alpha,\varpi)$, is considered where $\alpha$ denotes the probability of shadowing and $\varpi$ ($\varpi \leq 1$) represents the average cross-link gains. The main goal of this paper is to find the maximum network throughput in the asymptotic regime of $K \to \infty$, which is achieved by: i) proposing a distributed and non-iterative power allocation strategy, where the objective of each user is to maximize its best estimate (based on its local information, i.e., direct channel gain) of the average network throughput, and ii) choosing the optimum value for $M$. In the first part of the paper, the network throughput is defined as the \textit{average sum-rate} of the network, which is shown to scale as $\Theta (\log K)$. Moreover, it is proved that in the strong interference scenario, the optimum power allocation strategy for each user is a threshold-based on-off scheme. In the second part, the network throughput is defined as the \textit{guaranteed sum-rate}, when the outage probability approaches zero. In this scenario, it is demonstrated that the on-off power allocation scheme maximizes the throughput, which scales as $\frac{W}{\alpha \varpi} \log K$. Moreover, the optimum spectrum sharing for maximizing the average sum-rate and the guaranteed sum-rate is achieved at M=1.
|
2010.08660
|
Manas Gaur
|
Manas Gaur, Keyur Faldu, Amit Sheth
|
Semantics of the Black-Box: Can knowledge graphs help make deep learning
systems more interpretable and explainable?
|
6 pages + references, 4 figures, Accepted to IEEE internet computing
2020
| null | null | null |
cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
The recent series of innovations in deep learning (DL) have shown enormous
potential to impact individuals and society, both positively and negatively.
The DL models utilizing massive computing power and enormous datasets have
significantly outperformed prior historical benchmarks on increasingly
difficult, well-defined research tasks across technology domains such as
computer vision, natural language processing, signal processing, and
human-computer interactions. However, the Black-Box nature of DL models and
their over-reliance on massive amounts of data condensed into labels and dense
representations poses challenges for interpretability and explainability of the
system. Furthermore, DLs have not yet been proven in their ability to
effectively utilize relevant domain knowledge and experience critical to human
understanding. This aspect is missing in early data-focused approaches and
necessitated knowledge-infused learning and other strategies to incorporate
computational knowledge. This article demonstrates how knowledge, provided as a
knowledge graph, is incorporated into DL methods using knowledge-infused
learning, which is one of the strategies. We then discuss how this makes a
fundamental difference in the interpretability and explainability of current
approaches, and illustrate it with examples from natural language processing
for healthcare and education applications.
|
[
{
"created": "Fri, 16 Oct 2020 22:55:23 GMT",
"version": "v1"
},
{
"created": "Sun, 1 Nov 2020 02:28:43 GMT",
"version": "v2"
},
{
"created": "Tue, 3 Nov 2020 15:52:55 GMT",
"version": "v3"
},
{
"created": "Fri, 11 Dec 2020 23:03:11 GMT",
"version": "v4"
}
] |
2020-12-15
|
[
[
"Gaur",
"Manas",
""
],
[
"Faldu",
"Keyur",
""
],
[
"Sheth",
"Amit",
""
]
] |
The recent series of innovations in deep learning (DL) have shown enormous potential to impact individuals and society, both positively and negatively. The DL models utilizing massive computing power and enormous datasets have significantly outperformed prior historical benchmarks on increasingly difficult, well-defined research tasks across technology domains such as computer vision, natural language processing, signal processing, and human-computer interactions. However, the Black-Box nature of DL models and their over-reliance on massive amounts of data condensed into labels and dense representations poses challenges for interpretability and explainability of the system. Furthermore, DLs have not yet been proven in their ability to effectively utilize relevant domain knowledge and experience critical to human understanding. This aspect is missing in early data-focused approaches and necessitated knowledge-infused learning and other strategies to incorporate computational knowledge. This article demonstrates how knowledge, provided as a knowledge graph, is incorporated into DL methods using knowledge-infused learning, which is one of the strategies. We then discuss how this makes a fundamental difference in the interpretability and explainability of current approaches, and illustrate it with examples from natural language processing for healthcare and education applications.
|
2307.11357
|
Arthur M\"uller
|
Arthur M\"uller, Matthia Sabatelli
|
Bridging the Reality Gap of Reinforcement Learning based Traffic Signal
Control using Domain Randomization and Meta Learning
|
Paper was accepted by the ITSC 2023 (26th IEEE International
Conference on Intelligent Transportation Systems)
| null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Reinforcement Learning (RL) has been widely explored in Traffic Signal
Control (TSC) applications; however, no such system has yet been deployed in
practice. A key barrier to progress in this area is the reality gap, the
discrepancy that results from differences between simulation models and their
real-world equivalents. In this paper, we address this challenge by first
presenting a comprehensive analysis of potential simulation parameters that
contribute to this reality gap. We then also examine two promising strategies
that can bridge this gap: Domain Randomization (DR) and Model-Agnostic
Meta-Learning (MAML). Both strategies were trained with a traffic simulation
model of an intersection. In addition, the model was embedded in LemgoRL, a
framework that integrates realistic, safety-critical requirements into the
control system. Subsequently, we evaluated the performance of the two methods
on a separate model of the same intersection that was developed with a
different traffic simulator. In this way, we mimic the reality gap. Our
experimental results show that both DR and MAML outperform a state-of-the-art
RL algorithm, therefore highlighting their potential to mitigate the reality
gap in RL-based TSC systems.
|
[
{
"created": "Fri, 21 Jul 2023 05:17:21 GMT",
"version": "v1"
}
] |
2023-07-24
|
[
[
"Müller",
"Arthur",
""
],
[
"Sabatelli",
"Matthia",
""
]
] |
Reinforcement Learning (RL) has been widely explored in Traffic Signal Control (TSC) applications; however, no such system has yet been deployed in practice. A key barrier to progress in this area is the reality gap, the discrepancy that results from differences between simulation models and their real-world equivalents. In this paper, we address this challenge by first presenting a comprehensive analysis of potential simulation parameters that contribute to this reality gap. We then also examine two promising strategies that can bridge this gap: Domain Randomization (DR) and Model-Agnostic Meta-Learning (MAML). Both strategies were trained with a traffic simulation model of an intersection. In addition, the model was embedded in LemgoRL, a framework that integrates realistic, safety-critical requirements into the control system. Subsequently, we evaluated the performance of the two methods on a separate model of the same intersection that was developed with a different traffic simulator. In this way, we mimic the reality gap. Our experimental results show that both DR and MAML outperform a state-of-the-art RL algorithm, therefore highlighting their potential to mitigate the reality gap in RL-based TSC systems.
|
2101.04244
|
Mohammed Bahutair Mr.
|
Mohammed Bahutair, Athman Bouguettaya, and Azadeh Ghari Neiat
|
Multi-Perspective Trust Management Framework for Crowdsourced IoT
Services
|
14 pages, accepted and to appear in IEEE Transactions on Services
Computing
| null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a novel generic trust management framework for crowdsourced IoT
services. The framework exploits a multi-perspective trust model that captures
the inherent characteristics of crowdsourced IoT services. Each perspective is
defined by a set of attributes that contribute to the perspective's influence
on trust. The attributes are fed into a machine-learning-based algorithm to
generate a trust model for crowdsourced services in IoT environments. We
demonstrate the effectiveness of our approach by conducting experiments on
real-world datasets.
|
[
{
"created": "Tue, 12 Jan 2021 00:43:12 GMT",
"version": "v1"
}
] |
2021-01-13
|
[
[
"Bahutair",
"Mohammed",
""
],
[
"Bouguettaya",
"Athman",
""
],
[
"Neiat",
"Azadeh Ghari",
""
]
] |
We propose a novel generic trust management framework for crowdsourced IoT services. The framework exploits a multi-perspective trust model that captures the inherent characteristics of crowdsourced IoT services. Each perspective is defined by a set of attributes that contribute to the perspective's influence on trust. The attributes are fed into a machine-learning-based algorithm to generate a trust model for crowdsourced services in IoT environments. We demonstrate the effectiveness of our approach by conducting experiments on real-world datasets.
|
2008.07862
|
David Baum
|
David Baum
|
Exploring the Design Space of Aesthetics with the Repertory Grid
Technique
|
Appears in the Proceedings of the 28th International Symposium on
Graph Drawing and Network Visualization (GD 2020)
| null | null | null |
cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
By optimizing aesthetics, graph diagrams can be generated that are easier to
read and understand. However, the challenge lies in identifying suitable
aesthetics. We present a novel approach based on repertory grids to explore the
design space of aesthetics systematically. We applied our approach with three
independent groups of participants to systematically identify graph aesthetics.
In all three cases, we were able to reproduce the aesthetics with positively
evaluated influence on readability without any prior knowledge. We also applied
our approach to two- and three-dimensional domain-specific software
visualizations to demonstrate its versatility. In this case, we were also able
to acquire several aesthetics that are relevant for perceiving the
visualization.
|
[
{
"created": "Tue, 18 Aug 2020 11:25:26 GMT",
"version": "v1"
}
] |
2020-08-19
|
[
[
"Baum",
"David",
""
]
] |
By optimizing aesthetics, graph diagrams can be generated that are easier to read and understand. However, the challenge lies in identifying suitable aesthetics. We present a novel approach based on repertory grids to explore the design space of aesthetics systematically. We applied our approach with three independent groups of participants to systematically identify graph aesthetics. In all three cases, we were able to reproduce the aesthetics with positively evaluated influence on readability without any prior knowledge. We also applied our approach to two- and three-dimensional domain-specific software visualizations to demonstrate its versatility. In this case, we were also able to acquire several aesthetics that are relevant for perceiving the visualization.
|
2407.05850
|
Minghao Yang
|
Minghao Yang, Jingjing Zhang, Shengyun Liu
|
DFedSat: Communication-Efficient and Robust Decentralized Federated
Learning for LEO Satellite Constellations
|
13 pages, 10 figures
| null | null | null |
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Low Earth Orbit (LEO) satellites play a crucial role in the development of 6G
mobile networks and space-air-ground integrated systems. Recent advancements in
space technology have empowered LEO satellites with the capability to run AI
applications. However, centralized approaches, where ground stations (GSs) act
as servers and satellites as clients, often encounter slow convergence and
inefficiencies due to intermittent connectivity between satellites and GSs. In
contrast, decentralized federated learning (DFL) offers a promising alternative
by facilitating direct communication between satellites (clients) via
inter-satellite links (ISLs). However, inter-plane ISLs connecting satellites
from different orbital planes are dynamic due to Doppler shifts and pointing
limitations. This could impact model propagation and lead to slower
convergence. To mitigate these issues, we propose DFedSat, a fully
decentralized federated learning framework tailored for LEO satellites. DFedSat
accelerates the training process by employing two adaptive mechanisms for
intra-plane and inter-plane model aggregation, respectively. Furthermore, a
self-compensation mechanism is integrated to enhance the robustness of
inter-plane ISLs against transmission failure. Additionally, we derive the
sublinear convergence rate for the non-convex case of DFedSat. Extensive
experimental results demonstrate DFedSat's superiority over other DFL baselines
regarding convergence rate, communication efficiency, and resilience to
unreliable links.
|
[
{
"created": "Mon, 8 Jul 2024 12:00:49 GMT",
"version": "v1"
}
] |
2024-07-09
|
[
[
"Yang",
"Minghao",
""
],
[
"Zhang",
"Jingjing",
""
],
[
"Liu",
"Shengyun",
""
]
] |
Low Earth Orbit (LEO) satellites play a crucial role in the development of 6G mobile networks and space-air-ground integrated systems. Recent advancements in space technology have empowered LEO satellites with the capability to run AI applications. However, centralized approaches, where ground stations (GSs) act as servers and satellites as clients, often encounter slow convergence and inefficiencies due to intermittent connectivity between satellites and GSs. In contrast, decentralized federated learning (DFL) offers a promising alternative by facilitating direct communication between satellites (clients) via inter-satellite links (ISLs). However, inter-plane ISLs connecting satellites from different orbital planes are dynamic due to Doppler shifts and pointing limitations. This could impact model propagation and lead to slower convergence. To mitigate these issues, we propose DFedSat, a fully decentralized federated learning framework tailored for LEO satellites. DFedSat accelerates the training process by employing two adaptive mechanisms for intra-plane and inter-plane model aggregation, respectively. Furthermore, a self-compensation mechanism is integrated to enhance the robustness of inter-plane ISLs against transmission failure. Additionally, we derive the sublinear convergence rate for the non-convex case of DFedSat. Extensive experimental results demonstrate DFedSat's superiority over other DFL baselines regarding convergence rate, communication efficiency, and resilience to unreliable links.
|
2306.14834
|
Zheqing Zhu
|
Zheqing Zhu, Benjamin Van Roy
|
Scalable Neural Contextual Bandit for Recommender Systems
| null |
32nd ACM International Conference on Information and Knowledge
  Management (CIKM 2023)
| null | null |
cs.IR cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
High-quality recommender systems ought to deliver both innovative and
relevant content through effective and exploratory interactions with users.
Yet, supervised learning-based neural networks, which form the backbone of many
existing recommender systems, only leverage recognized user interests, falling
short when it comes to efficiently uncovering unknown user preferences. While
there has been some progress with neural contextual bandit algorithms towards
enabling online exploration through neural networks, their onerous
computational demands hinder widespread adoption in real-world recommender
systems. In this work, we propose a scalable sample-efficient neural contextual
bandit algorithm for recommender systems. To do this, we design an epistemic
neural network architecture, Epistemic Neural Recommendation (ENR), that
enables Thompson sampling at a large scale. In two distinct large-scale
experiments with real-world tasks, ENR significantly boosts click-through rates
and user ratings by at least 9% and 6% respectively compared to
state-of-the-art neural contextual bandit algorithms. Furthermore, it achieves
equivalent performance with at least 29% fewer user interactions compared to
the best-performing baseline algorithm. Remarkably, while accomplishing these
improvements, ENR demands orders of magnitude fewer computational resources
than neural contextual bandit baseline algorithms.
|
[
{
"created": "Mon, 26 Jun 2023 16:39:39 GMT",
"version": "v1"
},
{
"created": "Sun, 30 Jul 2023 09:01:01 GMT",
"version": "v2"
},
{
"created": "Sat, 19 Aug 2023 03:32:53 GMT",
"version": "v3"
}
] |
2023-08-22
|
[
[
"Zhu",
"Zheqing",
""
],
[
"Van Roy",
"Benjamin",
""
]
] |
High-quality recommender systems ought to deliver both innovative and relevant content through effective and exploratory interactions with users. Yet, supervised learning-based neural networks, which form the backbone of many existing recommender systems, only leverage recognized user interests, falling short when it comes to efficiently uncovering unknown user preferences. While there has been some progress with neural contextual bandit algorithms towards enabling online exploration through neural networks, their onerous computational demands hinder widespread adoption in real-world recommender systems. In this work, we propose a scalable sample-efficient neural contextual bandit algorithm for recommender systems. To do this, we design an epistemic neural network architecture, Epistemic Neural Recommendation (ENR), that enables Thompson sampling at a large scale. In two distinct large-scale experiments with real-world tasks, ENR significantly boosts click-through rates and user ratings by at least 9% and 6% respectively compared to state-of-the-art neural contextual bandit algorithms. Furthermore, it achieves equivalent performance with at least 29% fewer user interactions compared to the best-performing baseline algorithm. Remarkably, while accomplishing these improvements, ENR demands orders of magnitude fewer computational resources than neural contextual bandit baseline algorithms.
|
1603.05095
|
Navid Azizan Ruhi
|
Navid Azizan Ruhi, Christos Thrampoulidis and Babak Hassibi
|
Improved Bounds on the Epidemic Threshold of Exact SIS Models on Complex
Networks
|
Submitted to CDC 2016
| null |
10.1109/CDC.2016.7798804
| null |
cs.SI math.DS physics.soc-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The SIS (susceptible-infected-susceptible) epidemic model on an arbitrary
network, without making approximations, is a $2^n$-state Markov chain with a
unique absorbing state (the all-healthy state). This makes analysis of the SIS
model and, in particular, determining the threshold of epidemic spread quite
challenging. It has been shown that the exact marginal probabilities of
infection can be upper bounded by an $n$-dimensional linear time-invariant
system, a consequence of which is that the Markov chain is "fast-mixing" when
the LTI system is stable, i.e. when
$\frac{\beta}{\delta}<\frac{1}{\lambda_{\max}(A)}$ (where $\beta$ is the
infection rate per link, $\delta$ is the recovery rate, and $\lambda_{\max}(A)$
is the largest eigenvalue of the network's adjacency matrix). This well-known
threshold has been recently shown not to be tight in several cases, such as in
a star network. In this paper, we provide tighter upper bounds on the exact
marginal probabilities of infection, by also taking pairwise infection
probabilities into account. Based on this improved bound, we derive tighter
eigenvalue conditions that guarantee fast mixing (i.e., logarithmic mixing
time) of the chain. We demonstrate the improvement of the threshold condition
by comparing the new bound with the known one on various networks and epidemic
parameters.
|
[
{
"created": "Wed, 16 Mar 2016 13:44:04 GMT",
"version": "v1"
}
] |
2019-01-21
|
[
[
"Ruhi",
"Navid Azizan",
""
],
[
"Thrampoulidis",
"Christos",
""
],
[
"Hassibi",
"Babak",
""
]
] |
The SIS (susceptible-infected-susceptible) epidemic model on an arbitrary network, without making approximations, is a $2^n$-state Markov chain with a unique absorbing state (the all-healthy state). This makes analysis of the SIS model and, in particular, determining the threshold of epidemic spread quite challenging. It has been shown that the exact marginal probabilities of infection can be upper bounded by an $n$-dimensional linear time-invariant system, a consequence of which is that the Markov chain is "fast-mixing" when the LTI system is stable, i.e. when $\frac{\beta}{\delta}<\frac{1}{\lambda_{\max}(A)}$ (where $\beta$ is the infection rate per link, $\delta$ is the recovery rate, and $\lambda_{\max}(A)$ is the largest eigenvalue of the network's adjacency matrix). This well-known threshold has been recently shown not to be tight in several cases, such as in a star network. In this paper, we provide tighter upper bounds on the exact marginal probabilities of infection, by also taking pairwise infection probabilities into account. Based on this improved bound, we derive tighter eigenvalue conditions that guarantee fast mixing (i.e., logarithmic mixing time) of the chain. We demonstrate the improvement of the threshold condition by comparing the new bound with the known one on various networks and epidemic parameters.
|
2310.11301
|
Marvin Wyrich
|
Marvin Wyrich
|
Source Code Comprehension: A Contemporary Definition and Conceptual
Model for Empirical Investigation
|
Submission under review
| null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Be it in debugging, testing, code review or, more recently, pair programming
with AI assistance: in all these activities, software engineers need to
understand source code. Accordingly, plenty of research is taking place in the
field to find out, for example, what makes code easy to understand and which
tools can best support developers in their comprehension process. And while any
code comprehension researcher certainly has a rough idea of what they mean when
they mention a developer having a good understanding of a piece of code, to
date, the research community has not managed to define source code
comprehension as a concept. Instead, in primary research on code comprehension,
an implicit definition by task prevails, i.e., code comprehension is what the
experimental tasks measure. This approach has two negative consequences. First,
it makes it difficult to conduct secondary research. Currently, each code
comprehension primary study uses different comprehension tasks and measures,
and thus it is not clear whether different studies intend to measure the same
construct. Second, authors of a primary study run into the difficulty of
justifying their design decisions without a definition of what they attempt to
measure. An operationalization of an insufficiently described construct occurs,
which poses a threat to construct validity.
The task of defining code comprehension considering the theory of the past
fifty years is not an easy one. Nor is it a task that every author of a primary
study must accomplish on their own. Therefore, this paper constitutes a
reference work that defines source code comprehension and presents a conceptual
framework in which researchers can anchor their empirical code comprehension
research.
|
[
{
"created": "Tue, 17 Oct 2023 14:23:46 GMT",
"version": "v1"
}
] |
2023-10-18
|
[
[
"Wyrich",
"Marvin",
""
]
] |
Be it in debugging, testing, code review or, more recently, pair programming with AI assistance: in all these activities, software engineers need to understand source code. Accordingly, plenty of research is taking place in the field to find out, for example, what makes code easy to understand and which tools can best support developers in their comprehension process. And while any code comprehension researcher certainly has a rough idea of what they mean when they mention a developer having a good understanding of a piece of code, to date, the research community has not managed to define source code comprehension as a concept. Instead, in primary research on code comprehension, an implicit definition by task prevails, i.e., code comprehension is what the experimental tasks measure. This approach has two negative consequences. First, it makes it difficult to conduct secondary research. Currently, each code comprehension primary study uses different comprehension tasks and measures, and thus it is not clear whether different studies intend to measure the same construct. Second, authors of a primary study run into the difficulty of justifying their design decisions without a definition of what they attempt to measure. An operationalization of an insufficiently described construct occurs, which poses a threat to construct validity. The task of defining code comprehension considering the theory of the past fifty years is not an easy one. Nor is it a task that every author of a primary study must accomplish on their own. Therefore, this paper constitutes a reference work that defines source code comprehension and presents a conceptual framework in which researchers can anchor their empirical code comprehension research.
|
1301.3230
|
Easwar Vivek Mangipudi
|
Easwar Vivek Mangipudi, Venkatesh Ramaiyan
|
A Framework for Quality of Service with a Multiple Access Strategy
|
6 pages
| null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We study a problem of scheduling real-time traffic with hard delay
constraints in an unreliable wireless channel. Packets arrive at a constant
rate to the network and have to be delivered within a fixed number of slots in
a fading wireless channel. For an infrastructure mode of traffic with a
centralized scheduler, we are interested in the long time average throughput
achievable for the real-time traffic. In [1], the authors have studied the
feasible throughput vectors by identifying the necessary and sufficient
conditions using workload characterization. In our work, we provide a
characterization of the feasible throughput vectors using the notion of the
rate region. We then discuss an extension to the network model studied in [1]
by allowing multiple access during contention and propose an enhancement to the
rate region of the wireless network. We characterize the feasible throughput
vectors with the multiple access technique and study throughput optimal and
utility maximizing strategies for the network scenario. Using simulations, we
evaluate the performance of the proposed strategy and discuss its advantages.
|
[
{
"created": "Tue, 15 Jan 2013 06:05:42 GMT",
"version": "v1"
},
{
"created": "Thu, 29 Aug 2013 19:56:21 GMT",
"version": "v2"
},
{
"created": "Wed, 18 Dec 2013 20:47:03 GMT",
"version": "v3"
}
] |
2013-12-19
|
[
[
"Mangipudi",
"Easwar Vivek",
""
],
[
"Ramaiyan",
"Venkatesh",
""
]
] |
We study a problem of scheduling real-time traffic with hard delay constraints in an unreliable wireless channel. Packets arrive at a constant rate to the network and have to be delivered within a fixed number of slots in a fading wireless channel. For an infrastructure mode of traffic with a centralized scheduler, we are interested in the long time average throughput achievable for the real-time traffic. In [1], the authors have studied the feasible throughput vectors by identifying the necessary and sufficient conditions using workload characterization. In our work, we provide a characterization of the feasible throughput vectors using the notion of the rate region. We then discuss an extension to the network model studied in [1] by allowing multiple access during contention and propose an enhancement to the rate region of the wireless network. We characterize the feasible throughput vectors with the multiple access technique and study throughput optimal and utility maximizing strategies for the network scenario. Using simulations, we evaluate the performance of the proposed strategy and discuss its advantages.
|
1302.1555
|
Alexander V. Kozlov
|
Alexander V. Kozlov, Daphne Koller
|
Nonuniform Dynamic Discretization in Hybrid Networks
|
Appears in Proceedings of the Thirteenth Conference on Uncertainty in
Artificial Intelligence (UAI1997)
| null | null |
UAI-P-1997-PG-314-325
|
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We consider probabilistic inference in general hybrid networks, which include
continuous and discrete variables in an arbitrary topology. We reexamine the
question of variable discretization in a hybrid network aiming at minimizing
the information loss induced by the discretization. We show that a nonuniform
partition across all variables as opposed to uniform partition of each variable
separately reduces the size of the data structures needed to represent a
continuous function. We also provide a simple but efficient procedure for
nonuniform partition. To represent a nonuniform discretization in the computer
memory, we introduce a new data structure, which we call a Binary Split
Partition (BSP) tree. We show that BSP trees can be an exponential factor
smaller than the data structures in the standard uniform discretization in
multiple dimensions and show how the BSP trees can be used in the standard join
tree algorithm. We show that the accuracy of the inference process can be
significantly improved by adjusting discretization with evidence. We construct
an iterative anytime algorithm that gradually improves the quality of the
discretization and the accuracy of the answer on a query. We provide empirical
evidence that the algorithm converges.
|
[
{
"created": "Wed, 6 Feb 2013 15:57:46 GMT",
"version": "v1"
}
] |
2013-02-08
|
[
[
"Kozlov",
"Alexander V.",
""
],
[
"Koller",
"Daphne",
""
]
] |
We consider probabilistic inference in general hybrid networks, which include continuous and discrete variables in an arbitrary topology. We reexamine the question of variable discretization in a hybrid network aiming at minimizing the information loss induced by the discretization. We show that a nonuniform partition across all variables as opposed to uniform partition of each variable separately reduces the size of the data structures needed to represent a continuous function. We also provide a simple but efficient procedure for nonuniform partition. To represent a nonuniform discretization in the computer memory, we introduce a new data structure, which we call a Binary Split Partition (BSP) tree. We show that BSP trees can be an exponential factor smaller than the data structures in the standard uniform discretization in multiple dimensions and show how the BSP trees can be used in the standard join tree algorithm. We show that the accuracy of the inference process can be significantly improved by adjusting discretization with evidence. We construct an iterative anytime algorithm that gradually improves the quality of the discretization and the accuracy of the answer on a query. We provide empirical evidence that the algorithm converges.
|
1811.01811
|
Kemal Davaslioglu
|
Yi Shi, Yalin E. Sagduyu, Kemal Davaslioglu, and Jason H. Li
|
Active Deep Learning Attacks under Strict Rate Limitations for Online
API Calls
|
Presented at 2018 IEEE International Symposium on Technologies for
Homeland Security (HST) on October 23 2018. Received the Best Paper Award in
Cyber Security Track
| null | null | null |
cs.LG cs.CR stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Machine learning has been applied to a broad range of applications and some
of them are available online as application programming interfaces (APIs) with
either free (trial) or paid subscriptions. In this paper, we study adversarial
machine learning in the form of black-box attacks on online classifier APIs. We
start with a deep learning based exploratory (inference) attack, which aims to
build a classifier that can provide similar classification results (labels) as
the target classifier. To minimize the difference between the labels returned
by the inferred classifier and the target classifier, we show that the deep
learning based exploratory attack requires a large number of labeled training
data samples. These labels can be collected by calling the online API, but
usually there is some strict rate limitation on the number of allowed API
calls. To mitigate the impact of limited training data, we develop an active
learning approach that first builds a classifier based on a small number of API
calls and uses this classifier to select samples to further collect their
labels. Then, a new classifier is built using more training data samples. This
updating process can be repeated multiple times. We show that this active
learning approach can build an adversarial classifier with a small statistical
difference from the target classifier using only a limited number of training
data samples. We further consider evasion and causative (poisoning) attacks
based on the inferred classifier that is built by the exploratory attack.
Evasion attack determines samples that the target classifier is likely to
misclassify, whereas causative attack provides erroneous training data samples
to reduce the reliability of the re-trained classifier. The success of these
attacks shows that adversarial machine learning emerges as a feasible threat in
the realistic case with limited training data.
|
[
{
"created": "Mon, 5 Nov 2018 15:50:30 GMT",
"version": "v1"
}
] |
2018-11-06
|
[
[
"Shi",
"Yi",
""
],
[
"Sagduyu",
"Yalin E.",
""
],
[
"Davaslioglu",
"Kemal",
""
],
[
"Li",
"Jason H.",
""
]
] |
Machine learning has been applied to a broad range of applications and some of them are available online as application programming interfaces (APIs) with either free (trial) or paid subscriptions. In this paper, we study adversarial machine learning in the form of black-box attacks on online classifier APIs. We start with a deep learning based exploratory (inference) attack, which aims to build a classifier that can provide similar classification results (labels) as the target classifier. To minimize the difference between the labels returned by the inferred classifier and the target classifier, we show that the deep learning based exploratory attack requires a large number of labeled training data samples. These labels can be collected by calling the online API, but usually there is some strict rate limitation on the number of allowed API calls. To mitigate the impact of limited training data, we develop an active learning approach that first builds a classifier based on a small number of API calls and uses this classifier to select samples to further collect their labels. Then, a new classifier is built using more training data samples. This updating process can be repeated multiple times. We show that this active learning approach can build an adversarial classifier with a small statistical difference from the target classifier using only a limited number of training data samples. We further consider evasion and causative (poisoning) attacks based on the inferred classifier that is built by the exploratory attack. Evasion attack determines samples that the target classifier is likely to misclassify, whereas causative attack provides erroneous training data samples to reduce the reliability of the re-trained classifier. The success of these attacks shows that adversarial machine learning emerges as a feasible threat in the realistic case with limited training data.
|
1905.12541
|
Susan Stepney
|
Penelope Faulkner Rainford, Angelika Sebald, Susan Stepney
|
MetaChem: An Algebraic Framework for Artificial Chemistries
|
39 pages, 19 figures; minor typos corrected
|
Artificial Life 26(2):153-195 2020
|
10.1162/artl_a_00315
| null |
cs.ET
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce MetaChem, a language for representing and implementing
Artificial Chemistries. We motivate the need for modularisation and
standardisation in representation of artificial chemistries. We describe a
mathematical formalism for Static Graph MetaChem, a static graph based system.
MetaChem supports different levels of description, and has a formal
description; we illustrate these using StringCatChem, a toy artificial
chemistry. We describe two existing Artificial Chemistries -- Jordan Algebra
AChem and Swarm Chemistries -- in MetaChem, and demonstrate how they can be
combined in several different configurations by using a MetaChem environmental
link. MetaChem provides a route to standardisation, reuse, and composition of
Artificial Chemistries and their tools.
|
[
{
"created": "Wed, 29 May 2019 15:42:24 GMT",
"version": "v1"
},
{
"created": "Mon, 30 Sep 2019 16:44:18 GMT",
"version": "v2"
},
{
"created": "Sun, 14 Jun 2020 19:36:09 GMT",
"version": "v3"
}
] |
2020-06-16
|
[
[
"Rainford",
"Penelope Faulkner",
""
],
[
"Sebald",
"Angelika",
""
],
[
"Stepney",
"Susan",
""
]
] |
We introduce MetaChem, a language for representing and implementing Artificial Chemistries. We motivate the need for modularisation and standardisation in representation of artificial chemistries. We describe a mathematical formalism for Static Graph MetaChem, a static graph based system. MetaChem supports different levels of description, and has a formal description; we illustrate these using StringCatChem, a toy artificial chemistry. We describe two existing Artificial Chemistries -- Jordan Algebra AChem and Swarm Chemistries -- in MetaChem, and demonstrate how they can be combined in several different configurations by using a MetaChem environmental link. MetaChem provides a route to standardisation, reuse, and composition of Artificial Chemistries and their tools.
|
2011.03829
|
Raman Goyal
|
Raman Goyal and Manoranjan Majji and Robert E. Skelton
|
Robust Shape Control of Gyroscopic Tensegrity Robotic Arm
| null | null | null | null |
cs.RO cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper proposes a model-based approach to control the shape of a
tensegrity system by driving its node position locations. The nonlinear
dynamics of the tensegrity system is used to regulate position, velocity, and
acceleration to the specified reference trajectory. State feedback control
design is used to obtain the solution for the control variable as a linear
programming problem. Shape control for the gyroscopic tensegrity systems is
discussed, and it is observed that these systems increase the reachable space
for the structure by providing independent control over certain rotational
degrees of freedom. Disturbance rejection of the tensegrity system is further
studied in the paper. A methodology to calculate the control gains to bound the
errors for five different types of problems is provided. The formulation uses a
Linear Matrix Inequality (LMI) approach to stipulate the desired performance
bounds on the error for $\mathcal{H}_\infty$, generalized $\mathcal{H}_2$, LQR,
covariance control and stabilizing control problem. A high degree of freedom
tensegrity $T_2D_1$ robotic arm is used as an example to show the efficacy of
the formulation.
|
[
{
"created": "Sat, 7 Nov 2020 18:41:01 GMT",
"version": "v1"
},
{
"created": "Thu, 19 Nov 2020 21:51:05 GMT",
"version": "v2"
}
] |
2020-11-23
|
[
[
"Goyal",
"Raman",
""
],
[
"Majji",
"Manoranjan",
""
],
[
"Skelton",
"Robert E.",
""
]
] |
This paper proposes a model-based approach to control the shape of a tensegrity system by driving its node position locations. The nonlinear dynamics of the tensegrity system is used to regulate position, velocity, and acceleration to the specified reference trajectory. State feedback control design is used to obtain the solution for the control variable as a linear programming problem. Shape control for the gyroscopic tensegrity systems is discussed, and it is observed that these systems increase the reachable space for the structure by providing independent control over certain rotational degrees of freedom. Disturbance rejection of the tensegrity system is further studied in the paper. A methodology to calculate the control gains to bound the errors for five different types of problems is provided. The formulation uses a Linear Matrix Inequality (LMI) approach to stipulate the desired performance bounds on the error for $\mathcal{H}_\infty$, generalized $\mathcal{H}_2$, LQR, covariance control and stabilizing control problem. A high degree of freedom tensegrity $T_2D_1$ robotic arm is used as an example to show the efficacy of the formulation.
|
0709.1056
|
Stephane Norte
|
Stephane Norte
|
A Sudoku Game for People with Motor Impairments
|
7 pages, 5 figures
| null | null | null |
cs.HC cs.CY
| null |
Computer games are motivating and beneficial in learning different
educational skills. Most people use their fingers, hands, and arms when using a
computer game. However, for people with motor disabilities this task can be a
barrier. We present a new Sudoku game for people whose motion is impaired,
called Sudoku 4ALL. With this special interface a person can control the game
with the voice or with a single switch. Our research aims to cautiously search
for issues that might be appropriate for computational support and to build
enabling technologies that increase individuals' functional independence in a
game environment.
|
[
{
"created": "Fri, 7 Sep 2007 11:59:22 GMT",
"version": "v1"
},
{
"created": "Thu, 13 Sep 2007 13:26:17 GMT",
"version": "v2"
},
{
"created": "Mon, 17 Sep 2007 21:08:35 GMT",
"version": "v3"
}
] |
2007-09-18
|
[
[
"Norte",
"Stephane",
""
]
] |
Computer games are motivating and beneficial in learning different educational skills. Most people use their fingers, hands, and arms when using a computer game. However, for people with motor disabilities this task can be a barrier. We present a new Sudoku game for people whose motion is impaired, called Sudoku 4ALL. With this special interface a person can control the game with the voice or with a single switch. Our research aims to cautiously search for issues that might be appropriate for computational support and to build enabling technologies that increase individuals' functional independence in a game environment.
|
2204.13892
|
Ziming Chen
|
Chang Shu, Ziming Chen, Lei Chen, Kuan Ma, Minghui Wang and Haibing
Ren
|
SideRT: A Real-time Pure Transformer Architecture for Single Image Depth
Estimation
|
7 pages, 5 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Since context modeling is critical for estimating depth from a single image,
researchers put tremendous effort into obtaining global context. Many global
manipulations are designed for traditional CNN-based architectures to overcome
the locality of convolutions. Attention mechanisms or transformers originally
designed for capturing long-range dependencies might be a better choice, but
usually complicate architectures and could lead to a decrease in inference
speed. In this work, we propose a pure transformer architecture called SideRT
that can attain excellent predictions in real-time. In order to capture better
global context, Cross-Scale Attention (CSA) and Multi-Scale Refinement (MSR)
modules are designed to work collaboratively to fuse features of different
scales efficiently. CSA modules focus on fusing features of high semantic
similarities, while MSR modules aim to fuse features at corresponding
positions. These two modules contain a few learnable parameters without
convolutions, based on which a lightweight yet effective model is built. This
architecture achieves state-of-the-art performance in real-time (51.3 FPS) and
becomes much faster with a reasonable performance drop on a smaller backbone
Swin-T (83.1 FPS). Furthermore, its performance surpasses the previous
state-of-the-art by a large margin, improving the AbsRel metric by 6.9% on
KITTI and 9.7% on NYU. To the best of our knowledge, this is the first work to
show that
transformer-based networks can attain state-of-the-art performance in real-time
in the single image depth estimation field. Code will be made available soon.
|
[
{
"created": "Fri, 29 Apr 2022 05:46:20 GMT",
"version": "v1"
}
] |
2022-05-02
|
[
[
"Shu",
"Chang",
""
],
[
"Chen",
"Ziming",
""
],
[
"Chen",
"Lei",
""
],
[
"Ma",
"Kuan",
""
],
[
"Wang",
"Minghui",
""
],
[
"Ren",
"Haibing",
""
]
] |
Since context modeling is critical for estimating depth from a single image, researchers put tremendous effort into obtaining global context. Many global manipulations are designed for traditional CNN-based architectures to overcome the locality of convolutions. Attention mechanisms or transformers, originally designed for capturing long-range dependencies, might be a better choice, but they usually complicate architectures and can decrease inference speed. In this work, we propose a pure transformer architecture called SideRT that can attain excellent predictions in real-time. In order to capture better global context, Cross-Scale Attention (CSA) and Multi-Scale Refinement (MSR) modules are designed to work collaboratively to fuse features of different scales efficiently. CSA modules focus on fusing features of high semantic similarities, while MSR modules aim to fuse features at corresponding positions. These two modules contain a few learnable parameters without convolutions, based on which a lightweight yet effective model is built. This architecture achieves state-of-the-art performance in real-time (51.3 FPS) and becomes much faster with a reasonable performance drop on a smaller backbone Swin-T (83.1 FPS). Furthermore, its performance surpasses the previous state-of-the-art by a large margin, improving the AbsRel metric by 6.9% on KITTI and 9.7% on NYU. To the best of our knowledge, this is the first work to show that transformer-based networks can attain state-of-the-art performance in real-time in the single image depth estimation field. Code will be made available soon.
|
0909.0122
|
Sanjiang Li
|
Sanjiang Li, Anthony G. Cohn
|
Reasoning with Topological and Directional Spatial Information
| null |
Computational Intelligence, 2012, 28(4):579-616
| null | null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Current research on qualitative spatial representation and reasoning mainly
focuses on one single aspect of space. In real world applications, however,
multiple spatial aspects are often involved simultaneously.
This paper investigates problems arising in reasoning with combined
topological and directional information. We use the RCC8 algebra and the
Rectangle Algebra (RA) for expressing topological and directional information
respectively. We give examples to show that the bipath-consistency algorithm
BIPATH is incomplete for solving even basic RCC8 and RA constraints. If
topological constraints are taken from some maximal tractable subclasses of
RCC8, and directional constraints are taken from a subalgebra, termed DIR49, of
RA, then we show that BIPATH is able to separate topological constraints from
directional ones. This means, given a set of hybrid topological and directional
constraints from the above subclasses of RCC8 and RA, we can transfer the joint
satisfaction problem in polynomial time to two independent satisfaction
problems in RCC8 and RA. For general RA constraints, we give a method to
compute solutions that satisfy all topological constraints and approximately
satisfy each RA constraint to any prescribed precision.
|
[
{
"created": "Tue, 1 Sep 2009 08:31:22 GMT",
"version": "v1"
}
] |
2013-10-21
|
[
[
"Li",
"Sanjiang",
""
],
[
"Cohn",
"Anthony G.",
""
]
] |
Current research on qualitative spatial representation and reasoning mainly focuses on one single aspect of space. In real world applications, however, multiple spatial aspects are often involved simultaneously. This paper investigates problems arising in reasoning with combined topological and directional information. We use the RCC8 algebra and the Rectangle Algebra (RA) for expressing topological and directional information respectively. We give examples to show that the bipath-consistency algorithm BIPATH is incomplete for solving even basic RCC8 and RA constraints. If topological constraints are taken from some maximal tractable subclasses of RCC8, and directional constraints are taken from a subalgebra, termed DIR49, of RA, then we show that BIPATH is able to separate topological constraints from directional ones. This means, given a set of hybrid topological and directional constraints from the above subclasses of RCC8 and RA, we can transfer the joint satisfaction problem in polynomial time to two independent satisfaction problems in RCC8 and RA. For general RA constraints, we give a method to compute solutions that satisfy all topological constraints and approximately satisfy each RA constraint to any prescribed precision.
|
1908.03361
|
Bj\"orn Barz
|
Bj\"orn Barz, Kai Schr\"oter, Moritz M\"unch, Bin Yang, Andrea Unger,
Doris Dransch, Joachim Denzler
|
Enhancing Flood Impact Analysis using Interactive Retrieval of Social
Media Images
| null |
Archives of Data Science, Series A, 5.1, 2018
|
10.5445/KSP/1000087327/06
| null |
cs.IR cs.CV cs.MM eess.IV
|
http://creativecommons.org/licenses/by-sa/4.0/
|
The analysis of natural disasters such as floods in a timely manner often
suffers from limited data due to a coarse distribution of sensors or sensor
failures. This limitation could be alleviated by leveraging information
contained in images of the event posted on social media platforms, so-called
"Volunteered Geographic Information (VGI)". To save the analyst from the need
to inspect all images posted online manually, we propose to use content-based
image retrieval with the possibility of relevance feedback for retrieving only
relevant images of the event to be analyzed. To evaluate this approach, we
introduce a new dataset of 3,710 flood images, annotated by domain experts
regarding their relevance with respect to three tasks (determining the flooded
area, inundation depth, water pollution). We compare several image features and
relevance feedback methods on that dataset, mixed with 97,085 distractor
images, and are able to improve the precision among the top 100 retrieval
results from 55% with the baseline retrieval to 87% after 5 rounds of feedback.
|
[
{
"created": "Fri, 9 Aug 2019 08:29:57 GMT",
"version": "v1"
}
] |
2020-03-24
|
[
[
"Barz",
"Björn",
""
],
[
"Schröter",
"Kai",
""
],
[
"Münch",
"Moritz",
""
],
[
"Yang",
"Bin",
""
],
[
"Unger",
"Andrea",
""
],
[
"Dransch",
"Doris",
""
],
[
"Denzler",
"Joachim",
""
]
] |
The analysis of natural disasters such as floods in a timely manner often suffers from limited data due to a coarse distribution of sensors or sensor failures. This limitation could be alleviated by leveraging information contained in images of the event posted on social media platforms, so-called "Volunteered Geographic Information (VGI)". To save the analyst from the need to inspect all images posted online manually, we propose to use content-based image retrieval with the possibility of relevance feedback for retrieving only relevant images of the event to be analyzed. To evaluate this approach, we introduce a new dataset of 3,710 flood images, annotated by domain experts regarding their relevance with respect to three tasks (determining the flooded area, inundation depth, water pollution). We compare several image features and relevance feedback methods on that dataset, mixed with 97,085 distractor images, and are able to improve the precision among the top 100 retrieval results from 55% with the baseline retrieval to 87% after 5 rounds of feedback.
|
1405.2736
|
Elisa Gorla
|
Elisa Gorla, Alberto Ravagnani
|
Subspace codes from Ferrers diagrams
|
minor edits
| null | null | null |
cs.IT math.CO math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper we give new constructions of Ferrers diagram rank metric codes,
which achieve the largest possible dimension. In particular, we prove several
cases of a conjecture by T. Etzion and N. Silberstein. We also establish a
sharp lower bound on the dimension of linear rank metric anticodes with a given
profile. Combining our results with the multilevel construction, we produce
examples of subspace codes with the largest known cardinality for the given
parameters.
|
[
{
"created": "Mon, 12 May 2014 13:01:37 GMT",
"version": "v1"
},
{
"created": "Fri, 13 Jun 2014 09:03:54 GMT",
"version": "v2"
}
] |
2014-06-16
|
[
[
"Gorla",
"Elisa",
""
],
[
"Ravagnani",
"Alberto",
""
]
] |
In this paper we give new constructions of Ferrers diagram rank metric codes, which achieve the largest possible dimension. In particular, we prove several cases of a conjecture by T. Etzion and N. Silberstein. We also establish a sharp lower bound on the dimension of linear rank metric anticodes with a given profile. Combining our results with the multilevel construction, we produce examples of subspace codes with the largest known cardinality for the given parameters.
|
1611.09010
|
Francesc Moreno-Noguer
|
Francesc Moreno-Noguer
|
3D Human Pose Estimation from a Single Image via Distance Matrix
Regression
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper addresses the problem of 3D human pose estimation from a single
image. We follow a standard two-step pipeline by first detecting the 2D
position of the $N$ body joints, and then using these observations to infer 3D
pose. For the first step, we use a recent CNN-based detector. For the second
step, most existing approaches perform 2$N$-to-3$N$ regression of the Cartesian
joint coordinates. We show that more precise pose estimates can be obtained by
representing both the 2D and 3D human poses using $N\times N$ distance
matrices, and formulating the problem as a 2D-to-3D distance matrix regression.
For learning such a regressor we leverage simple Neural Network
architectures which, by construction, enforce positivity and symmetry of the
predicted matrices. The approach also has the advantage of naturally handling
missing observations and allows hypothesizing the position of non-observed
joints. Quantitative results on the HumanEva and Human3.6M datasets demonstrate
consistent performance gains over the state-of-the-art. Qualitative evaluation
on in-the-wild images from the LSP dataset, using the regressor learned on
Human3.6M, reveals very promising generalization results.
|
[
{
"created": "Mon, 28 Nov 2016 07:36:31 GMT",
"version": "v1"
}
] |
2016-11-29
|
[
[
"Moreno-Noguer",
"Francesc",
""
]
] |
This paper addresses the problem of 3D human pose estimation from a single image. We follow a standard two-step pipeline by first detecting the 2D position of the $N$ body joints, and then using these observations to infer 3D pose. For the first step, we use a recent CNN-based detector. For the second step, most existing approaches perform 2$N$-to-3$N$ regression of the Cartesian joint coordinates. We show that more precise pose estimates can be obtained by representing both the 2D and 3D human poses using $N\times N$ distance matrices, and formulating the problem as a 2D-to-3D distance matrix regression. For learning such a regressor we leverage simple Neural Network architectures which, by construction, enforce positivity and symmetry of the predicted matrices. The approach also has the advantage of naturally handling missing observations and allows hypothesizing the position of non-observed joints. Quantitative results on the HumanEva and Human3.6M datasets demonstrate consistent performance gains over the state-of-the-art. Qualitative evaluation on in-the-wild images from the LSP dataset, using the regressor learned on Human3.6M, reveals very promising generalization results.
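The $N\times N$ distance-matrix representation described above can be illustrated with a short sketch. The joint count and coordinates below are made-up placeholders, not outputs of the paper's detector:

```python
import numpy as np

def distance_matrix(joints):
    """joints: (N, d) array of joint coordinates -> (N, N) pairwise Euclidean distances."""
    diff = joints[:, None, :] - joints[None, :, :]
    return np.linalg.norm(diff, axis=-1)

# Hypothetical 2D detections for N = 14 body joints
joints_2d = np.random.rand(14, 2)
D = distance_matrix(joints_2d)

# The matrix is symmetric with a zero diagonal by construction,
# which is exactly the structure the regressor is built to enforce.
assert np.allclose(D, D.T) and np.allclose(np.diag(D), 0.0)
```

A 2D-to-3D regressor would then map such a 2D distance matrix to its 3D counterpart, from which joint positions can be recovered.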
|
1305.3021
|
Ijaz Bukhari ijaz bukhari
|
Ijaz Bukhari, Nuhman-ul-Haq and Khizar Hyat
|
Wave Atom Based Watermarking
|
I want to withdraw the paper due to serious error
| null | null | null |
cs.MM cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Watermarking helps in ensuring the originality, ownership and copyrights of a
digital image. This paper aims at embedding a watermark in an image using the
Wave Atom Transform. Wave atoms are preferred over other transforms due to
their sparser expansion, adaptability to the direction of local patterns, and
sharp frequency localization. In this scheme, we spread the watermark across
the image so that the information at any one place is very small and
undetectable. In order to extract the watermark and verify ownership of an
image, one has the advantage of prior knowledge of the embedding locations.
High-amplitude noise would need to be added to the image to distort the
watermark. Furthermore, spreading the information ensures the robustness of
the watermark data. The proposed scheme has the ability to withstand malicious
operations and attacks.
|
[
{
"created": "Tue, 14 May 2013 05:27:52 GMT",
"version": "v1"
},
{
"created": "Fri, 7 Aug 2015 13:06:09 GMT",
"version": "v2"
}
] |
2015-08-10
|
[
[
"Bukhari",
"Ijaz",
""
],
[
"Nuhman-ul-Haq",
"",
""
],
[
"Hyat",
"Khizar",
""
]
] |
Watermarking helps in ensuring the originality, ownership and copyrights of a digital image. This paper aims at embedding a watermark in an image using the Wave Atom Transform. Wave atoms are preferred over other transforms due to their sparser expansion, adaptability to the direction of local patterns, and sharp frequency localization. In this scheme, we spread the watermark across the image so that the information at any one place is very small and undetectable. In order to extract the watermark and verify ownership of an image, one has the advantage of prior knowledge of the embedding locations. High-amplitude noise would need to be added to the image to distort the watermark. Furthermore, spreading the information ensures the robustness of the watermark data. The proposed scheme has the ability to withstand malicious operations and attacks.
|
2203.07605
|
Hanna Sumita
|
Hanna Sumita, Shinji Ito, Kei Takemura, Daisuke Hatano, Takuro
Fukunaga, Naonori Kakimura, Ken-ichi Kawarabayashi
|
Online Task Assignment Problems with Reusable Resources
|
Appeared in AAAI-22
| null | null | null |
cs.DS cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We study the online task assignment problem with reusable resources, motivated by
practical applications such as ridesharing, crowdsourcing and job hiring. In
the problem, we are given a set of offline vertices (agents), and, at each
time, an online vertex (task) arrives randomly according to a known
time-dependent distribution. Upon arrival, we assign the task to agents
immediately and irrevocably. The goal of the problem is to maximize the
expected total profit produced by completed tasks. The key features of our
problem are (1) an agent is reusable, i.e., an agent comes back to the market
after completing the assigned task, (2) an agent may reject the assigned task
to stay in the market, and (3) a task may accommodate multiple agents. The setting
generalizes that of existing work in which an online task is assigned to one
agent under (1).
In this paper, we propose an online algorithm that is $1/2$-competitive for
the above setting, which is tight. Moreover, when each agent can reject
assigned tasks at most $\Delta$ times, the algorithm is shown to have the
competitive ratio $\Delta/(3\Delta-1)\geq 1/3$. We also evaluate our proposed
algorithm with numerical experiments.
|
[
{
"created": "Tue, 15 Mar 2022 02:48:13 GMT",
"version": "v1"
}
] |
2022-03-16
|
[
[
"Sumita",
"Hanna",
""
],
[
"Ito",
"Shinji",
""
],
[
"Takemura",
"Kei",
""
],
[
"Hatano",
"Daisuke",
""
],
[
"Fukunaga",
"Takuro",
""
],
[
"Kakimura",
"Naonori",
""
],
[
"Kawarabayashi",
"Ken-ichi",
""
]
] |
We study the online task assignment problem with reusable resources, motivated by practical applications such as ridesharing, crowdsourcing and job hiring. In the problem, we are given a set of offline vertices (agents), and, at each time, an online vertex (task) arrives randomly according to a known time-dependent distribution. Upon arrival, we assign the task to agents immediately and irrevocably. The goal of the problem is to maximize the expected total profit produced by completed tasks. The key features of our problem are (1) an agent is reusable, i.e., an agent comes back to the market after completing the assigned task, (2) an agent may reject the assigned task to stay in the market, and (3) a task may accommodate multiple agents. The setting generalizes that of existing work in which an online task is assigned to one agent under (1). In this paper, we propose an online algorithm that is $1/2$-competitive for the above setting, which is tight. Moreover, when each agent can reject assigned tasks at most $\Delta$ times, the algorithm is shown to have the competitive ratio $\Delta/(3\Delta-1)\geq 1/3$. We also evaluate our proposed algorithm with numerical experiments.
|
2106.08671
|
Waddah Saeed
|
Waddah Saeed
|
Comparison of Automated Machine Learning Tools for SMS Spam Message
Filtering
|
10 pages, 3 figures
| null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Short Message Service (SMS) is a very popular service used for communication
by mobile users. However, this popular service can be abused to conduct
illegal activities, posing security risks. Nowadays, many automated
machine learning (AutoML) tools exist which can help domain experts and lay
users to build high-quality ML models with little or no machine learning
knowledge. In this work, a classification performance comparison was conducted
between three automatic ML tools for SMS spam message filtering. These tools
are mljar-supervised AutoML, H2O AutoML, and Tree-based Pipeline Optimization
Tool (TPOT) AutoML. Experimental results showed that ensemble models achieved
the best classification performance. The Stacked Ensemble model, which was
built using H2O AutoML, achieved the best performance in terms of Log Loss
(0.8370), true positive (1088/1116), and true negative (281/287) metrics. There
is a 19.05\% improvement in Log Loss with respect to TPOT AutoML and 5.56\%
improvement with respect to mljar-supervised AutoML. The satisfactory filtering
performance achieved with AutoML tools suggests a potential application of
AutoML tools to automatically determine the best-performing ML model
for SMS spam message filtering.
|
[
{
"created": "Wed, 16 Jun 2021 10:16:07 GMT",
"version": "v1"
},
{
"created": "Mon, 28 Jun 2021 11:37:35 GMT",
"version": "v2"
}
] |
2021-06-29
|
[
[
"Saeed",
"Waddah",
""
]
] |
Short Message Service (SMS) is a very popular service used for communication by mobile users. However, this popular service can be abused to conduct illegal activities, posing security risks. Nowadays, many automated machine learning (AutoML) tools exist which can help domain experts and lay users to build high-quality ML models with little or no machine learning knowledge. In this work, a classification performance comparison was conducted between three AutoML tools for SMS spam message filtering. These tools are mljar-supervised AutoML, H2O AutoML, and Tree-based Pipeline Optimization Tool (TPOT) AutoML. Experimental results showed that ensemble models achieved the best classification performance. The Stacked Ensemble model, which was built using H2O AutoML, achieved the best performance in terms of Log Loss (0.8370), true positive (1088/1116), and true negative (281/287) metrics. There is a 19.05\% improvement in Log Loss with respect to TPOT AutoML and a 5.56\% improvement with respect to mljar-supervised AutoML. The satisfactory filtering performance achieved with AutoML tools suggests a potential application of AutoML tools to automatically determine the best-performing ML model for SMS spam message filtering.
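For reference, the binary log loss metric used above to compare the models can be sketched as follows. The labels and predicted probabilities are made-up, not outputs of the AutoML tools:

```python
import numpy as np

def log_loss(y_true, p_pred, eps=1e-15):
    """Binary cross-entropy (log loss) averaged over samples.
    Probabilities are clipped away from 0 and 1 to avoid log(0)."""
    p = np.clip(np.asarray(p_pred, dtype=float), eps, 1 - eps)
    y = np.asarray(y_true, dtype=float)
    return float(-np.mean(y * np.log(p) + (1 - y) * np.log(1 - p)))

# Confident, correct predictions give a small loss; lower is better.
print(log_loss([1, 0, 1], [0.9, 0.1, 0.8]))
```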
|
1709.00551
|
Yong Xu Dr
|
Yong Xu, Qiuqiang Kong, Wenwu Wang, Mark D. Plumbley
|
Surrey-cvssp system for DCASE2017 challenge task4
|
DCASE2017 challenge ranked 1st system, task4, tech report
| null | null | null |
cs.SD
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this technical report, we present several methods for task 4 of the
Detection and Classification of Acoustic Scenes and Events 2017 (DCASE2017)
challenge. This task evaluates systems for the large-scale detection of sound
events using weakly labeled training data. The data are YouTube video excerpts
focusing on transportation and warnings due to their industry applications.
There are two subtasks, audio tagging and sound event detection from weakly
labeled data. A convolutional neural network (CNN) and a gated recurrent unit
(GRU) based recurrent neural network (RNN) are adopted as our basic framework.
We propose a learnable gating activation function for selecting informative
local features. An attention-based scheme is used for localizing specific
events in a weakly-supervised mode. A new batch-level balancing strategy is
also proposed to tackle the data imbalance problem. Fusion of posteriors from
different systems is found effective in improving performance. In summary, we
obtain a 61% F-value for the audio tagging subtask and a 0.73 error rate (ER)
for the sound event detection subtask on the development set, while the
official multilayer perceptron (MLP) based baseline obtained only a 13.1%
F-value for audio tagging and a 1.02 ER for sound event detection.
|
[
{
"created": "Sat, 2 Sep 2017 09:40:06 GMT",
"version": "v1"
},
{
"created": "Sat, 25 Nov 2017 20:21:32 GMT",
"version": "v2"
}
] |
2017-11-28
|
[
[
"Xu",
"Yong",
""
],
[
"Kong",
"Qiuqiang",
""
],
[
"Wang",
"Wenwu",
""
],
[
"Plumbley",
"Mark D.",
""
]
] |
In this technical report, we present several methods for task 4 of the Detection and Classification of Acoustic Scenes and Events 2017 (DCASE2017) challenge. This task evaluates systems for the large-scale detection of sound events using weakly labeled training data. The data are YouTube video excerpts focusing on transportation and warnings due to their industry applications. There are two subtasks, audio tagging and sound event detection from weakly labeled data. A convolutional neural network (CNN) and a gated recurrent unit (GRU) based recurrent neural network (RNN) are adopted as our basic framework. We propose a learnable gating activation function for selecting informative local features. An attention-based scheme is used for localizing specific events in a weakly-supervised mode. A new batch-level balancing strategy is also proposed to tackle the data imbalance problem. Fusion of posteriors from different systems is found effective in improving performance. In summary, we obtain a 61% F-value for the audio tagging subtask and a 0.73 error rate (ER) for the sound event detection subtask on the development set, while the official multilayer perceptron (MLP) based baseline obtained only a 13.1% F-value for audio tagging and a 1.02 ER for sound event detection.
|
1404.3165
|
Gozde Ozcan
|
Gozde Ozcan, M. Cenk Gursoy
|
Energy-Efficient Power Adaptation for Cognitive Radio Systems under
Imperfect Channel Sensing
|
To Appear at 2014 IEEE INFOCOM Workshop on Green Cognitive
Communications and Computing Networks. Some typos are fixed
| null |
10.1109/INFCOMW.2014.6849317
| null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, energy efficient power adaptation is considered in
sensing-based spectrum sharing cognitive radio systems in which secondary users
first perform channel sensing and then initiate data transmission with two
power levels based on the sensing decisions (e.g., idle or busy). It is assumed
that spectrum sensing is performed by the cognitive secondary users, albeit
with possible errors. In this setting, the optimization problem of maximizing
the energy efficiency (EE) subject to peak/average transmission power
constraints and average interference constraints is considered. The circuit
power is taken into account for total power consumption. By exploiting the
quasiconcave property of the EE maximization problem, the original problem is
transformed into an equivalent parameterized concave problem and Dinkelbach's
method-based iterative power adaptation algorithm is proposed. The impact of
sensing performance, peak/average transmit power constraints and average
interference constraint on the energy efficiency of cognitive radio systems is
analyzed.
|
[
{
"created": "Fri, 11 Apr 2014 17:42:00 GMT",
"version": "v1"
},
{
"created": "Mon, 28 Apr 2014 05:22:43 GMT",
"version": "v2"
}
] |
2016-11-17
|
[
[
"Ozcan",
"Gozde",
""
],
[
"Gursoy",
"M. Cenk",
""
]
] |
In this paper, energy efficient power adaptation is considered in sensing-based spectrum sharing cognitive radio systems in which secondary users first perform channel sensing and then initiate data transmission with two power levels based on the sensing decisions (e.g., idle or busy). It is assumed that spectrum sensing is performed by the cognitive secondary users, albeit with possible errors. In this setting, the optimization problem of maximizing the energy efficiency (EE) subject to peak/average transmission power constraints and average interference constraints is considered. The circuit power is taken into account for total power consumption. By exploiting the quasiconcave property of the EE maximization problem, the original problem is transformed into an equivalent parameterized concave problem and Dinkelbach's method-based iterative power adaptation algorithm is proposed. The impact of sensing performance, peak/average transmit power constraints and average interference constraint on the energy efficiency of cognitive radio systems is analyzed.
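The Dinkelbach-style iteration mentioned above (solving a sequence of parameterized concave problems to maximize a ratio) can be sketched as follows. The rate function, circuit power, and grid search below are illustrative placeholders, not the paper's actual sensing-based formulation:

```python
import numpy as np

def dinkelbach(f, g, p_grid, tol=1e-9, max_iter=100):
    """Maximize f(p)/g(p) over a grid via Dinkelbach's parametric method:
    repeatedly solve p_k = argmax_p f(p) - lam * g(p), then update
    lam = f(p_k)/g(p_k) until the ratio stops changing."""
    lam = 0.0
    p_star = p_grid[0]
    for _ in range(max_iter):
        p_star = p_grid[np.argmax(f(p_grid) - lam * g(p_grid))]
        new_lam = f(p_star) / g(p_star)
        if abs(new_lam - lam) < tol:
            break
        lam = new_lam
    return p_star, lam

# Toy energy-efficiency objective (hypothetical): achievable rate log2(1+p)
# divided by total consumed power p + P_circuit, with P_circuit = 1.
p_grid = np.linspace(1e-6, 10.0, 100001)
rate = lambda p: np.log2(1.0 + p)
power = lambda p: p + 1.0
p_star, ee = dinkelbach(rate, power, p_grid)
# For this toy objective, log2(1+p)/(1+p) is maximized at 1 + p = e.
```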
|
2407.11271
|
Charlotte Shahlaei
|
Charlotte A. Shahlaei and Nicholas Berente
|
An Analysis of European Data and AI Regulations for Automotive
Organizations
| null | null | null | null |
cs.CY
|
http://creativecommons.org/licenses/by/4.0/
|
This report summarizes the European Union's series of data and AI regulations
and analyzes them for managers in automotive vehicle manufacturing
organizations. In particular, we highlight the relevant ideas of the
regulations, including how they find their roots in earlier legislation, how
they contradict and complement each other, as well as the business
opportunities that these regulations offer. The structure of the report is as
follows. First, we address the GDPR as the cornerstone against which the
requirements of other regulations are weighed and legislated. Second, we
explain the EU Data Act since it directly addresses Internet of Things (IoT)
for businesses in the private sector and imposes strict requirements on large
data generators such as vehicle manufacturers. For manufacturers, compliance
with the EU Data Act is a prerequisite for the subsequent legislation, in
particular the EU AI Act. Third, we explain the Data Governance Act, Digital
Services Act, Digital Markets Act, and EU AI Act in chronological order.
Overall, we characterize European Union data regulations as a wave set, rooted
in historical precedent, with important implications for the automotive
industry.
|
[
{
"created": "Mon, 15 Jul 2024 22:38:37 GMT",
"version": "v1"
},
{
"created": "Thu, 18 Jul 2024 14:38:13 GMT",
"version": "v2"
},
{
"created": "Fri, 19 Jul 2024 02:59:18 GMT",
"version": "v3"
}
] |
2024-07-22
|
[
[
"Shahlaei",
"Charlotte A.",
""
],
[
"Berente",
"Nicholas",
""
]
] |
This report summarizes the European Union's series of data and AI regulations and analyzes them for managers in automotive vehicle manufacturing organizations. In particular, we highlight the relevant ideas of the regulations, including how they find their roots in earlier legislation, how they contradict and complement each other, as well as the business opportunities that these regulations offer. The structure of the report is as follows. First, we address the GDPR as the cornerstone against which the requirements of other regulations are weighed and legislated. Second, we explain the EU Data Act since it directly addresses Internet of Things (IoT) for businesses in the private sector and imposes strict requirements on large data generators such as vehicle manufacturers. For manufacturers, compliance with the EU Data Act is a prerequisite for the subsequent legislation, in particular the EU AI Act. Third, we explain the Data Governance Act, Digital Services Act, Digital Markets Act, and EU AI Act in chronological order. Overall, we characterize European Union data regulations as a wave set, rooted in historical precedent, with important implications for the automotive industry.
|
1812.06705
|
Wu Xing
|
Xing Wu, Shangwen Lv, Liangjun Zang, Jizhong Han, Songlin Hu
|
Conditional BERT Contextual Augmentation
|
9 pages, 1 figure
| null | null | null |
cs.CL cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a novel data augmentation method for labeled sentences called
conditional BERT contextual augmentation. Data augmentation methods are often
applied to prevent overfitting and improve generalization of deep neural
network models. Recently proposed contextual augmentation augments labeled
sentences by randomly replacing words with more varied substitutions predicted
by a language model. BERT demonstrates that a deep bidirectional language model
is more powerful than either a unidirectional language model or the shallow
concatenation of a forward and backward model. We retrofit BERT to conditional
BERT by introducing a new conditional masked language model\footnote{The term
"conditional masked language model" appeared once in original BERT paper, which
indicates context-conditional, is equivalent to term "masked language model".
In our paper, "conditional masked language model" indicates we apply extra
label-conditional constraint to the "masked language model".} task. The
well-trained conditional BERT can be applied to enhance contextual augmentation.
Experiments on six different text classification tasks show that our
method can be easily applied to both convolutional and recurrent neural network
classifiers to obtain clear improvements.
|
[
{
"created": "Mon, 17 Dec 2018 11:26:42 GMT",
"version": "v1"
}
] |
2018-12-18
|
[
[
"Wu",
"Xing",
""
],
[
"Lv",
"Shangwen",
""
],
[
"Zang",
"Liangjun",
""
],
[
"Han",
"Jizhong",
""
],
[
"Hu",
"Songlin",
""
]
] |
We propose a novel data augmentation method for labeled sentences called conditional BERT contextual augmentation. Data augmentation methods are often applied to prevent overfitting and improve the generalization of deep neural network models. Recently proposed contextual augmentation augments labeled sentences by randomly replacing words with more varied substitutions predicted by a language model. BERT demonstrates that a deep bidirectional language model is more powerful than either a unidirectional language model or the shallow concatenation of a forward and backward model. We retrofit BERT to conditional BERT by introducing a new conditional masked language model\footnote{The term "conditional masked language model" appeared once in the original BERT paper, where it indicates context-conditional and is equivalent to the term "masked language model". In our paper, "conditional masked language model" indicates that we apply an extra label-conditional constraint to the "masked language model".} task. The well-trained conditional BERT can be applied to enhance contextual augmentation. Experiments on six different text classification tasks show that our method can be easily applied to both convolutional and recurrent neural network classifiers to obtain clear improvements.
|
2005.11735
|
Marcin Plata
|
Marcin Plata and Piotr Syga
|
Robust Spatial-spread Deep Neural Image Watermarking
|
The article was accepted on TrustCom 2020: The 19th IEEE
International Conference on Trust, Security and Privacy in Computing and
Communications
| null |
10.1109/TrustCom50675.2020.00022
| null |
cs.MM cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Watermarking is the operation of embedding information into an image in a way
that allows identifying ownership of the image despite distortions applied to
it. In this paper, we present a novel end-to-end solution for embedding and
recovering the watermark in a digital image using convolutional neural
networks. The method is based on spreading the message over the spatial
domain of the image, hence reducing the "local bits per pixel" capacity. To
obtain the model we used adversarial training and applied noise layers between
the encoder and the decoder. Moreover, we broadened the spectrum of typically
considered attacks on the watermark and, by grouping the attacks according to
their scope, achieved high general robustness, most notably against JPEG
compression, Gaussian blurring, subsampling and resizing. To aid model
training, we also propose a precise differentiable approximation of JPEG.
|
[
{
"created": "Sun, 24 May 2020 12:51:25 GMT",
"version": "v1"
},
{
"created": "Wed, 4 Nov 2020 13:14:42 GMT",
"version": "v2"
}
] |
2022-01-11
|
[
[
"Plata",
"Marcin",
""
],
[
"Syga",
"Piotr",
""
]
] |
Watermarking is an operation of embedding information into an image in a way that allows one to identify ownership of the image despite distortions applied to it. In this paper, we present a novel end-to-end solution for embedding and recovering the watermark in a digital image using convolutional neural networks. The method is based on spreading the message over the spatial domain of the image, hence reducing the "local bits per pixel" capacity. To obtain the model, we used adversarial training and applied noiser layers between the encoder and the decoder. Moreover, we broadened the spectrum of typically considered attacks on the watermark and, by grouping the attacks according to their scope, achieved high general robustness, most notably against JPEG compression, Gaussian blurring, subsampling, and resizing. To aid model training, we also proposed a precise differentiable approximation of JPEG.
|
1212.3873
|
EPTCS
|
Hua Mao (AAU), Yingke Chen (AAU), Manfred Jaeger (AAU), Thomas D.
Nielsen (AAU), Kim G. Larsen (AAU), Brian Nielsen (AAU)
|
Learning Markov Decision Processes for Model Checking
|
In Proceedings QFM 2012, arXiv:1212.3454
|
EPTCS 103, 2012, pp. 49-63
|
10.4204/EPTCS.103.6
| null |
cs.LG cs.LO cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Constructing an accurate system model for formal model verification can be
both resource demanding and time-consuming. To alleviate this shortcoming,
algorithms have been proposed for automatically learning system models based on
observed system behaviors. In this paper we extend the algorithm on learning
probabilistic automata to reactive systems, where the observed system behavior
is in the form of alternating sequences of inputs and outputs. We propose an
algorithm for automatically learning a deterministic labeled Markov decision
process model from the observed behavior of a reactive system. The proposed
learning algorithm is adapted from algorithms for learning deterministic
probabilistic finite automata, and extended to include both probabilistic and
nondeterministic transitions. The algorithm is empirically analyzed and
evaluated by learning system models of slot machines. The evaluation is
performed by analyzing the probabilistic linear temporal logic properties of
the system as well as by analyzing the schedulers, in particular the optimal
schedulers, induced by the learned models.
|
[
{
"created": "Mon, 17 Dec 2012 03:40:47 GMT",
"version": "v1"
}
] |
2012-12-18
|
[
[
"Mao",
"Hua",
"",
"AAU"
],
[
"Chen",
"Yingke",
"",
"AAU"
],
[
"Jaeger",
"Manfred",
"",
"AAU"
],
[
"Nielsen",
"Thomas D.",
"",
"AAU"
],
[
"Larsen",
"Kim G.",
"",
"AAU"
],
[
"Nielsen",
"Brian",
"",
"AAU"
]
] |
Constructing an accurate system model for formal model verification can be both resource demanding and time-consuming. To alleviate this shortcoming, algorithms have been proposed for automatically learning system models based on observed system behaviors. In this paper we extend the algorithm on learning probabilistic automata to reactive systems, where the observed system behavior is in the form of alternating sequences of inputs and outputs. We propose an algorithm for automatically learning a deterministic labeled Markov decision process model from the observed behavior of a reactive system. The proposed learning algorithm is adapted from algorithms for learning deterministic probabilistic finite automata, and extended to include both probabilistic and nondeterministic transitions. The algorithm is empirically analyzed and evaluated by learning system models of slot machines. The evaluation is performed by analyzing the probabilistic linear temporal logic properties of the system as well as by analyzing the schedulers, in particular the optimal schedulers, induced by the learned models.
|
2312.05092
|
Romain Robbes
|
Anjan Karmakar, Romain Robbes
|
INSPECT: Intrinsic and Systematic Probing Evaluation for Code
Transformers
|
Accepted to IEEE Transactions on Software Engineering. Extension of
our previous paper "What do pre-trained code models know about code?" (ASE
2021, arXiv:2108.11308). 21 pages
| null | null | null |
cs.SE cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Pre-trained models of source code have recently been successfully applied to
a wide variety of Software Engineering tasks; they have also seen some
practical adoption in practice, e.g. for code completion. Yet, we still know
very little about what these pre-trained models learn about source code. In
this article, we use probing--simple diagnostic tasks that do not further train
the models--to discover to what extent pre-trained models learn about specific
aspects of source code. We use an extensible framework to define 15 probing
tasks that exercise surface, syntactic, structural and semantic characteristics
of source code. We probe 8 pre-trained source code models, as well as a natural
language model (BERT) as our baseline. We find that models that incorporate
some structural information (such as GraphCodeBERT) have a better
representation of source code characteristics. Surprisingly, we find that for
some probing tasks, BERT is competitive with the source code models, indicating
that there are ample opportunities to improve source-code specific pre-training
on the respective code characteristics. We encourage other researchers to
evaluate their models with our probing task suite, so that they may peer into
the hidden layers of the models and identify what intrinsic code
characteristics are encoded.
|
[
{
"created": "Fri, 8 Dec 2023 15:21:54 GMT",
"version": "v1"
}
] |
2023-12-11
|
[
[
"Karmakar",
"Anjan",
""
],
[
"Robbes",
"Romain",
""
]
] |
Pre-trained models of source code have recently been successfully applied to a wide variety of Software Engineering tasks; they have also seen some practical adoption in practice, e.g. for code completion. Yet, we still know very little about what these pre-trained models learn about source code. In this article, we use probing--simple diagnostic tasks that do not further train the models--to discover to what extent pre-trained models learn about specific aspects of source code. We use an extensible framework to define 15 probing tasks that exercise surface, syntactic, structural and semantic characteristics of source code. We probe 8 pre-trained source code models, as well as a natural language model (BERT) as our baseline. We find that models that incorporate some structural information (such as GraphCodeBERT) have a better representation of source code characteristics. Surprisingly, we find that for some probing tasks, BERT is competitive with the source code models, indicating that there are ample opportunities to improve source-code specific pre-training on the respective code characteristics. We encourage other researchers to evaluate their models with our probing task suite, so that they may peer into the hidden layers of the models and identify what intrinsic code characteristics are encoded.
|
cs/0109096
|
Tony Christensen
|
Tony Christensen, Peter McCormick
|
CyberCampaigns and Canadian Politics: Still Waiting?
|
29th TPRC Conference, 2001
| null | null |
TPRC-2001-058
|
cs.CY
| null |
The early election call in the fall of 2000 provided the perfect opportunity
to study the impact the Internet has had on election campaigning in Canada.
With the explosion of use the Net has seen since the 1997 general election,
Canadian federal parties stood at the threshold of a new age in election
campaigning. Pundits such as Rheingold (1993) have argued that the Internet
will provide citizens with a way to bypass traditional media and gain
unmediated access to each parties political message as well as providing a
forum for citizens to engage the parties, and each other in deliberative
debate.
Through a longitudinal analysis of party web pages and telephone interviews
with party staffers, we analyze the role the Internet played in the election
campaigns of Canada's federal parties. Our findings indicate that the parties
are still focusing on providing online features that talk at the voter instead
of engaging them in any type of meaningful discourse. Most of these sites were
exceptionally similar in their structure and in the type of content they
provided. Generally, these sites served as digital archives for campaign
material created with other media in mind and despite the multimedia
capabilities of the Internet, these sites tended to be overwhelmingly text
oriented. In line with Stromer-Galley's (2000) discussion of why candidates in
the U.S. avoid online interaction, we also argue that little incentive exists
to motivate parties to engage in any meaningful interaction with voters online.
|
[
{
"created": "Mon, 24 Sep 2001 22:42:52 GMT",
"version": "v1"
}
] |
2007-05-23
|
[
[
"Christensen",
"Tony",
""
],
[
"McCormick",
"Peter",
""
]
] |
The early election call in the fall of 2000 provided the perfect opportunity to study the impact the Internet has had on election campaigning in Canada. With the explosion of use the Net has seen since the 1997 general election, Canadian federal parties stood at the threshold of a new age in election campaigning. Pundits such as Rheingold (1993) have argued that the Internet will provide citizens with a way to bypass traditional media and gain unmediated access to each party's political message, as well as providing a forum for citizens to engage the parties, and each other, in deliberative debate. Through a longitudinal analysis of party web pages and telephone interviews with party staffers, we analyze the role the Internet played in the election campaigns of Canada's federal parties. Our findings indicate that the parties are still focusing on providing online features that talk at the voter instead of engaging them in any type of meaningful discourse. Most of these sites were exceptionally similar in their structure and in the type of content they provided. Generally, these sites served as digital archives for campaign material created with other media in mind, and despite the multimedia capabilities of the Internet, these sites tended to be overwhelmingly text oriented. In line with Stromer-Galley's (2000) discussion of why candidates in the U.S. avoid online interaction, we also argue that little incentive exists to motivate parties to engage in any meaningful interaction with voters online.
|
1707.09411
|
Ding Zhao
|
Ding Zhao, Huei Peng, Kazutoshi Nobukawa, Shan Bao, David J LeBlanc,
Christopher S Pan
|
Analysis of mandatory and discretionary lane change behaviors for heavy
trucks
|
Published in the 12th International Symposium on Advanced Vehicle
Control, AVEC'14
| null | null | null |
cs.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The behaviors of heavy vehicles drivers in mandatory and discretionary lane
changes are analyzed in this paper. 640 mandatory and 2,035 discretionary lane
change events were extracted from a naturalistic driving database. Variations
in gap acceptance and lane change duration were investigated. Statistical
analysis showed that mandatory lane changes are more aggressive in gap
acceptance and lane change execution than discretionary lane changes. The
results can be used for microscopic simulations, and design and evaluation of
driver-assistant systems.
|
[
{
"created": "Fri, 28 Jul 2017 20:45:02 GMT",
"version": "v1"
}
] |
2017-08-01
|
[
[
"Zhao",
"Ding",
""
],
[
"Peng",
"Huei",
""
],
[
"Nobukawa",
"Kazutoshi",
""
],
[
"Bao",
"Shan",
""
],
[
"LeBlanc",
"David J",
""
],
[
"Pan",
"Christopher S",
""
]
] |
The behaviors of heavy-vehicle drivers in mandatory and discretionary lane changes are analyzed in this paper. 640 mandatory and 2,035 discretionary lane change events were extracted from a naturalistic driving database. Variations in gap acceptance and lane change duration were investigated. Statistical analysis showed that mandatory lane changes are more aggressive in gap acceptance and lane change execution than discretionary lane changes. The results can be used for microscopic simulations, and the design and evaluation of driver-assistance systems.
|
2005.12517
|
Makoto Naruse
|
Daijiro Koyama, Yunzhuo Wang, Nobuyasu Shiga, Satoshi Yasuda, Nicolas
Chauvet, Makoto Naruse
|
Information transfer based on precision time synchronization via
wireless interferometry
| null | null | null | null |
cs.NI nlin.AO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The growing demand of high-bandwidth and low-latency information transfer in
information and communication technologies such as data centres and in-vehicle
networks has increased the importance of optical communication networks in
recent years. However, complicated arbitration schemes can impose significant
overheads in data transfer, which may inhibit the full exploitation of the
potential of optical interconnects. Herein, we propose an arbitration protocol
based on precision time synchronization via wireless two-way interferometry
(Wi-Wi), and numerically validate its efficiency including the ability to
impose a strict upper bound on the latency of data transfer. Compared with the
conventional carrier sense multiple access/collision detection (CSMA/CD)-based
approach, a significant improvement in the data transfer was observed
especially in the cases with high traffic flow rate. Furthermore, we conducted
a proof-of-principle experiment for Wi-Wi-based data transfer between two
electrically connected nodes and confirmed that the skew was less than 300 ns
and remained stable over time. Conversely, non-WiWi-based data transfer
exhibited huge and unstable skew. These results indicate that precision time
synchronization is a promising resource to significantly reduce the
communication overheads and ensure low latency for future networks and
real-time applications.
|
[
{
"created": "Tue, 26 May 2020 05:20:52 GMT",
"version": "v1"
}
] |
2020-05-27
|
[
[
"Koyama",
"Daijiro",
""
],
[
"Wang",
"Yunzhuo",
""
],
[
"Shiga",
"Nobuyasu",
""
],
[
"Yasuda",
"Satoshi",
""
],
[
"Chauvet",
"Nicolas",
""
],
[
"Naruse",
"Makoto",
""
]
] |
The growing demand of high-bandwidth and low-latency information transfer in information and communication technologies such as data centres and in-vehicle networks has increased the importance of optical communication networks in recent years. However, complicated arbitration schemes can impose significant overheads in data transfer, which may inhibit the full exploitation of the potential of optical interconnects. Herein, we propose an arbitration protocol based on precision time synchronization via wireless two-way interferometry (Wi-Wi), and numerically validate its efficiency including the ability to impose a strict upper bound on the latency of data transfer. Compared with the conventional carrier sense multiple access/collision detection (CSMA/CD)-based approach, a significant improvement in the data transfer was observed especially in the cases with high traffic flow rate. Furthermore, we conducted a proof-of-principle experiment for Wi-Wi-based data transfer between two electrically connected nodes and confirmed that the skew was less than 300 ns and remained stable over time. Conversely, non-WiWi-based data transfer exhibited huge and unstable skew. These results indicate that precision time synchronization is a promising resource to significantly reduce the communication overheads and ensure low latency for future networks and real-time applications.
|
2308.15684
|
Kanata Suzuki
|
Kazuki Hori, Kanata Suzuki, Tetsuya Ogata
|
Interactively Robot Action Planning with Uncertainty Analysis and Active
Questioning by Large Language Model
|
7 pages, 6 figures, accepted at SII 2024
| null | null | null |
cs.RO cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The application of the Large Language Model (LLM) to robot action planning
has been actively studied. The instructions given to the LLM by natural
language may include ambiguity and lack of information depending on the task
context. It is possible to adjust the output of LLM by making the instruction
input more detailed; however, the design cost is high. In this paper, we
propose the interactive robot action planning method that allows the LLM to
analyze and gather missing information by asking questions to humans. The
method can minimize the design cost of generating precise robot instructions.
We demonstrated the effectiveness of our method through concrete examples in
cooking tasks. However, our experiments also revealed challenges in robot
action planning with LLM, such as asking unimportant questions and assuming
crucial information without asking. Shedding light on these issues provides
valuable insights for future research on utilizing LLM for robotics.
|
[
{
"created": "Wed, 30 Aug 2023 00:54:44 GMT",
"version": "v1"
},
{
"created": "Wed, 18 Oct 2023 13:31:49 GMT",
"version": "v2"
}
] |
2023-10-19
|
[
[
"Hori",
"Kazuki",
""
],
[
"Suzuki",
"Kanata",
""
],
[
"Ogata",
"Tetsuya",
""
]
] |
The application of the Large Language Model (LLM) to robot action planning has been actively studied. The instructions given to the LLM in natural language may include ambiguity and lack information depending on the task context. It is possible to adjust the output of the LLM by making the instruction input more detailed; however, the design cost is high. In this paper, we propose an interactive robot action planning method that allows the LLM to analyze and gather missing information by asking questions to humans. The method can minimize the design cost of generating precise robot instructions. We demonstrated the effectiveness of our method through concrete examples in cooking tasks. However, our experiments also revealed challenges in robot action planning with LLMs, such as asking unimportant questions and assuming crucial information without asking. Shedding light on these issues provides valuable insights for future research on utilizing LLMs for robotics.
|
2206.15025
|
Hongchang Gao
|
Hongchang Gao, Bin Gu, My T. Thai
|
On the Convergence of Distributed Stochastic Bilevel Optimization
Algorithms over a Network
| null | null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Bilevel optimization has been applied to a wide variety of machine learning
models, and numerous stochastic bilevel optimization algorithms have been
developed in recent years. However, most existing algorithms restrict their
focus on the single-machine setting so that they are incapable of handling the
distributed data. To address this issue, under the setting where all
participants compose a network and perform peer-to-peer communication in this
network, we developed two novel decentralized stochastic bilevel optimization
algorithms based on the gradient tracking communication mechanism and two
different gradient estimators. Additionally, we established their convergence
rates for nonconvex-strongly-convex problems with novel theoretical analysis
strategies. To our knowledge, this is the first work achieving these
theoretical results. Finally, we applied our algorithms to practical machine
learning models, and the experimental results confirmed the efficacy of our
algorithms.
|
[
{
"created": "Thu, 30 Jun 2022 05:29:52 GMT",
"version": "v1"
},
{
"created": "Mon, 27 Mar 2023 16:09:27 GMT",
"version": "v2"
}
] |
2023-03-28
|
[
[
"Gao",
"Hongchang",
""
],
[
"Gu",
"Bin",
""
],
[
"Thai",
"My T.",
""
]
] |
Bilevel optimization has been applied to a wide variety of machine learning models, and numerous stochastic bilevel optimization algorithms have been developed in recent years. However, most existing algorithms restrict their focus on the single-machine setting so that they are incapable of handling the distributed data. To address this issue, under the setting where all participants compose a network and perform peer-to-peer communication in this network, we developed two novel decentralized stochastic bilevel optimization algorithms based on the gradient tracking communication mechanism and two different gradient estimators. Additionally, we established their convergence rates for nonconvex-strongly-convex problems with novel theoretical analysis strategies. To our knowledge, this is the first work achieving these theoretical results. Finally, we applied our algorithms to practical machine learning models, and the experimental results confirmed the efficacy of our algorithms.
|
2406.13922
|
Feng Ye
|
Feng Ye, Xiaohu You, Jiamin Li, Chuan Zhang, Chen Ji
|
Explicit Performance Bound of Finite Blocklength Coded MIMO: Time-Domain
versus Spatiotemporal Channel Coding
|
9 pages, 5 figures
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In the sixth generation (6G), ultra-reliable low-latency communications
(URLLC) will further develop to achieve TKu extreme connectivity, and
multiple-input multiple-output (MIMO) is expected to be a key enabler for its
realization. Since the latency constraint can be represented by the blocklength
of a codeword, it is essential to analyze different coded MIMO schemes under
finite blocklength regime. In this paper, we analyze the statistical
characteristics of information density of time-domain coding and spatiotemporal
coding MIMO, compute the channel capacity and dispersion, and present new
explicit performance bounds of finite blocklength coded MIMO for different
coding modes via normal approximation. As revealed by the analysis and
simulation, spatiotemporal coding can effectively mitigate the performance loss
induced by short blocklength by increasing the spatial degree of freedom (DoF).
However, for time-domain coding, each spatial link is encoded independently,
and the performance loss will be more severe with short blocklength under any
spatial DoF. These results indicate that spatiotemporal coding can optimally
exploit the spatial dimension advantages of MIMO systems compared with
time-domain coding, and it has the potential to support URLLC transmission,
enabling very low error-rate communication under stringent blocklength
constraint.
|
[
{
"created": "Thu, 20 Jun 2024 01:41:57 GMT",
"version": "v1"
}
] |
2024-06-21
|
[
[
"Ye",
"Feng",
""
],
[
"You",
"Xiaohu",
""
],
[
"Li",
"Jiamin",
""
],
[
"Zhang",
"Chuan",
""
],
[
"Ji",
"Chen",
""
]
] |
In the sixth generation (6G), ultra-reliable low-latency communications (URLLC) will further develop to achieve TKu extreme connectivity, and multiple-input multiple-output (MIMO) is expected to be a key enabler for its realization. Since the latency constraint can be represented by the blocklength of a codeword, it is essential to analyze different coded MIMO schemes under finite blocklength regime. In this paper, we analyze the statistical characteristics of information density of time-domain coding and spatiotemporal coding MIMO, compute the channel capacity and dispersion, and present new explicit performance bounds of finite blocklength coded MIMO for different coding modes via normal approximation. As revealed by the analysis and simulation, spatiotemporal coding can effectively mitigate the performance loss induced by short blocklength by increasing the spatial degree of freedom (DoF). However, for time-domain coding, each spatial link is encoded independently, and the performance loss will be more severe with short blocklength under any spatial DoF. These results indicate that spatiotemporal coding can optimally exploit the spatial dimension advantages of MIMO systems compared with time-domain coding, and it has the potential to support URLLC transmission, enabling very low error-rate communication under stringent blocklength constraint.
|