Dataset schema, one record per arXiv paper (⌀ marks a nullable field; lengths are min-max over the dataset):

id: string (9-10 chars)
submitter: string (1-64 chars, ⌀)
authors: string (4-20.7k chars)
title: string (4-246 chars)
comments: string (1-523 chars, ⌀)
journal-ref: string (4-404 chars, ⌀)
doi: string (11-153 chars, ⌀)
report-no: string (2-254 chars, ⌀)
categories: string (5-98 chars)
license: string (9 distinct values)
orig_abstract: string (14-3.35k chars)
versions: list (1-60 items)
update_date: string (10 chars)
authors_parsed: list (1-1.35k items)
abstract: string (11-3.34k chars)

id: 2403.09048
submitter: Lei Wang
authors: Lei Wang, Jieming Bian, Letian Zhang, Chen Chen, Jie Xu
title: Taming Cross-Domain Representation Variance in Federated Prototype Learning with Heterogeneous Data Domains
comments: 16 pages
journal-ref: null
doi: null
report-no: null
categories: cs.LG cs.CV
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract:
Federated learning (FL) allows collaborative machine learning training
without sharing private data. While most FL methods assume identical data
domains across clients, real-world scenarios often involve heterogeneous data
domains. Federated Prototype Learning (FedPL) addresses this issue, using mean
feature vectors as prototypes to enhance model generalization. However,
existing FedPL methods create the same number of prototypes for each client,
leading to cross-domain performance gaps and disparities for clients with
varied data distributions. To mitigate cross-domain feature representation
variance, we introduce FedPLVM, which establishes variance-aware dual-level
prototype clustering and employs a novel $\alpha$-sparsity prototype loss. The
dual-level prototype clustering strategy creates local clustered prototypes
based on private data features, then performs global prototype clustering to
reduce communication complexity and preserve local data privacy. The
$\alpha$-sparsity prototype loss aligns samples from underrepresented domains,
enhancing intra-class similarity and reducing inter-class similarity.
Evaluations on Digit-5, Office-10, and DomainNet datasets demonstrate our
method's superiority over existing approaches.
versions:
[
{
"created": "Thu, 14 Mar 2024 02:36:16 GMT",
"version": "v1"
}
]
update_date: 2024-03-15
authors_parsed:
[
[
"Wang",
"Lei",
""
],
[
"Bian",
"Jieming",
""
],
[
"Zhang",
"Letian",
""
],
[
"Chen",
"Chen",
""
],
[
"Xu",
"Jie",
""
]
]
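
The dual-level clustering idea above is easy to sketch. Below is a minimal illustration, assuming k-means at both levels and invented shapes; the paper's variance-aware choice of cluster numbers and the $\alpha$-sparsity loss are not reproduced here.

```python
import numpy as np
from sklearn.cluster import KMeans

def local_prototypes(feats_by_class, k_local=3, seed=0):
    """Level 1 (client side): cluster each class's private features into several
    local prototypes instead of a single mean, capturing feature variance."""
    protos = {}
    for c, X in feats_by_class.items():
        k = min(k_local, len(X))
        protos[c] = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(X).cluster_centers_
    return protos

def global_prototypes(client_protos, k_global=3, seed=0):
    """Level 2 (server side): cluster the pooled local prototypes per class,
    reducing communication and avoiding any exchange of raw features."""
    out = {}
    for c in {c for p in client_protos for c in p}:
        pooled = np.vstack([p[c] for p in client_protos if c in p])
        k = min(k_global, len(pooled))
        out[c] = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(pooled).cluster_centers_
    return out

rng = np.random.default_rng(0)
clients = [{0: rng.normal(i, 1, (40, 16))} for i in range(3)]  # 3 toy clients, 1 class
print(global_prototypes([local_prototypes(f) for f in clients])[0].shape)  # (3, 16)
```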

id: 1110.0791
submitter: Sergey I. Bozhevolnyi
authors: Sergey I. Bozhevolnyi
title: Rapid, Impartial and Comprehensive (RIC) publishing: A new concept for scientific journals
comments: 6 pages, 2 figures, 4 references
journal-ref: null
doi: null
report-no: null
categories: cs.DL physics.soc-ph
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract:
The publishing of scientific journals governed by editors relying on anonymous
peer review is slow (even one round of reviewing involves several
communications between authors, editor and reviewers), partial (arguments of
authors can hardly overrule those of reviewers) and wasteful of available
scientific material (even the most thorough and insightful reviews remain for
the eyes of authors and editors only). Here I propose a new concept for
scientific journals that ensures rapid, impartial and comprehensive (RIC)
publishing. The RIC concept is based on the implementation of two novel
publishing principles: the first (rapid) editorial screening of a submitted
manuscript should result either in its "rejection" or in its "acceptance with
optional revisions", and, in the latter case, the optionally revised (taking
into account open reviews) paper should be published along with all (positive
and negative) reviews, thereby presenting to the scientific community all
available scientific material on the topic in question.
versions:
[
{
"created": "Tue, 4 Oct 2011 18:51:53 GMT",
"version": "v1"
}
]
update_date: 2011-10-05
authors_parsed:
[
[
"Bozhevolnyi",
"Sergey I.",
""
]
]

id: 1608.08252
submitter: Fabrizio Maria Maggi
authors: Hoang Nguyen, Marlon Dumas, Marcello La Rosa, Fabrizio Maria Maggi, Suriadi Suriadi
title: Business Process Deviance Mining: Review and Evaluation
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.AI cs.DB
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract:
Business process deviance refers to the phenomenon whereby a subset of the
executions of a business process deviate, in a negative or positive way, with
respect to its expected or desirable outcomes. Deviant executions of a business
process include those that violate compliance rules, or executions that
undershoot or exceed performance targets. Deviance mining is concerned with
uncovering the reasons for deviant executions by analyzing business process
event logs. This article provides a systematic review and comparative
evaluation of deviance mining approaches based on a family of data mining
techniques known as sequence classification. Using real-life logs from multiple
domains, we evaluate a range of feature types and classification methods in
terms of their ability to accurately discriminate between normal and deviant
executions of a process. We also analyze the interestingness of the rule sets
extracted using different methods. We observe that feature sets extracted using
pattern mining techniques only slightly outperform simpler feature sets based
on counts of individual activity occurrences in a trace.
versions:
[
{
"created": "Mon, 29 Aug 2016 21:14:01 GMT",
"version": "v1"
}
]
update_date: 2016-08-31
authors_parsed:
[
[
"Nguyen",
"Hoang",
""
],
[
"Dumas",
"Marlon",
""
],
[
"La Rosa",
"Marcello",
""
],
[
"Maggi",
"Fabrizio Maria",
""
],
[
"Suriadi",
"Suriadi",
""
]
]
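
As a miniature version of the simplest feature type evaluated in this study, the sketch below builds activity-occurrence counts per trace and fits an off-the-shelf classifier; the event log is invented.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.tree import DecisionTreeClassifier

# Toy event log: each trace is a sequence of activity labels; label 1 = deviant.
traces = [["A", "B", "C"], ["A", "C", "C", "D"], ["A", "B", "D"], ["A", "C", "D", "D"]]
labels = [0, 1, 0, 1]

# Activity-count features: one column per activity, value = occurrences in the trace.
vec = CountVectorizer(analyzer=lambda trace: trace)
X = vec.fit_transform(traces)

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, labels)
print(dict(zip(vec.get_feature_names_out(), clf.feature_importances_)))
```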

id: 1709.06308
submitter: Tingting Qiao
authors: Tingting Qiao, Jianfeng Dong, Duanqing Xu
title: Exploring Human-like Attention Supervision in Visual Question Answering
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.CV
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract:
Attention mechanisms have been widely applied in the Visual Question
Answering (VQA) task, as they help the model focus on the areas of interest in
both the visual and textual inputs. To answer questions correctly, the model
needs to selectively attend to different areas of an image, which suggests that
an attention-based model may benefit from explicit attention supervision. In
this work, we aim to address the problem of adding attention supervision to VQA
models. Since human attention data are scarce, we first propose a Human
Attention Network (HAN) to generate human-like attention maps, trained on the
recently released Human ATtention Dataset (VQA-HAT). Then, we apply the
pre-trained HAN to the VQA v2.0 dataset to automatically produce human-like
attention maps for all image-question pairs. The resulting dataset of
human-like attention maps for VQA v2.0 is named the Human-Like ATtention (HLAT)
dataset. Finally, we apply human-like attention supervision to an
attention-based VQA model. The experiments show that adding human-like
supervision yields more accurate attention and better performance, suggesting
a promising future for human-like attention supervision in VQA.
versions:
[
{
"created": "Tue, 19 Sep 2017 09:19:08 GMT",
"version": "v1"
}
]
update_date: 2017-09-20
authors_parsed:
[
[
"Qiao",
"Tingting",
""
],
[
"Dong",
"Jianfeng",
""
],
[
"Xu",
"Duanqing",
""
]
]
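
One hedged sketch of how such supervision can be wired in: penalize the divergence between the model's attention distribution and the HLAT-style human-like map, added to the usual answer loss. The loss form (KL) and weighting are assumptions, not the paper's exact choice.

```python
import torch
import torch.nn.functional as F

def attention_supervision_loss(pred_att, human_att, eps=1e-8):
    """KL divergence between predicted and human-like attention over image regions.
    pred_att, human_att: (batch, n_regions) non-negative attention scores."""
    p = pred_att / (pred_att.sum(dim=1, keepdim=True) + eps)    # model attention
    q = human_att / (human_att.sum(dim=1, keepdim=True) + eps)  # human-like target
    return F.kl_div((p + eps).log(), q, reduction="batchmean")

# total objective: answer classification loss plus the weighted attention term
# loss = F.cross_entropy(logits, answers) + 0.5 * attention_supervision_loss(att, hlat)
```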

id: 2401.17948
submitter: Harvie Zhang
authors: Harvie Zhang
title: HyperZ$\cdot$Z$\cdot$W Operator Connects Slow-Fast Networks for Full Context Interaction
comments: 10 pages, 6 figures, 5 tables
journal-ref: null
doi: null
report-no: null
categories: cs.CV
license: http://creativecommons.org/licenses/by/4.0/
abstract:
The self-attention mechanism utilizes large implicit weight matrices,
programmed through dot product-based activations with very few trainable
parameters, to enable long sequence modeling. In this paper, we investigate the
possibility of discarding residual learning by employing large implicit kernels
to achieve full context interaction at each layer of the network. To accomplish
it, we introduce coordinate-based implicit MLPs as a slow network to generate
hyper-kernels for another fast convolutional network. To get context-varying
weights for fast dynamic encoding, we propose a
$\mathrm{Hyper}\mathcal{Z{\cdot}Z{\cdot}W}$ operator that connects
hyper-kernels ($\mathcal{W}$) and hidden activations ($\mathcal{Z}$) through
simple elementwise multiplication, followed by convolution of $\mathcal{Z}$
using the context-dependent $\mathcal{W}$. Based on this design, we present a
novel Terminator architecture that integrates hyper-kernels of different sizes
to produce multi-branch hidden representations for enhancing the feature
extraction capability of each layer. Additionally, a bottleneck layer is
employed to compress the concatenated channels, allowing only valuable
information to propagate to the subsequent layers. Notably, our model
incorporates several innovative components and exhibits excellent properties,
such as introducing local feedback error for updating the slow network, stable
zero-mean features, faster training convergence, and fewer model parameters.
Extensive experimental results on pixel-level 1D and 2D image classification
benchmarks demonstrate the superior performance of our architecture.
versions:
[
{
"created": "Wed, 31 Jan 2024 15:57:21 GMT",
"version": "v1"
}
]
update_date: 2024-02-01
authors_parsed:
[
[
"Zhang",
"Harvie",
""
]
]
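
The operator can only be guessed at from the abstract; the 1D sketch below is exactly that, a guess: a coordinate MLP (slow network) emits a kernel, which gates the activations elementwise and then convolves them depthwise. The real gating, normalization, and Terminator architecture are specified in the paper, not here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HyperZZW1d(nn.Module):
    """Slow coordinate MLP generates a hyper-kernel W; the fast path gates the
    hidden activations Z elementwise with W's channel summary, then convolves Z
    with W depthwise. A rough guess at the operator, for intuition only."""
    def __init__(self, channels, ksize):
        super().__init__()
        self.channels, self.ksize = channels, ksize
        self.slow = nn.Sequential(nn.Linear(1, 32), nn.GELU(), nn.Linear(32, channels))

    def forward(self, z):                                    # z: (batch, C, L)
        coords = torch.linspace(-1, 1, self.ksize).unsqueeze(-1)
        w = self.slow(coords).t()                            # (C, ksize) hyper-kernel
        z = z * w.mean(dim=1)[None, :, None]                 # elementwise Z.W gating
        return F.conv1d(z, w.unsqueeze(1),                   # depthwise convolution
                        padding=self.ksize // 2, groups=self.channels)

print(HyperZZW1d(8, 5)(torch.randn(2, 8, 16)).shape)         # torch.Size([2, 8, 16])
```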

id: 2207.13478
submitter: Jiaping Yu
authors: Jiaping Yu, Shang Gao, Rui Song, Zhiping Cai, Bin Xiao
title: Partial Selfish Mining for More Profits
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.CR cs.DC
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract:
Mining attacks aim to gain an unfair share of extra rewards in blockchain
mining. Selfish mining withholds discovered blocks and strategically releases
them, wasting honest miners' computing resources and earning higher profits.
Previous mining attacks either conceal mined blocks entirely (hiding or
discarding them), or release them completely in a particular time slot (e.g.,
causing a fork). In this paper, we extend the mining attack's strategy space to
partial block sharing, and propose a new and feasible Partial Selfish Mining
(PSM) attack. We show that by releasing partial block data publicly and
attracting rational miners to work on the attacker's private branch, attackers
and these attracted miners can gain an unfair share of mining rewards. We then
propose an Advanced PSM (A-PSM) attack that can further improve attackers'
profits to no less than those of selfish mining. Both theoretical and
experimental results show that PSM attackers can be more profitable than
selfish miners under a certain range of mining power and network conditions.
A-PSM attackers can gain even higher profits than both selfish mining and
honest mining with attracted rational miners.
versions:
[
{
"created": "Wed, 27 Jul 2022 11:58:38 GMT",
"version": "v1"
},
{
"created": "Sat, 6 Apr 2024 14:00:20 GMT",
"version": "v2"
}
]
update_date: 2024-04-09
authors_parsed:
[
[
"Yu",
"Jiaping",
""
],
[
"Gao",
"Shang",
""
],
[
"Song",
"Rui",
""
],
[
"Cai",
"Zhiping",
""
],
[
"Xiao",
"Bin",
""
]
]
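
For context on the baseline that PSM and A-PSM are measured against: classic selfish mining has a closed-form relative revenue (Eyal and Sirer, 2014), evaluated below. The PSM/A-PSM revenue expressions are the paper's contribution and are not reproduced here.

```python
def selfish_revenue(alpha, gamma=0.0):
    """Relative revenue of a classic selfish miner (Eyal & Sirer 2014).
    alpha: attacker's share of mining power; gamma: fraction of honest power
    that mines on the attacker's branch during a tie. Honest mining earns alpha."""
    num = alpha * (1 - alpha) ** 2 * (4 * alpha + gamma * (1 - 2 * alpha)) - alpha ** 3
    den = 1 - alpha * (1 + (2 - alpha) * alpha)
    return num / den

for a in (0.25, 1 / 3, 0.40, 0.45):
    print(f"alpha={a:.3f}  honest={a:.3f}  selfish={selfish_revenue(a):.3f}")
# With gamma = 0, selfish mining only pays above alpha = 1/3; partial block
# sharing aims to widen the profitable region by recruiting rational miners.
```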

id: 2312.01721
submitter: Moritz Lampert
authors: Moritz Lampert, Ingo Scholtes
title: The Self-Loop Paradox: Investigating the Impact of Self-Loops on Graph Neural Networks
comments: Presented at the Second Learning on Graphs Conference (LoG 2023) as extended abstract
journal-ref: null
doi: null
report-no: null
categories: cs.LG
license: http://creativecommons.org/licenses/by/4.0/
abstract:
Many Graph Neural Networks (GNNs) add self-loops to a graph to include
feature information about a node itself at each layer. However, if the GNN
consists of more than one layer, this information can return to its origin via
cycles in the graph topology. Intuition suggests that this "backflow" of
information should be larger in graphs with self-loops compared to graphs
without. In this work, we counter this intuition and show that for certain GNN
architectures, the information a node gains from itself can be smaller in
graphs with self-loops compared to the same graphs without. We adopt an
analytical approach for the study of statistical graph ensembles with a given
degree sequence and show that this phenomenon, which we call the self-loop
paradox, can depend both on the number of GNN layers $k$ and whether $k$ is
even or odd. We experimentally validate our theoretical findings in a synthetic
node classification task and investigate its practical relevance in 23
real-world graphs.
versions:
[
{
"created": "Mon, 4 Dec 2023 08:23:00 GMT",
"version": "v1"
}
]
update_date: 2023-12-05
authors_parsed:
[
[
"Lampert",
"Moritz",
""
],
[
"Scholtes",
"Ingo",
""
]
]
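
The quantity at stake can be probed numerically: the average diagonal of the k-th power of the row-normalized propagation matrix measures how much k-layer message passing returns to the originating node. A small empirical probe on one random graph (the paper's ensemble analysis is analytical; this is just illustration):

```python
import numpy as np
import networkx as nx

G = nx.erdos_renyi_graph(200, 0.05, seed=1)
A = nx.to_numpy_array(G)

def self_information(A, k, self_loops):
    """Mean k-step return probability under mean aggregation."""
    if self_loops:
        A = A + np.eye(len(A))
    P = A / np.maximum(A.sum(axis=1, keepdims=True), 1)   # row-stochastic
    return np.diag(np.linalg.matrix_power(P, k)).mean()

for k in (2, 3, 4):
    print(k, self_information(A, k, False), self_information(A, k, True))
# Depending on k's parity, the loop-free graph can retain *more* self-information.
```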

id: 2002.11903
submitter: Youngbin Park
authors: Taewon Kim, Yeseong Park, Youngbin Park and Il Hong Suh
title: Acceleration of Actor-Critic Deep Reinforcement Learning for Visual Grasping in Clutter by State Representation Learning Based on Disentanglement of a Raw Input Image
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.LG stat.ML
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract:
For a robotic grasping task in which diverse unseen target objects exist in a
cluttered environment, some deep learning-based methods have achieved
state-of-the-art results using visual input directly. In contrast, actor-critic
deep reinforcement learning (RL) methods typically perform very poorly when
grasping diverse objects, especially when learning from raw images and sparse
rewards. To make these RL techniques feasible for vision-based grasping tasks,
we employ state representation learning (SRL), where we encode essential
information first for subsequent use in RL. However, typical representation
learning procedures are unsuitable for extracting pertinent information for
learning the grasping skill, because the visual inputs for representation
learning, where a robot attempts to grasp a target object in clutter, are
extremely complex. We found that preprocessing based on the disentanglement of
a raw input image is the key to effectively capturing a compact representation.
This enables deep RL to learn robotic grasping skills from highly varied and
diverse visual inputs. We demonstrate the effectiveness of this approach with
varying levels of disentanglement in a realistic simulated environment.
versions:
[
{
"created": "Thu, 27 Feb 2020 03:58:51 GMT",
"version": "v1"
}
]
update_date: 2020-02-28
authors_parsed:
[
[
"Kim",
"Taewon",
""
],
[
"Park",
"Yeseong",
""
],
[
"Park",
"Youngbin",
""
],
[
"Suh",
"Il Hong",
""
]
]
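
The abstract does not pin down the disentanglement method; a beta-VAE is the standard tool for this kind of preprocessing, so here is a minimal sketch under that assumption (a toy flattened-image model, not the authors' network).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BetaVAE(nn.Module):
    """Toy beta-VAE: the kind of disentangling encoder that could supply a
    compact state representation to the downstream actor-critic policy."""
    def __init__(self, d_in=64 * 64, d_lat=10):
        super().__init__()
        self.enc = nn.Linear(d_in, 2 * d_lat)
        self.dec = nn.Linear(d_lat, d_in)

    def forward(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterization
        return self.dec(z), mu, logvar

def beta_vae_loss(x, model, beta=4.0):
    recon, mu, logvar = model(x)
    rec = F.mse_loss(recon, x, reduction="sum") / x.shape[0]
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp()) / x.shape[0]
    return rec + beta * kld   # beta > 1 pressures the code toward disentanglement

print(beta_vae_loss(torch.rand(8, 64 * 64), BetaVAE()))
```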

id: 1304.7095
submitter: Cong Ling
authors: Shuiyin Liu, Cong Ling, Xiaofu Wu
title: Proximity Factors of Lattice Reduction-Aided Precoding for Multiantenna Broadcast
comments: ISIT 2012
journal-ref: null
doi: null
report-no: null
categories: cs.IT math.IT
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract:
Lattice precoding is an effective strategy for multiantenna broadcast. In
this paper, we show that approximate lattice precoding in multiantenna
broadcast is a variant of the closest vector problem (CVP) known as $\eta$-CVP.
The proximity factors of lattice reduction-aided precoding are defined, and
their bounds are derived, which measure the worst-case loss in power efficiency
compared to sphere precoding. Unlike decoding applications, this analysis does
not suffer from the boundary effect of a finite constellation, since the
underlying lattice in multiantenna broadcast is indeed infinite.
versions:
[
{
"created": "Fri, 26 Apr 2013 08:47:37 GMT",
"version": "v1"
}
]
update_date: 2013-04-29
authors_parsed:
[
[
"Liu",
"Shuiyin",
""
],
[
"Ling",
"Cong",
""
],
[
"Wu",
"Xiaofu",
""
]
]

id: 2111.08165
submitter: Michael Fitzke
authors: Michael Fitzke, Conrad Stack, Andre Dourson, Rodrigo M. B. Santana, Diane Wilson, Lisa Ziemer, Arjun Soin, Matthew P. Lungren, Paul Fisher, Mark Parkinson
title: RapidRead: Global Deployment of State-of-the-art Radiology AI for a Large Veterinary Teleradiology Practice
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.LG cs.CV eess.IV
license: http://creativecommons.org/licenses/by-nc-nd/4.0/
abstract:
This work describes the development and real-world deployment of a deep
learning-based AI system for evaluating canine and feline radiographs across a
broad range of findings and abnormalities. We describe a new semi-supervised
learning approach that combines NLP-derived labels with self-supervised
training leveraging more than 2.5 million x-ray images. Finally, we describe
the clinical deployment of the model, including system architecture, real-time
performance evaluation, and data drift detection.
versions:
[
{
"created": "Tue, 9 Nov 2021 14:05:16 GMT",
"version": "v1"
}
]
update_date: 2021-11-17
authors_parsed:
[
[
"Fitzke",
"Michael",
""
],
[
"Stack",
"Conrad",
""
],
[
"Dourson",
"Andre",
""
],
[
"Santana",
"Rodrigo M. B.",
""
],
[
"Wilson",
"Diane",
""
],
[
"Ziemer",
"Lisa",
""
],
[
"Soin",
"Arjun",
""
],
[
"Lungren",
"Matthew P.",
""
],
[
"Fisher",
"Paul",
""
],
[
"Parkinson",
"Mark",
""
]
]

id: 2403.20045
submitter: Haoxiang Luo
authors: Tianqi Jiang, Haoxiang Luo, Kun Yang, Gang Sun, Hongfang Yu, Qi Huang, Athanasios V. Vasilakos
title: Blockchain for Energy Market: A Comprehensive Survey
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.NI
license: http://creativecommons.org/publicdomain/zero/1.0/
abstract:
The energy market encompasses the behavior of energy supply and trading
within a platform system. By utilizing centralized or distributed trading,
energy can be effectively managed and distributed across different regions,
thereby achieving market equilibrium and satisfying both producers and
consumers. However, recent years have presented unprecedented challenges and
difficulties for the development of the energy market. These challenges include
regional energy imbalances, volatile energy pricing, high computing costs, and
issues related to transaction information disclosure. Researchers widely
acknowledge that the security features of blockchain technology can enhance the
efficiency of energy transactions and establish the fundamental stability and
robustness of the energy market. This type of blockchain-enabled energy market
is commonly referred to as an energy blockchain. Currently, there is a
burgeoning amount of research in this field, encompassing algorithm design,
framework construction, and practical application. It is crucial to organize
and compare these research efforts to facilitate the further advancement of
energy blockchain. This survey aims to comprehensively review the fundamental
characteristics of blockchain and energy markets, highlighting the significant
advantages of combining the two. Moreover, based on existing research outcomes,
we will categorize and compare the current energy market research supported by
blockchain in terms of algorithm design, market framework construction, and the
policies and practical applications adopted by different countries. Finally, we
will address current issues and propose potential future directions for
improvement, to provide guidance for the practical implementation of blockchain
in the energy market.
versions:
[
{
"created": "Fri, 29 Mar 2024 08:29:32 GMT",
"version": "v1"
},
{
"created": "Fri, 5 Apr 2024 05:37:43 GMT",
"version": "v2"
}
]
update_date: 2024-04-08
authors_parsed:
[
[
"Jiang",
"Tianqi",
""
],
[
"Luo",
"Haoxiang",
""
],
[
"Yang",
"Kun",
""
],
[
"Sun",
"Gang",
""
],
[
"Yu",
"Hongfang",
""
],
[
"Huang",
"Qi",
""
],
[
"Vasilakos",
"Athanasios V.",
""
]
]

id: 1701.05581
submitter: Diptesh Kanojia
authors: Abhijit Mishra, Diptesh Kanojia, Seema Nagar, Kuntal Dey and Pushpak Bhattacharyya
title: Leveraging Cognitive Features for Sentiment Analysis
comments: The SIGNLL Conference on Computational Natural Language Learning (CoNLL 2016)
journal-ref: null
doi: null
report-no: null
categories: cs.CL
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract:
Sentiments expressed in user-generated short text and sentences are nuanced
by subtleties at lexical, syntactic, semantic and pragmatic levels. To address
this, we propose to augment traditional features used for sentiment analysis
and sarcasm detection with cognitive features derived from the eye-movement
patterns of readers. Statistical classification using our enhanced feature set
improves the performance (F-score) of polarity detection by a maximum of 3.7%
and 9.3% on two datasets, over the systems that use only traditional features.
We perform feature significance analysis, and experiment on a held-out dataset,
showing that cognitive features indeed empower sentiment analyzers to handle
complex constructs.
versions:
[
{
"created": "Thu, 19 Jan 2017 19:58:26 GMT",
"version": "v1"
}
]
update_date: 2017-01-23
authors_parsed:
[
[
"Mishra",
"Abhijit",
""
],
[
"Kanojia",
"Diptesh",
""
],
[
"Nagar",
"Seema",
""
],
[
"Dey",
"Kuntal",
""
],
[
"Bhattacharyya",
"Pushpak",
""
]
]

id: 2305.04658
submitter: Han Chen
authors: Han Chen, Ziwen Zhao, Yuhua Li, Yixiong Zou, Ruixuan Li, Rui Zhang
title: CSGCL: Community-Strength-Enhanced Graph Contrastive Learning
comments: Accepted to the 32nd International Joint Conference on Artificial Intelligence (IJCAI 2023)
journal-ref: null
doi: null
report-no: null
categories: cs.SI cs.AI cs.LG
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract:
Graph Contrastive Learning (GCL) is an effective way to learn generalized
graph representations in a self-supervised manner, and has grown rapidly in
recent years. However, the underlying community semantics has not been well
explored by most previous GCL methods. Research that attempts to leverage
communities in GCL regards them as having the same influence on the graph,
leading to extra representation errors. To tackle this issue, we define
"community strength" to measure the difference in influence among
communities. Under this premise, we propose a Community-Strength-enhanced Graph
Contrastive Learning (CSGCL) framework to preserve community strength
throughout the learning process. Firstly, we present two novel graph
augmentation methods, Communal Attribute Voting (CAV) and Communal Edge
Dropping (CED), where the perturbations of node attributes and edges are guided
by community strength. Secondly, we propose a dynamic "Team-up" contrastive
learning scheme, where community strength is used to progressively fine-tune
the contrastive objective. We report extensive experimental results on three
downstream tasks: node classification, node clustering, and link prediction.
CSGCL achieves state-of-the-art performance compared with other GCL methods,
validating that community strength brings effectiveness and generality to graph
representations. Our code is available at
https://github.com/HanChen-HUST/CSGCL.
versions:
[
{
"created": "Mon, 8 May 2023 12:21:24 GMT",
"version": "v1"
}
]
update_date: 2023-05-09
authors_parsed:
[
[
"Chen",
"Han",
""
],
[
"Zhao",
"Ziwen",
""
],
[
"Li",
"Yuhua",
""
],
[
"Zou",
"Yixiong",
""
],
[
"Li",
"Ruixuan",
""
],
[
"Zhang",
"Rui",
""
]
]
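
A rough sketch of what community-strength-guided augmentation could look like for CED, with community strength improvised here as the internal-edge fraction; the paper's exact definition and drop schedule may differ.

```python
import numpy as np
import networkx as nx
from networkx.algorithms import community

# Communal Edge Dropping (CED), roughly as the abstract describes it:
# edges inside "stronger" communities are perturbed less.
G = nx.karate_club_graph()
comms = community.greedy_modularity_communities(G)
node2comm = {n: i for i, c in enumerate(comms) for n in c}

def strength(c):
    inside = G.subgraph(c).number_of_edges()
    total = sum(G.degree(n) for n in c) / 2
    return inside / max(total, 1)       # improvised strength in [0, 1]

s = np.array([strength(c) for c in comms])
rng = np.random.default_rng(0)
kept = []
for u, v in G.edges():
    cu, cv = node2comm[u], node2comm[v]
    # intra-community edges in strong communities get a low drop probability
    p_drop = 0.4 * (1 - s[cu]) if cu == cv else 0.4
    if rng.random() > p_drop:
        kept.append((u, v))
aug = nx.Graph(kept)                    # one augmented view for contrastive learning
aug.add_nodes_from(G)
print(G.number_of_edges(), "->", aug.number_of_edges())
```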

id: 1910.01805
submitter: Alexander Jung
authors: Alexander Jung
title: On the Duality between Network Flows and Network Lasso
comments: networks, clustering, machine learning, optimization, duality, Lasso
journal-ref: null
doi: 10.1109/LSP.2020.2998400
report-no: null
categories: cs.LG math.OC stat.ML
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract:
Many applications generate data with an intrinsic network structure such as
time series data, image data or social network data. The network Lasso (nLasso)
has been proposed recently as a method for joint clustering and optimization of
machine learning models for networked data. The nLasso extends the Lasso from
sparse linear models to clustered graph signals. This paper explores the
duality of nLasso and network flow optimization. We show that, in a very
precise sense, nLasso is equivalent to a minimum-cost flow problem on the data
network structure. Our main technical result is a concise characterization of
nLasso solutions via the existence of certain network flows. The main conceptual
result is a useful link between nLasso methods and basic graph algorithms such
as clustering or maximum flow.
versions:
[
{
"created": "Fri, 4 Oct 2019 06:04:45 GMT",
"version": "v1"
},
{
"created": "Sat, 28 Mar 2020 07:16:30 GMT",
"version": "v2"
}
]
update_date: 2020-08-26
authors_parsed:
[
[
"Jung",
"Alexander",
""
]
]
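
The primal problem whose dual the paper studies is compact enough to show. The toy below solves a scalar nLasso instance by subgradient descent on a grid graph; the paper's contribution, the minimum-cost-flow characterization of the dual, is the theory behind such solvers, not this code.

```python
import numpy as np
import networkx as nx

# Toy network Lasso: one scalar x_i per node, noisy observations y_i at a few
# labeled nodes, and a TV-style penalty lam * sum_{(i,j) in E} |x_i - x_j|.
G = nx.convert_node_labels_to_integers(nx.grid_2d_graph(5, 5))
n = G.number_of_nodes()
rng = np.random.default_rng(0)
labeled = rng.choice(n, size=8, replace=False)
y = np.zeros(n)
y[labeled] = rng.normal(1.0, 0.1, size=8)

lam, x = 0.1, np.zeros(n)
for t in range(1, 2001):
    g = np.zeros(n)
    g[labeled] += x[labeled] - y[labeled]          # data-fit subgradient
    for i, j in G.edges():
        s = np.sign(x[i] - x[j])                   # TV subgradient
        g[i] += lam * s
        g[j] -= lam * s
    x -= g / np.sqrt(t)                            # diminishing step size
print(np.round(x.reshape(5, 5), 2))                # clustered graph signal
```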

id: 2306.05476
submitter: Yu-Jen Chen
authors: Yu-Jen Chen, Yiyu Shi, Tsung-Yi Ho
title: A Novel Confidence Induced Class Activation Mapping for MRI Brain Tumor Segmentation
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.CV
license: http://creativecommons.org/licenses/by/4.0/
abstract:
Magnetic resonance imaging (MRI) is a commonly used technique for brain tumor
segmentation, which is critical for evaluating patients and planning treatment.
To make the labeling process less laborious and dependent on expertise,
weakly-supervised semantic segmentation (WSSS) methods using class activation
mapping (CAM) have been proposed. However, current CAM-based WSSS methods
generate the object localization map using internal neural network information,
such as gradient or trainable parameters, which can lead to suboptimal
solutions. To address these issues, we propose the confidence-induced CAM
(Cfd-CAM), which calculates the weight of each feature map by using the
confidence of the target class. Our experiments on two brain tumor datasets
show that Cfd-CAM outperforms existing state-of-the-art methods under the same
level of supervision. Overall, our proposed Cfd-CAM approach improves the
accuracy of brain tumor segmentation and may provide valuable insights for
developing better WSSS methods for other medical imaging tasks.
versions:
[
{
"created": "Thu, 8 Jun 2023 18:01:08 GMT",
"version": "v1"
},
{
"created": "Mon, 26 Jun 2023 08:16:59 GMT",
"version": "v2"
},
{
"created": "Mon, 30 Oct 2023 06:45:01 GMT",
"version": "v3"
}
]
update_date: 2023-10-31
authors_parsed:
[
[
"Chen",
"Yu-Jen",
""
],
[
"Shi",
"Yiyu",
""
],
[
"Ho",
"Tsung-Yi",
""
]
]
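
The weighting idea resembles Score-CAM with confidence-based weights; the sketch below follows that reading, with `model` a stand-in callable returning class probabilities. The exact Cfd-CAM weighting is defined in the paper, not here.

```python
import numpy as np

def cfd_cam(model, image, feature_maps, target_class):
    """Confidence-weighted CAM sketch: weight each feature map by the model's
    confidence for the target class when the input is masked by that map.
    Assumes image dims are integer multiples of the feature-map dims."""
    h, w = image.shape[:2]
    weights = []
    for fmap in feature_maps:                       # fmap: (h', w') activation map
        m = np.kron(fmap, np.ones((h // fmap.shape[0], w // fmap.shape[1])))
        m = (m - m.min()) / (m.max() - m.min() + 1e-8)
        probs = model(image * m[..., None])         # confidence on the masked input
        weights.append(probs[target_class])
    cam = np.tensordot(np.array(weights), np.array(feature_maps), axes=1)
    return np.maximum(cam, 0)                       # ReLU, as in standard CAM

# usage with a toy "model": model = lambda img: np.array([0.2, 0.8])
```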

id: 1807.02189
submitter: David Paulius
authors: David Paulius, Ahmad Babaeian Jelodar and Yu Sun
title: Functional Object-Oriented Network: Construction & Expansion
comments: 7 pages, 3 figures, presented at ICRA 2018
journal-ref: ICRA 2018 Submission -- 7 pages
doi: 10.1109/ICRA.2018.8460200
report-no: null
categories: cs.RO cs.AI
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract:
We build upon the functional object-oriented network (FOON), a structured
knowledge representation which is constructed from observations of human
activities and manipulations. A FOON can be used for representing object-motion
affordances. Knowledge retrieval through graph search allows us to obtain novel
manipulation sequences using knowledge spanning across many video sources,
which is the novelty of our approach. However, we are limited to the sources
collected. To further improve the performance of knowledge retrieval, as a
follow-up to our previous work, we discuss generalizing knowledge to be applied
to objects which are similar to what we have in FOON without manually
annotating new sources of knowledge. We discuss two means of generalization: 1)
expanding our network through the use of object similarity to create new
functional units from those we already have, and 2) compressing the functional
units by object categories rather than specific objects. We discuss experiments
which compare the performance of our knowledge retrieval algorithm with both
expansion and compression by categories.
versions:
[
{
"created": "Thu, 5 Jul 2018 21:59:30 GMT",
"version": "v1"
},
{
"created": "Fri, 31 Jul 2020 04:28:52 GMT",
"version": "v2"
}
]
update_date: 2020-08-03
authors_parsed:
[
[
"Paulius",
"David",
""
],
[
"Jelodar",
"Ahmad Babaeian",
""
],
[
"Sun",
"Yu",
""
]
]

id: 2403.16635
submitter: Zihan Ding
authors: Si Liu, Zihan Ding, Jiahui Fu, Hongyu Li, Siheng Chen, Shifeng Zhang, Xu Zhou
title: V2X-PC: Vehicle-to-everything Collaborative Perception via Point Cluster
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.CV
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract:
The objective of the collaborative vehicle-to-everything perception task is
to enhance the individual vehicle's perception capability through message
communication among neighboring traffic agents. Previous methods focus on
achieving optimal performance within bandwidth limitations and typically adopt
BEV maps as the basic collaborative message units. However, we demonstrate that
collaboration with dense representations is plagued by object feature
destruction during message packing, inefficient message aggregation for
long-range collaboration, and implicit structure representation communication.
To tackle these issues, we introduce a brand-new message unit, the point
cluster, designed to represent the scene sparsely with a combination of
low-level structure information and high-level semantic information. The point
cluster inherently preserves object information while packing messages, has
weak relevance to the collaboration range, and supports explicit structure
modeling. Building upon this representation, we propose a novel framework
V2X-PC for collaborative perception. This framework includes a Point Cluster
Packing (PCP) module to preserve object features and manage bandwidth through
the manipulation of cluster point numbers. As for effective message
aggregation, we propose a Point Cluster Aggregation (PCA) module to match and
merge point clusters associated with the same object. To further handle time
latency and pose errors encountered in real-world scenarios, we propose
parameter-free solutions that can adapt to different noise levels without
fine-tuning.
Experiments on two widely recognized collaborative perception benchmarks
showcase the superior performance of our method compared to the previous
state-of-the-art approaches relying on BEV maps.
versions:
[
{
"created": "Mon, 25 Mar 2024 11:24:02 GMT",
"version": "v1"
}
]
update_date: 2024-03-26
authors_parsed:
[
[
"Liu",
"Si",
""
],
[
"Ding",
"Zihan",
""
],
[
"Fu",
"Jiahui",
""
],
[
"Li",
"Hongyu",
""
],
[
"Chen",
"Siheng",
""
],
[
"Zhang",
"Shifeng",
""
],
[
"Zhou",
"Xu",
""
]
]

id: 2305.15932
submitter: Jie He
authors: Jie He, Simon Chi Lok U, Víctor Gutiérrez-Basulto and Jeff Z. Pan
title: BUCA: A Binary Classification Approach to Unsupervised Commonsense Question Answering
comments: Accepted by ACL 2023
journal-ref: null
doi: null
report-no: null
categories: cs.CL
license: http://creativecommons.org/licenses/by-nc-nd/4.0/
abstract:
Unsupervised commonsense reasoning (UCR) is becoming increasingly popular as
the construction of commonsense reasoning datasets is expensive, and they are
inevitably limited in their scope. A popular approach to UCR is to fine-tune
language models with external knowledge (e.g., knowledge graphs), but this
usually requires a large number of training examples. In this paper, we propose
to transform the downstream multiple choice question answering task into a
simpler binary classification task by ranking all candidate answers according
to their reasonableness. To this end, for training the model, we convert the
knowledge graph triples into reasonable and unreasonable texts. Extensive
experimental results show the effectiveness of our approach on various multiple
choice question answering benchmarks. Furthermore, compared with existing UCR
approaches using KGs, ours is less data-hungry. Our code is available at
https://github.com/probe2/BUCA.
versions:
[
{
"created": "Thu, 25 May 2023 10:59:47 GMT",
"version": "v1"
},
{
"created": "Wed, 7 Jun 2023 20:33:09 GMT",
"version": "v2"
}
]
update_date: 2023-06-09
authors_parsed:
[
[
"He",
"Jie",
""
],
[
"U",
"Simon Chi Lok",
""
],
[
"Gutiérrez-Basulto",
"Víctor",
""
],
[
"Pan",
"Jeff Z.",
""
]
]
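
The ranking-by-reasonableness recipe in miniature, with a bag-of-words classifier standing in for the fine-tuned language model and invented verbalized triples:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Train a binary "is this reasonable?" classifier on texts verbalized from
# KG triples, then rank each question's candidate answers by its score.
pos = ["birds can fly", "water is wet", "fire is hot"]
neg = ["birds can read", "water is dry", "fire is cold"]
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(pos + neg, [1] * len(pos) + [0] * len(neg))

candidates = ["fire is hot", "fire is cold"]        # verbalized answer options
scores = clf.predict_proba(candidates)[:, 1]        # reasonableness scores
print(candidates[scores.argmax()], scores)
```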

id: 2211.09200
submitter: Maël Le Treust
authors: Maël Le Treust and Tobias Oechtering
title: Power-Estimation Trade-off of Vector-valued Witsenhausen Counterexample with Causal Decoder
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.IT math.IT
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract:
The vector-valued extension of the famous Witsenhausen counterexample setup
is studied where the encoder, i.e. the first decision maker, non-causally knows
and encodes the i.i.d. state sequence and the decoder, i.e. the second decision
maker, causally estimates the interim state. The coding scheme is transferred
from the finite alphabet coordination problem for which it is proved to be
optimal. The extension to the Gaussian setup is based on a non-standard weak
typicality approach and requires a careful average estimation error analysis
since the interim state is estimated by the decoder. We provide a single-letter
expression that characterizes the optimal trade-off between the Witsenhausen
power cost and estimation cost. The two auxiliary random variables improve the
communication with the decoder, while performing the dual role of the channel
input, which also controls the state of the system. Interestingly, we show that
a pair of discrete and continuous auxiliary random variables outperforms both
Witsenhausen's two-point strategy and the best affine policies. The optimal
choice of random variables remains unknown.
versions:
[
{
"created": "Wed, 16 Nov 2022 20:40:36 GMT",
"version": "v1"
}
]
update_date: 2022-11-18
authors_parsed:
[
[
"Treust",
"Maël Le",
""
],
[
"Oechtering",
"Tobias",
""
]
]
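
For the scalar counterexample behind this setup, the two costs the abstract refers to are easy to evaluate numerically. Below is a Monte Carlo comparison of the classic two-point (sign) strategy against the best linear strategy in the standard parameterization; this is background, not the paper's vector-valued scheme.

```python
import numpy as np

# Witsenhausen setup: x0 ~ N(0, sigma^2), x1 = f(x0), u = x1 - x0,
# y = x1 + N(0, 1), cost = k^2 E[u^2] + E[(x1 - E[x1|y])^2].
rng = np.random.default_rng(0)
k, sigma, n = 0.2, 5.0, 1_000_000
x0 = rng.normal(0, sigma, n)

x1 = sigma * np.sign(x0)                 # two-point quantization strategy
u = x1 - x0
y = x1 + rng.normal(0, 1, n)
x1_hat = sigma * np.tanh(sigma * y)      # MMSE estimate of a +-sigma signal
cost_two_point = k**2 * np.mean(u**2) + np.mean((x1 - x1_hat) ** 2)

# best *linear* benchmark x1 = a * x0, optimal a found by scanning
a_grid = np.linspace(0, 1, 201)
costs = [k**2 * (a - 1) ** 2 * sigma**2
         + (a * sigma) ** 2 / (1 + (a * sigma) ** 2) for a in a_grid]
print(cost_two_point, min(costs))        # nonlinear wins for these k, sigma
```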

id: 1603.00993
submitter: Kanji Tanaka
authors: Tanaka Kanji
title: Self-localization from Images with Small Overlap
comments: 8 pages, 9 figures, Draft of a paper submitted to an International Conference
journal-ref: null
doi: null
report-no: null
categories: cs.CV
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract:
With the recent success of visual features from deep convolutional neural
networks (DCNN) in visual robot self-localization, it has become important and
practical to address more general self-localization scenarios. In this paper,
we address the scenario of self-localization from images with small overlap. We
explicitly introduce a localization difficulty index as a decreasing function
of view overlap between query and relevant database images and investigate
performance versus difficulty for challenging cross-view self-localization
tasks. We then reformulate self-localization as a scalable
bag-of-visual-features (BoVF) scene retrieval and present an efficient solution
called PCA-NBNN, aiming to facilitate fast and yet discriminative
correspondence between partially overlapping images. The proposed approach
adopts recent findings in discriminativity preserving encoding of DCNN features
using principal component analysis (PCA) and cross-domain scene matching using
naive Bayes nearest neighbor distance metric (NBNN). We experimentally
demonstrate that the proposed PCA-NBNN framework frequently achieves comparable
results to previous DCNN features and that the BoVF model is significantly more
efficient. We further address an important alternative scenario of
"self-localization from images with NO overlap" and report the result.
versions:
[
{
"created": "Thu, 3 Mar 2016 06:39:37 GMT",
"version": "v1"
}
]
update_date: 2016-03-04
authors_parsed:
[
[
"Kanji",
"Tanaka",
""
]
]
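
NBNN, the distance metric adopted here, is compact enough to state in full: an image's distance to a class is the sum, over its local descriptors, of squared distances to that class's nearest database descriptor. A sketch with synthetic descriptors:

```python
import numpy as np
from scipy.spatial import cKDTree

def nbnn_classify(query_descs, class_descs):
    """Naive Bayes Nearest Neighbor: query_descs is (n, d); class_descs maps
    class -> (m, d) pooled database descriptors. No quantization needed."""
    dists = {}
    for c, D in class_descs.items():
        nn_dist, _ = cKDTree(D).query(query_descs)   # nearest descriptor per query
        dists[c] = np.sum(nn_dist**2)
    return min(dists, key=dists.get), dists

rng = np.random.default_rng(0)
classes = {"indoor": rng.normal(0, 1, (500, 8)), "outdoor": rng.normal(1, 1, (500, 8))}
query = rng.normal(1, 1, (40, 8))                    # a bag of local descriptors
print(nbnn_classify(query, classes)[0])              # "outdoor"
```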

id: 1311.6676
submitter: Alexandr Klimchik
authors: Alexandr Klimchik (IRCCyN), Yier Wu (IRCCyN), Gabriel ABBA (LCFC), Benoît Furet (IRCCyN), Anatol Pashkevich (IRCCyN)
title: Robust algorithm for calibration of robotic manipulator model
comments: null
journal-ref: The IFAC Conference on Manufacturing Modeling, Management and Control (MIM 2013), Saint Petersburg, Russian Federation (2013)
doi: null
report-no: null
categories: cs.RO
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract:
The paper focuses on the robust identification of the geometrical and
elastostatic parameters of a robotic manipulator. The main attention is paid to
improving the efficiency of the identification algorithm. To increase the
identification accuracy, it is proposed to apply the weighted least squares
technique with a new algorithm for assigning the weighting coefficients. The
latter allows taking into account the variation of the measurement system
precision in different directions and throughout the robot workspace. The
advantages of the proposed approach are illustrated by an application example
that deals with the elastostatic calibration of an industrial robot.
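The weighted least squares estimate underlying such calibration has the closed form pi = (A^T W A)^{-1} A^T W b. A minimal numpy sketch, assuming inverse-variance weights (the paper's specific contribution lies in how W is assigned, which differs from this default):

```python
import numpy as np

def weighted_least_squares(A, b, weights):
    """pi_hat = (A^T W A)^{-1} A^T W b, with W diagonal."""
    W = np.diag(weights)                       # per-measurement weights
    AtW = A.T @ W
    return np.linalg.solve(AtW @ A, AtW @ b)   # parameter estimate

# Illustrative use: weights taken as inverse measurement variances,
# standing in for direction/workspace-dependent precision.
rng = np.random.default_rng(0)
A = rng.normal(size=(50, 4))                   # observation matrix
true_params = np.array([1.0, -0.5, 0.3, 2.0])
sigmas = rng.uniform(0.01, 0.1, size=50)       # heteroscedastic noise
b = A @ true_params + rng.normal(scale=sigmas)
print(weighted_least_squares(A, b, 1.0 / sigmas**2))
```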
|
[
{
"created": "Tue, 26 Nov 2013 14:02:08 GMT",
"version": "v1"
}
] |
2013-11-27
|
[
[
"Klimchik",
"Alexandr",
"",
"IRCCyN"
],
[
"Wu",
"Yier",
"",
"IRCCyN"
],
[
"ABBA",
"Gabriel",
"",
"LCFC"
],
[
"Furet",
"Benoît",
"",
"IRCCyN"
],
[
"Pashkevich",
"Anatol",
"",
"IRCCyN"
]
] |
The paper focuses on the robust identification of the geometrical and elastostatic parameters of a robotic manipulator. The main attention is paid to improving the efficiency of the identification algorithm. To increase the identification accuracy, it is proposed to apply the weighted least squares technique with a new algorithm for assigning the weighting coefficients. The latter allows taking into account the variation of the measurement system precision in different directions and throughout the robot workspace. The advantages of the proposed approach are illustrated by an application example that deals with the elastostatic calibration of an industrial robot.
|
2111.07549
|
Zhu Li
|
Zhu Li, Yuqing Zhang, Mengxi Nie, Ming Yan, Mengnan He, Ruixiong
Zhang, Caixia Gong
|
Improving Prosody for Unseen Texts in Speech Synthesis by Utilizing
Linguistic Information and Noisy Data
| null | null | null | null |
cs.CL cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent advancements in end-to-end speech synthesis have made it possible to
generate highly natural speech. However, training these models typically
requires a large amount of high-fidelity speech data, and for unseen texts, the
prosody of synthesized speech is relatively unnatural. To address these issues,
we propose to combine a fine-tuned BERT-based front-end with a pre-trained
FastSpeech2-based acoustic model to improve prosody modeling. The pre-trained
BERT is fine-tuned on the polyphone disambiguation task, the joint Chinese word
segmentation (CWS) and part-of-speech (POS) tagging task, and the prosody
structure prediction (PSP) task in a multi-task learning framework. FastSpeech
2 is pre-trained on large-scale external data that are noisy but easier to
obtain. Experimental results show that both the fine-tuned BERT model and the
pre-trained FastSpeech 2 can improve prosody, especially for those structurally
complex sentences.
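A hedged sketch of what such a multi-task front-end could look like -- one shared BERT encoder with token-level heads for the three tasks, trained by summing the per-task cross-entropy losses. Class and head names are ours, not the authors' implementation:

```python
import torch.nn as nn
from transformers import BertModel

class MultiTaskFrontEnd(nn.Module):
    def __init__(self, n_polyphone, n_cws_pos, n_psp,
                 pretrained="bert-base-chinese"):
        super().__init__()
        self.bert = BertModel.from_pretrained(pretrained)  # shared encoder
        h = self.bert.config.hidden_size
        self.polyphone_head = nn.Linear(h, n_polyphone)    # one head per task
        self.cws_pos_head = nn.Linear(h, n_cws_pos)
        self.psp_head = nn.Linear(h, n_psp)

    def forward(self, input_ids, attention_mask):
        hidden = self.bert(input_ids=input_ids,
                           attention_mask=attention_mask).last_hidden_state
        # Token-level predictions for each of the three front-end tasks.
        return (self.polyphone_head(hidden),
                self.cws_pos_head(hidden),
                self.psp_head(hidden))
```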
|
[
{
"created": "Mon, 15 Nov 2021 05:58:29 GMT",
"version": "v1"
}
] |
2021-11-16
|
[
[
"Li",
"Zhu",
""
],
[
"Zhang",
"Yuqing",
""
],
[
"Nie",
"Mengxi",
""
],
[
"Yan",
"Ming",
""
],
[
"He",
"Mengnan",
""
],
[
"Zhang",
"Ruixiong",
""
],
[
"Gong",
"Caixia",
""
]
] |
Recent advancements in end-to-end speech synthesis have made it possible to generate highly natural speech. However, training these models typically requires a large amount of high-fidelity speech data, and for unseen texts, the prosody of synthesized speech is relatively unnatural. To address these issues, we propose to combine a fine-tuned BERT-based front-end with a pre-trained FastSpeech2-based acoustic model to improve prosody modeling. The pre-trained BERT is fine-tuned on the polyphone disambiguation task, the joint Chinese word segmentation (CWS) and part-of-speech (POS) tagging task, and the prosody structure prediction (PSP) task in a multi-task learning framework. FastSpeech 2 is pre-trained on large-scale external data that are noisy but easier to obtain. Experimental results show that both the fine-tuned BERT model and the pre-trained FastSpeech 2 can improve prosody, especially for those structurally complex sentences.
|
2107.08319
|
Karishma Sharma
|
Karishma Sharma and Emilio Ferrara and Yan Liu
|
Characterizing Online Engagement with Disinformation and Conspiracies in
the 2020 U.S. Presidential Election
|
Accepted at ICWSM'22
|
ICWSM 2022
| null | null |
cs.SI cs.CY cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Identifying and characterizing disinformation in political discourse on
social media is critical to ensure the integrity of elections and democratic
processes around the world. Persistent manipulation of social media has
resulted in increased concerns regarding the 2020 U.S. Presidential Election,
due to its potential to influence individual opinions and social dynamics. In
this work, we focus on the identification of distorted facts, in the form of
unreliable and conspiratorial narratives in election-related tweets, to
characterize discourse manipulation prior to the election. We apply a detection
model to separate factual from unreliable (or conspiratorial) claims analyzing
a dataset of 242 million election-related tweets. The identified claims are
used to investigate targeted topics of disinformation, and conspiracy groups,
most notably the far-right QAnon conspiracy group. Further, we characterize
account engagements with unreliable and conspiracy tweets, and with the QAnon
conspiracy group, by political leaning and tweet types. Finally, using a
regression discontinuity design, we investigate whether Twitter's actions to
curb QAnon activity on the platform were effective, and how QAnon accounts
adapt to Twitter's restrictions.
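As a schematic of the regression discontinuity step on synthetic data (the variable names and the simple two-slope specification are ours, not the paper's exact design): fit separate trends on each side of the intervention date and read the level shift off the post-intervention indicator.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
days = np.arange(-30, 30)                      # days relative to cutoff
# Simulated daily activity with a -25 jump at the intervention.
activity = 100 + 0.5 * days - 25 * (days >= 0) + rng.normal(0, 5, 60)
df = pd.DataFrame({"t": days, "post": (days >= 0).astype(int),
                   "y": activity})
# Separate slopes on each side; the `post` coefficient estimates the jump.
fit = smf.ols("y ~ t + post + t:post", data=df).fit()
print(fit.params["post"])
```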
|
[
{
"created": "Sat, 17 Jul 2021 22:11:13 GMT",
"version": "v1"
},
{
"created": "Wed, 20 Oct 2021 17:50:37 GMT",
"version": "v2"
}
] |
2021-10-22
|
[
[
"Sharma",
"Karishma",
""
],
[
"Ferrara",
"Emilio",
""
],
[
"Liu",
"Yan",
""
]
] |
Identifying and characterizing disinformation in political discourse on social media is critical to ensure the integrity of elections and democratic processes around the world. Persistent manipulation of social media has resulted in increased concerns regarding the 2020 U.S. Presidential Election, due to its potential to influence individual opinions and social dynamics. In this work, we focus on the identification of distorted facts, in the form of unreliable and conspiratorial narratives in election-related tweets, to characterize discourse manipulation prior to the election. We apply a detection model to separate factual from unreliable (or conspiratorial) claims analyzing a dataset of 242 million election-related tweets. The identified claims are used to investigate targeted topics of disinformation, and conspiracy groups, most notably the far-right QAnon conspiracy group. Further, we characterize account engagements with unreliable and conspiracy tweets, and with the QAnon conspiracy group, by political leaning and tweet types. Finally, using a regression discontinuity design, we investigate whether Twitter's actions to curb QAnon activity on the platform were effective, and how QAnon accounts adapt to Twitter's restrictions.
|
2006.02379
|
Juli\'an Tachella Dr
|
Juli\'an Tachella and Junqi Tang and Mike Davies
|
The Neural Tangent Link Between CNN Denoisers and Non-Local Filters
| null | null | null | null |
cs.CV eess.IV eess.SP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Convolutional Neural Networks (CNNs) are now a well-established tool for
solving computational imaging problems. Modern CNN-based algorithms obtain
state-of-the-art performance in diverse image restoration problems.
Furthermore, it has been recently shown that, despite being highly
overparameterized, networks trained with a single corrupted image can still
perform as well as fully trained networks. We introduce a formal link between
such networks through their neural tangent kernel (NTK), and well-known
non-local filtering techniques, such as non-local means or BM3D. The filtering
function associated with a given network architecture can be obtained in closed
form without the need to train the network, being fully characterized by the
random initialization of the network weights. While the NTK theory accurately
predicts the filter associated with networks trained using standard gradient
descent, our analysis shows that it falls short of explaining the behaviour of
networks trained using the popular Adam optimizer. The latter achieves a larger
change of weights in hidden layers, adapting the non-local filtering function
during training. We evaluate our findings via extensive image denoising
experiments.
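For reference, a toy numpy implementation of non-local means -- the classical filter on the other side of the NTK link -- follows; it is deliberately simple and slow, for illustration only:

```python
import numpy as np

def non_local_means(img, patch=3, search=9, h=0.1):
    """Toy non-local means: each output pixel is a similarity-weighted
    average of pixels in a search window, with weights computed from
    patch differences. O(H*W*search^2); illustration only."""
    pad, half = patch // 2, search // 2
    padded = np.pad(img, pad + half, mode="reflect")
    out = np.zeros_like(img, dtype=float)
    H, W = img.shape
    for i in range(H):
        for j in range(W):
            ci, cj = i + pad + half, j + pad + half
            ref = padded[ci - pad:ci + pad + 1, cj - pad:cj + pad + 1]
            num, den = 0.0, 0.0
            for di in range(-half, half + 1):
                for dj in range(-half, half + 1):
                    ni, nj = ci + di, cj + dj
                    cand = padded[ni - pad:ni + pad + 1,
                                  nj - pad:nj + pad + 1]
                    w = np.exp(-((ref - cand) ** 2).mean() / h ** 2)
                    num += w * padded[ni, nj]
                    den += w
            out[i, j] = num / den
    return out
```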
|
[
{
"created": "Wed, 3 Jun 2020 16:50:54 GMT",
"version": "v1"
},
{
"created": "Thu, 4 Jun 2020 16:02:57 GMT",
"version": "v2"
},
{
"created": "Fri, 5 Jun 2020 16:41:44 GMT",
"version": "v3"
},
{
"created": "Mon, 16 Nov 2020 23:06:53 GMT",
"version": "v4"
}
] |
2020-11-18
|
[
[
"Tachella",
"Julián",
""
],
[
"Tang",
"Junqi",
""
],
[
"Davies",
"Mike",
""
]
] |
Convolutional Neural Networks (CNNs) are now a well-established tool for solving computational imaging problems. Modern CNN-based algorithms obtain state-of-the-art performance in diverse image restoration problems. Furthermore, it has been recently shown that, despite being highly overparameterized, networks trained with a single corrupted image can still perform as well as fully trained networks. We introduce a formal link between such networks through their neural tangent kernel (NTK), and well-known non-local filtering techniques, such as non-local means or BM3D. The filtering function associated with a given network architecture can be obtained in closed form without the need to train the network, being fully characterized by the random initialization of the network weights. While the NTK theory accurately predicts the filter associated with networks trained using standard gradient descent, our analysis shows that it falls short of explaining the behaviour of networks trained using the popular Adam optimizer. The latter achieves a larger change of weights in hidden layers, adapting the non-local filtering function during training. We evaluate our findings via extensive image denoising experiments.
|
2407.12465
|
Vignesh V Menon
|
Vignesh V Menon and Adam Wieckowski and Christian Stoffers and Jens
Brandenburg and Christian Lehmann and Benjamin Bross and Thomas Schierl and
Detlev Marpe
|
Enhancing Film Grain Coding in VVC: Improving Encoding Quality and
Efficiency
|
Accepted at IBC'24
| null | null | null |
cs.MM
|
http://creativecommons.org/licenses/by/4.0/
|
This paper presents an in-depth analysis of film grain handling in
open-source implementations of the Versatile Video Coding (VVC) standard. We
focus on two key components: the Film Grain Analysis (FGA) module implemented
in VVenC and the Film Grain Synthesis (FGS) module implemented in VVdeC. We
describe the methodologies used to implement these modules and discuss the
generation of Supplementary Enhancement Information (SEI) parameters to signal
film grain characteristics in the encoded video sequences. Additionally, we
conduct subjective and objective evaluations across Full HD videos to assess
the effectiveness of film grain handling. Our results demonstrate the
capability of the FGA and FGS techniques to accurately analyze and synthesize
film grain, thereby improving the visual quality of encoded video content.
Overall, our study contributes to advancing the understanding and
implementation of film grain handling techniques in VVC open-source
implementations, with implications for enhancing the viewing experience in
multimedia applications.
|
[
{
"created": "Wed, 17 Jul 2024 10:34:49 GMT",
"version": "v1"
}
] |
2024-07-18
|
[
[
"Menon",
"Vignesh V",
""
],
[
"Wieckowski",
"Adam",
""
],
[
"Stoffers",
"Christian",
""
],
[
"Brandenburg",
"Jens",
""
],
[
"Lehmann",
"Christian",
""
],
[
"Bross",
"Benjamin",
""
],
[
"Schierl",
"Thomas",
""
],
[
"Marpe",
"Detlev",
""
]
] |
This paper presents an in-depth analysis of film grain handling in open-source implementations of the Versatile Video Coding (VVC) standard. We focus on two key components: the Film Grain Analysis (FGA) module implemented in VVenC and the Film Grain Synthesis (FGS) module implemented in VVdeC. We describe the methodologies used to implement these modules and discuss the generation of Supplementary Enhancement Information (SEI) parameters to signal film grain characteristics in the encoded video sequences. Additionally, we conduct subjective and objective evaluations across Full HD videos to assess the effectiveness of film grain handling. Our results demonstrate the capability of the FGA and FGS techniques to accurately analyze and synthesize film grain, thereby improving the visual quality of encoded video content. Overall, our study contributes to advancing the understanding and implementation of film grain handling techniques in VVC open-source implementations, with implications for enhancing the viewing experience in multimedia applications.
|
1601.03976
|
Eduardo Martins Hargreaves
|
Eduardo Hargreaves, Paulo H De Aguiar Rodrigues and Daniel S.
Menasch\'e
|
Modeling and Analysis of Converged Network-Cloud Services
|
XIII Workshop em Clouds e Aplica\c{c}\~oes (WCGA2015)
| null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Networks connecting distributed cloud services through multiple data centers
are called cloud networks. These types of networks play a crucial role in cloud
computing and a holistic performance evaluation is essential before planning a
converged network-cloud environment. We analyze a specific case where some
resources can be centralized in one datacenter or distributed among multiple
data centers. The economy of scale in centralizing resources in a single pool
of resources can be overcome by an increase in communication costs. We propose
an analytical model to evaluate tradeoffs in terms of application requirements,
usage patterns, number of resources and communication costs. We numerically
evaluate the proposed model in a case study inspired by the oil and gas
industry, indicating how to cope with the tradeoff between statistical
multiplexing advantages of centralization and the corresponding increase in
communication infrastructure costs.
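A toy numerical sketch of this tradeoff (our simplification, not the paper's exact model): Erlang-B blocking for separate per-site pools versus one centralized pool, with an assumed per-request communication penalty that only the centralized option pays.

```python
def erlang_b(servers, load):
    """Erlang-B blocking probability via the standard recursion
    B(k) = a*B(k-1) / (k + a*B(k-1)), B(0) = 1."""
    b = 1.0
    for k in range(1, servers + 1):
        b = load * b / (k + load * b)
    return b

N, c, a = 4, 10, 7.0             # sites, servers per site, load per site
distributed = erlang_b(c, a)      # each site blocks independently
centralized = erlang_b(N * c, N * a)  # one big pool: multiplexing gain
comm_cost = 0.02                  # assumed per-request communication penalty
print(f"blocking, distributed:  {distributed:.4f}")
print(f"blocking, centralized:  {centralized:.4f}")
print(f"centralized total cost: {centralized + comm_cost:.4f}")
```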
|
[
{
"created": "Fri, 15 Jan 2016 15:57:04 GMT",
"version": "v1"
}
] |
2018-07-24
|
[
[
"Hargreaves",
"Eduardo",
""
],
[
"Rodrigues",
"Paulo H De Aguiar",
""
],
[
"Menasché",
"Daniel S.",
""
]
] |
Networks connecting distributed cloud services through multiple data centers are called cloud networks. These types of networks play a crucial role in cloud computing and a holistic performance evaluation is essential before planning a converged network-cloud environment. We analyze a specific case where some resources can be centralized in one datacenter or distributed among multiple data centers. The economy of scale in centralizing resources in a single pool of resources can be overcome by an increase in communication costs. We propose an analytical model to evaluate tradeoffs in terms of application requirements, usage patterns, number of resources and communication costs. We numerically evaluate the proposed model in a case study inspired by the oil and gas industry, indicating how to cope with the tradeoff between statistical multiplexing advantages of centralization and the corresponding increase in communication infrastructure costs.
|
2304.10254
|
Xu Zhang Zhang
|
Xu Zhang, Xinzheng Niu, Philippe Fournier-Viger, Xudong Dai
|
Image-text Retrieval via Preserving Main Semantics of Vision
|
6 pages, 3 figures, accepted by ICME2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Image-text retrieval is one of the major tasks of cross-modal retrieval.
Several approaches for this task map images and texts into a common space to
create correspondences between the two modalities. However, due to the
semantic richness of an image's content, redundant secondary information may
cause false matches. To address this issue, this paper presents a semantic
optimization approach, implemented as a Visual Semantic Loss (VSL), to assist
the model in focusing on an image's main content. This approach is inspired by
how people typically annotate the content of an image by describing its main
content. Thus, we leverage the annotated texts corresponding to an image to
assist the model in capturing the main content of the image, reducing the
negative impact of secondary content. Extensive experiments on two benchmark
datasets (MSCOCO and Flickr30K) demonstrate the superior performance of our
method. The code is available at: https://github.com/ZhangXu0963/VSL.
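One plausible reading of such a loss, sketched in PyTorch below, pulls each image embedding toward the consensus embedding of its own annotated captions; this is our interpretation for illustration, not the released VSL code (see the repository linked above for the actual implementation):

```python
import torch.nn.functional as F

def visual_semantic_loss(image_emb, caption_embs):
    """image_emb: (B, D); caption_embs: (B, K, D), K captions per image.
    Pulls each image embedding toward the mean caption embedding,
    i.e. toward the annotated 'main content' of the image."""
    main_content = F.normalize(caption_embs.mean(dim=1), dim=-1)
    image_emb = F.normalize(image_emb, dim=-1)
    # 1 - cosine similarity, averaged over the batch.
    return (1 - (image_emb * main_content).sum(dim=-1)).mean()
```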
|
[
{
"created": "Thu, 20 Apr 2023 12:23:29 GMT",
"version": "v1"
},
{
"created": "Fri, 28 Apr 2023 08:09:54 GMT",
"version": "v2"
}
] |
2023-05-01
|
[
[
"Zhang",
"Xu",
""
],
[
"Niu",
"Xinzheng",
""
],
[
"Fournier-Viger",
"Philippe",
""
],
[
"Dai",
"Xudong",
""
]
] |
Image-text retrieval is one of the major tasks of cross-modal retrieval. Several approaches for this task map images and texts into a common space to create correspondences between the two modalities. However, due to the semantic richness of an image's content, redundant secondary information may cause false matches. To address this issue, this paper presents a semantic optimization approach, implemented as a Visual Semantic Loss (VSL), to assist the model in focusing on an image's main content. This approach is inspired by how people typically annotate the content of an image by describing its main content. Thus, we leverage the annotated texts corresponding to an image to assist the model in capturing the main content of the image, reducing the negative impact of secondary content. Extensive experiments on two benchmark datasets (MSCOCO and Flickr30K) demonstrate the superior performance of our method. The code is available at: https://github.com/ZhangXu0963/VSL.
|
2112.08348
|
Daniel Khashabi Mr.
|
Daniel Khashabi, Shane Lyu, Sewon Min, Lianhui Qin, Kyle Richardson,
Sean Welleck, Hannaneh Hajishirzi, Tushar Khot, Ashish Sabharwal, Sameer
Singh, Yejin Choi
|
Prompt Waywardness: The Curious Case of Discretized Interpretation of
Continuous Prompts
|
NAACL 2022
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Fine-tuning continuous prompts for target tasks has recently emerged as a
compact alternative to full model fine-tuning. Motivated by these promising
results, we investigate the feasibility of extracting a discrete (textual)
interpretation of continuous prompts that is faithful to the problem they
solve. In practice, we observe a "wayward" behavior between the task solved by
continuous prompts and their nearest neighbor discrete projections: We can find
continuous prompts that solve a task while being projected to an arbitrary text
(e.g., definition of a different or even a contradictory task), while being
within a very small (2%) margin of the best continuous prompt of the same size
for the task. We provide intuitions behind this odd and surprising behavior, as
well as extensive empirical analyses quantifying the effect of various
parameters. For instance, for larger model sizes we observe higher waywardness,
i.e., we can find prompts that more closely map to any arbitrary text with a
smaller drop in accuracy. These findings have important implications relating
to the difficulty of faithfully interpreting continuous prompts and their
generalization across models and tasks, providing guidance for future progress
in prompting language models.
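The nearest-neighbor discrete projection discussed above can be sketched in a few lines of PyTorch; the helper below maps each continuous prompt vector to the vocabulary token whose input embedding is closest:

```python
import torch

def project_prompt(prompt_vectors, embedding_matrix):
    """prompt_vectors: (P, D); embedding_matrix: (V, D).
    Returns (P,) token ids of the nearest vocabulary embeddings."""
    d2 = torch.cdist(prompt_vectors, embedding_matrix)  # (P, V) distances
    return d2.argmin(dim=1)

# With a HuggingFace model, embedding_matrix would typically be
# model.get_input_embeddings().weight, and the returned ids can be
# decoded with the tokenizer to read off the projected text.
```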
|
[
{
"created": "Wed, 15 Dec 2021 18:55:05 GMT",
"version": "v1"
},
{
"created": "Wed, 4 May 2022 04:28:12 GMT",
"version": "v2"
}
] |
2022-05-05
|
[
[
"Khashabi",
"Daniel",
""
],
[
"Lyu",
"Shane",
""
],
[
"Min",
"Sewon",
""
],
[
"Qin",
"Lianhui",
""
],
[
"Richardson",
"Kyle",
""
],
[
"Welleck",
"Sean",
""
],
[
"Hajishirzi",
"Hannaneh",
""
],
[
"Khot",
"Tushar",
""
],
[
"Sabharwal",
"Ashish",
""
],
[
"Singh",
"Sameer",
""
],
[
"Choi",
"Yejin",
""
]
] |
Fine-tuning continuous prompts for target tasks has recently emerged as a compact alternative to full model fine-tuning. Motivated by these promising results, we investigate the feasibility of extracting a discrete (textual) interpretation of continuous prompts that is faithful to the problem they solve. In practice, we observe a "wayward" behavior between the task solved by continuous prompts and their nearest neighbor discrete projections: We can find continuous prompts that solve a task while being projected to an arbitrary text (e.g., definition of a different or even a contradictory task), while being within a very small (2%) margin of the best continuous prompt of the same size for the task. We provide intuitions behind this odd and surprising behavior, as well as extensive empirical analyses quantifying the effect of various parameters. For instance, for larger model sizes we observe higher waywardness, i.e., we can find prompts that more closely map to any arbitrary text with a smaller drop in accuracy. These findings have important implications relating to the difficulty of faithfully interpreting continuous prompts and their generalization across models and tasks, providing guidance for future progress in prompting language models.
|
2406.18032
|
Bernie Gao
|
Xiao Yan, Bernie Gao
|
A Communication Satellite Services Based Decentralized Network Protocol
| null | null | null | null |
cs.CR cs.DC cs.NI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
In this paper, we present a decentralized network protocol, Space Network
Protocol, based on Communication Satellite Services. The protocol outlines a
method for distributing information about the status of satellite communication
services across the entire blockchain network, facilitating fairness and
transparency in all communication services. Our primary objective is to
standardize the services delivered by all satellite networks under the
communication satellite protocol. This standard remains intact regardless of
potential unreliability associated with the satellites or the terminal
hardware. We propose PoD (Proof of Distribution) to verify whether the
communication satellites are online and PoF (Proof of Flow) to authenticate the
actual data flow provided by the communication satellites. In addition, we also
propose PoM (Proof of Mesh) to verify whether the communication satellites have
successfully meshed together. Utilizing zero-knowledge proofs and multi-party
cryptographic computations, we can evaluate the service provisioning parameters
of each satellite, even in the presence of potential terminal or network node
fraud. This method offers technical support for the modeling of distributed
network services.
|
[
{
"created": "Wed, 26 Jun 2024 03:01:40 GMT",
"version": "v1"
}
] |
2024-06-27
|
[
[
"Yan",
"Xiao",
""
],
[
"Gao",
"Bernie",
""
]
] |
In this paper, we present a decentralized network protocol, Space Network Protocol, based on Communication Satellite Services. The protocol outlines a method for distributing information about the status of satellite communication services across the entire blockchain network, facilitating fairness and transparency in all communication services. Our primary objective is to standardize the services delivered by all satellite networks under the communication satellite protocol. This standard remains intact regardless of potential unreliability associated with the satellites or the terminal hardware. We propose PoD (Proof of Distribution) to verify whether the communication satellites are online and PoF (Proof of Flow) to authenticate the actual data flow provided by the communication satellites. In addition, we also propose PoM (Proof of Mesh) to verify whether the communication satellites have successfully meshed together. Utilizing zero-knowledge proofs and multi-party cryptographic computations, we can evaluate the service provisioning parameters of each satellite, even in the presence of potential terminal or network node fraud. This method offers technical support for the modeling of distributed network services.
|
2403.07290
|
Runmin Cong
|
Runmin Cong, Ronghui Sheng, Hao Wu, Yulan Guo, Yunchao Wei, Wangmeng
Zuo, Yao Zhao, and Sam Kwong
|
Learning Hierarchical Color Guidance for Depth Map Super-Resolution
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Color information is the most commonly used prior knowledge for depth map
super-resolution (DSR), which can provide high-frequency boundary guidance for
detail restoration. However, its role and functionality in DSR have not been
fully developed. In this paper, we rethink the utilization of color information
and propose a hierarchical color guidance network to achieve DSR. On the one
hand, the low-level detail embedding module is designed to supplement
high-frequency color information of depth features in a residual mask manner at
the low-level stages. On the other hand, the high-level abstract guidance
module is proposed to maintain semantic consistency in the reconstruction
process by using a semantic mask that encodes the global guidance information.
Color information at these two levels is thus exploited at both the front and
back ends of the attention-based feature projection (AFP) module, providing
more comprehensive guidance. Simultaneously, the AFP module integrates the multi-scale
content enhancement block and adaptive attention projection block to make full
use of multi-scale information and adaptively project critical restoration
information in an attention manner for DSR. Compared with the state-of-the-art
methods on four benchmark datasets, our method achieves more competitive
performance both qualitatively and quantitatively.
|
[
{
"created": "Tue, 12 Mar 2024 03:44:46 GMT",
"version": "v1"
}
] |
2024-03-13
|
[
[
"Cong",
"Runmin",
""
],
[
"Sheng",
"Ronghui",
""
],
[
"Wu",
"Hao",
""
],
[
"Guo",
"Yulan",
""
],
[
"Wei",
"Yunchao",
""
],
[
"Zuo",
"Wangmeng",
""
],
[
"Zhao",
"Yao",
""
],
[
"Kwong",
"Sam",
""
]
] |
Color information is the most commonly used prior knowledge for depth map super-resolution (DSR), which can provide high-frequency boundary guidance for detail restoration. However, its role and functionality in DSR have not been fully developed. In this paper, we rethink the utilization of color information and propose a hierarchical color guidance network to achieve DSR. On the one hand, the low-level detail embedding module is designed to supplement high-frequency color information of depth features in a residual mask manner at the low-level stages. On the other hand, the high-level abstract guidance module is proposed to maintain semantic consistency in the reconstruction process by using a semantic mask that encodes the global guidance information. Color information at these two levels is thus exploited at both the front and back ends of the attention-based feature projection (AFP) module, providing more comprehensive guidance. Simultaneously, the AFP module integrates the multi-scale content enhancement block and adaptive attention projection block to make full use of multi-scale information and adaptively project critical restoration information in an attention manner for DSR. Compared with the state-of-the-art methods on four benchmark datasets, our method achieves more competitive performance both qualitatively and quantitatively.
|
2305.00131
|
Rajshekhar Das
|
Rajshekhar Das, Jonathan Francis, Sanket Vaibhav Mehta, Jean Oh, Emma
Strubell, Jose Moura
|
Regularizing Self-training for Unsupervised Domain Adaptation via
Structural Constraints
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Self-training based on pseudo-labels has emerged as a dominant approach for
addressing conditional distribution shifts in unsupervised domain adaptation
(UDA) for semantic segmentation problems. A notable drawback, however, is that
this family of approaches is susceptible to erroneous pseudo labels that arise
from confirmation biases in the source domain and that manifest as nuisance
factors in the target domain. A possible source for this mismatch is the
reliance on only photometric cues provided by RGB image inputs, which may
ultimately lead to sub-optimal adaptation. To mitigate the effect of mismatched
pseudo-labels, we propose to incorporate structural cues from auxiliary
modalities, such as depth, to regularise conventional self-training objectives.
Specifically, we introduce a contrastive pixel-level objectness constraint that
pulls the pixel representations within a region of an object instance closer,
while pushing those from different object categories apart. To obtain object
regions consistent with the true underlying object, we extract information from
both depth maps and RGB-images in the form of multimodal clustering. Crucially,
the objectness constraint is agnostic to the ground-truth semantic labels and,
hence, appropriate for unsupervised domain adaptation. In this work, we show
that our regularizer significantly improves top performing self-training
methods (by up to $2$ points) in various UDA benchmarks for semantic
segmentation. We include all code in the supplementary.
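A hedged PyTorch sketch of a pixel-level objectness constraint in this spirit: region prototypes are averaged pixel embeddings, and an InfoNCE-style loss pulls pixels toward their own prototype while pushing other prototypes away. The exact form here is our choice, not necessarily the paper's:

```python
import torch
import torch.nn.functional as F

def objectness_loss(pixel_feats, region_ids, temperature=0.1):
    """pixel_feats: (N, D) pixel embeddings; region_ids: (N,) labels of
    the (e.g. multimodal-clustering-derived) object region per pixel."""
    pixel_feats = F.normalize(pixel_feats, dim=-1)
    regions = region_ids.unique()                      # sorted region labels
    protos = torch.stack([pixel_feats[region_ids == r].mean(0)
                          for r in regions])           # (R, D) prototypes
    protos = F.normalize(protos, dim=-1)
    logits = pixel_feats @ protos.t() / temperature    # (N, R) similarities
    targets = torch.searchsorted(regions, region_ids)  # index of own region
    # Cross-entropy pulls each pixel to its own prototype, pushes others.
    return F.cross_entropy(logits, targets)
```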
|
[
{
"created": "Sat, 29 Apr 2023 00:12:26 GMT",
"version": "v1"
}
] |
2023-05-02
|
[
[
"Das",
"Rajshekhar",
""
],
[
"Francis",
"Jonathan",
""
],
[
"Mehta",
"Sanket Vaibhav",
""
],
[
"Oh",
"Jean",
""
],
[
"Strubell",
"Emma",
""
],
[
"Moura",
"Jose",
""
]
] |
Self-training based on pseudo-labels has emerged as a dominant approach for addressing conditional distribution shifts in unsupervised domain adaptation (UDA) for semantic segmentation problems. A notable drawback, however, is that this family of approaches is susceptible to erroneous pseudo labels that arise from confirmation biases in the source domain and that manifest as nuisance factors in the target domain. A possible source for this mismatch is the reliance on only photometric cues provided by RGB image inputs, which may ultimately lead to sub-optimal adaptation. To mitigate the effect of mismatched pseudo-labels, we propose to incorporate structural cues from auxiliary modalities, such as depth, to regularise conventional self-training objectives. Specifically, we introduce a contrastive pixel-level objectness constraint that pulls the pixel representations within a region of an object instance closer, while pushing those from different object categories apart. To obtain object regions consistent with the true underlying object, we extract information from both depth maps and RGB-images in the form of multimodal clustering. Crucially, the objectness constraint is agnostic to the ground-truth semantic labels and, hence, appropriate for unsupervised domain adaptation. In this work, we show that our regularizer significantly improves top performing self-training methods (by up to $2$ points) in various UDA benchmarks for semantic segmentation. We include all code in the supplementary.
|
2406.05395
|
Vahid MohammadZadeh Eivaghi
|
Vahid MohammadZadeh Eivaghi, Mahdi Aliyari Shoorehdeli
|
Dynamic importance learning using fisher information gain for nonlinear
system identification
| null | null | null | null |
cs.LG cs.SY eess.SY
|
http://creativecommons.org/licenses/by-sa/4.0/
|
The Fisher Information Matrix (FIM) provides a way of quantifying the
information content of an observable random variable concerning unknown
parameters within a model that characterizes the variable. When parameters in a
model are directly linked to individual features, the diagonal elements of the
FIM can signify the relative importance of each feature. However, in scenarios
where feature interactions may exist, a comprehensive exploration of the full
FIM is necessary rather than focusing solely on its diagonal elements. This
paper presents an end-to-end black-box system identification approach that
integrates the FIM into the training process to gain insights into dynamic
importance and overall model structure. A decision module is added to the first
layer of the network to determine the relevance scores using the entire FIM as
input. Forward propagation is then performed on the element-wise multiplication
of inputs and relevance scores. Simulation results demonstrate that the
proposed methodology effectively captures various types of interactions between
dynamics, outperforming existing methods limited to polynomial interactions.
Moreover, the effectiveness of this novel approach is confirmed through its
application in identifying a real-world industrial system, specifically the pH
neutralization process.
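For small models, the empirical FIM this builds on can be computed directly from per-sample gradients; a hedged PyTorch sketch of that standard construction (our illustration, not the paper's decision module):

```python
import torch

def empirical_fim(model, loss_fn, inputs, targets):
    """Empirical Fisher: average outer product of per-sample gradients.
    Only feasible for small parameter counts (n x n matrix)."""
    params = [p for p in model.parameters() if p.requires_grad]
    n = sum(p.numel() for p in params)
    fim = torch.zeros(n, n)
    for x, y in zip(inputs, targets):
        loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
        grads = torch.autograd.grad(loss, params)
        g = torch.cat([gr.flatten() for gr in grads])
        fim += torch.outer(g, g)
    return fim / len(inputs)

# The diagonal gives per-parameter importance; the approach above instead
# feeds the full matrix (including off-diagonal interaction terms) into a
# decision module that outputs relevance scores.
```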
|
[
{
"created": "Sat, 8 Jun 2024 08:12:41 GMT",
"version": "v1"
}
] |
2024-06-11
|
[
[
"Eivaghi",
"Vahid MohammadZadeh",
""
],
[
"Shoorehdeli",
"Mahdi Aliyari",
""
]
] |
The Fisher Information Matrix (FIM) provides a way of quantifying the information content of an observable random variable concerning unknown parameters within a model that characterizes the variable. When parameters in a model are directly linked to individual features, the diagonal elements of the FIM can signify the relative importance of each feature. However, in scenarios where feature interactions may exist, a comprehensive exploration of the full FIM is necessary rather than focusing solely on its diagonal elements. This paper presents an end-to-end black-box system identification approach that integrates the FIM into the training process to gain insights into dynamic importance and overall model structure. A decision module is added to the first layer of the network to determine the relevance scores using the entire FIM as input. Forward propagation is then performed on the element-wise multiplication of inputs and relevance scores. Simulation results demonstrate that the proposed methodology effectively captures various types of interactions between dynamics, outperforming existing methods limited to polynomial interactions. Moreover, the effectiveness of this novel approach is confirmed through its application in identifying a real-world industrial system, specifically the pH neutralization process.
|
1802.02663
|
Sridhar Chimalakonda
|
Sridhar Chimalakonda, Kesav V. Nori
|
A Patterns Based Approach for Design of Educational Technologies
|
Preprint Submitted to Educational Technology Research and Development
Journal, Springer
| null | null | null |
cs.SE cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Instructional design is a fundamental base for educational technologies as it
lays the foundation to facilitate learning and teaching based on pedagogical
underpinnings. However, most educational technologies today face two core
challenges in this context: (i) a lack of instructional design as a basis, and
(ii) a lack of support for a variety of instructional designs. In order to
address these challenges, we propose a patterns based approach for the design
of educational technologies. This is in contrast with existing literature that
focuses either on patterns in education or in software, but not both. The core
idea of our approach is to leverage patterns for modeling instructional design
knowledge and to connect it with patterns in software architecture. We discuss
different categories of patterns in instructional design. We then present the
notion of Pattern-Oriented Instructional Design (POID) as a way to model
instructional design as a connection of patterns (GoalPattern, ProcessPattern,
ContentPattern) and integrate it with Pattern-Oriented Software Architecture
(POSA) based on fundamental principles in software engineering. We demonstrate
our approach through an adult literacy case study (287 million learners, 22
Indian languages, and a variety of instructional designs). The results of our
approach (both web and mobile versions) are available at http://rice.iiit.ac.in
and were adopted by the National Literacy Mission Authority of the Government
of India.
|
[
{
"created": "Wed, 7 Feb 2018 22:41:54 GMT",
"version": "v1"
}
] |
2018-02-09
|
[
[
"Chimalakonda",
"Sridhar",
""
],
[
"Nori",
"Kesav V.",
""
]
] |
Instructional design is a fundamental base for educational technologies as it lays the foundation to facilitate learning and teaching based on pedagogical underpinnings. However, most educational technologies today face two core challenges in this context: (i) a lack of instructional design as a basis, and (ii) a lack of support for a variety of instructional designs. In order to address these challenges, we propose a patterns based approach for the design of educational technologies. This is in contrast with existing literature that focuses either on patterns in education or in software, but not both. The core idea of our approach is to leverage patterns for modeling instructional design knowledge and to connect it with patterns in software architecture. We discuss different categories of patterns in instructional design. We then present the notion of Pattern-Oriented Instructional Design (POID) as a way to model instructional design as a connection of patterns (GoalPattern, ProcessPattern, ContentPattern) and integrate it with Pattern-Oriented Software Architecture (POSA) based on fundamental principles in software engineering. We demonstrate our approach through an adult literacy case study (287 million learners, 22 Indian languages, and a variety of instructional designs). The results of our approach (both web and mobile versions) are available at http://rice.iiit.ac.in and were adopted by the National Literacy Mission Authority of the Government of India.
|
2402.12545
|
Danna Zheng
|
Danna Zheng, Danyang Liu, Mirella Lapata, Jeff Z. Pan
|
TrustScore: Reference-Free Evaluation of LLM Response Trustworthiness
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Large Language Models (LLMs) have demonstrated impressive capabilities across
various domains, prompting a surge in their practical applications. However,
concerns have arisen regarding the trustworthiness of LLM outputs,
particularly in closed-book question-answering tasks, where non-experts may
struggle to identify inaccuracies due to the absence of contextual or ground
truth information. This paper introduces TrustScore, a framework based on the
concept of Behavioral Consistency, which evaluates whether an LLM's response
aligns with its intrinsic knowledge. Additionally, TrustScore can seamlessly
integrate with fact-checking methods, which assess alignment with external
knowledge sources. The experimental results show that TrustScore achieves
strong correlations with human judgments, surpassing existing reference-free
metrics, and achieving results on par with reference-based metrics.
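A minimal sketch of behavioral-consistency scoring in this spirit: sample several answers to the same question and measure agreement. Here `generate_answer` is a stand-in for whatever sampling interface the LLM exposes, and exact-match agreement is a simplification of whatever matching the actual framework uses:

```python
from collections import Counter

def consistency_score(generate_answer, question, k=8):
    """Sample k answers and return the majority-agreement fraction:
    1.0 means fully self-consistent, 1/k means every answer differs."""
    answers = [generate_answer(question) for _ in range(k)]
    majority = Counter(answers).most_common(1)[0][1]
    return majority / k

# In the framework described above, this intrinsic-consistency signal can
# then be combined with a fact-checking score against external knowledge.
```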
|
[
{
"created": "Mon, 19 Feb 2024 21:12:14 GMT",
"version": "v1"
},
{
"created": "Mon, 6 May 2024 22:02:10 GMT",
"version": "v2"
}
] |
2024-05-08
|
[
[
"Zheng",
"Danna",
""
],
[
"Liu",
"Danyang",
""
],
[
"Lapata",
"Mirella",
""
],
[
"Pan",
"Jeff Z.",
""
]
] |
Large Language Models (LLMs) have demonstrated impressive capabilities across various domains, prompting a surge in their practical applications. However, concerns have arisen regarding the trustworthiness of LLM outputs, particularly in closed-book question-answering tasks, where non-experts may struggle to identify inaccuracies due to the absence of contextual or ground truth information. This paper introduces TrustScore, a framework based on the concept of Behavioral Consistency, which evaluates whether an LLM's response aligns with its intrinsic knowledge. Additionally, TrustScore can seamlessly integrate with fact-checking methods, which assess alignment with external knowledge sources. The experimental results show that TrustScore achieves strong correlations with human judgments, surpassing existing reference-free metrics, and achieving results on par with reference-based metrics.
|
1610.09256
|
Amir H Jafari
|
Amir H. Jafari, Ming Ding, David Lopez-Perez, Jie Zhang
|
Performance Impact of LOS and NLOS Transmissions in Dense Cellular
Networks under Rician Fading
|
24 pages, 3 figures. Submitted to IEEE Transactions on Wireless
Communications
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we analyse the performance of dense small cell networks (SCNs).
We derive analytical expressions for both their coverage probability and their
area spectral efficiency (ASE) using a path loss model that considers both
line-of-sight (LOS) and non-LOS (NLOS) components. Due to the close proximity
of small cell base stations (BSs) and user equipments (UEs) in such dense SCNs,
we also consider Rician fading as the multi-path fading channel model for both
the LOS and NLOS fading transmissions, since the Rayleigh fading model used in
most existing works analysing dense SCNs is not accurate enough in this
setting. We then compare the performance impact of LOS and NLOS transmissions
in dense SCNs under Rician fading with that based on Rayleigh fading. The
analysis and the simulation results show that in dense SCNs where LOS
transmissions dominate the performance, the impact of Rician fading on the
overall system performance is minor, and does not help to address the
performance losses brought by the transition of many interfering signals from
NLOS to LOS.
|
[
{
"created": "Fri, 28 Oct 2016 15:07:21 GMT",
"version": "v1"
}
] |
2016-10-31
|
[
[
"Jafari",
"Amir H.",
""
],
[
"Ding",
"Ming",
""
],
[
"Lopez-Perez",
"David",
""
],
[
"Zhang",
"Jie",
""
]
] |
In this paper, we analyse the performance of dense small cell networks (SCNs). We derive analytical expressions for both their coverage probability and their area spectral efficiency (ASE) using a path loss model that considers both line-of-sight (LOS) and non-LOS (NLOS) components. Due to the close proximity of small cell base stations (BSs) and user equipments (UEs) in such dense SCNs, we also consider Rician fading as the multi-path fading channel model for both the LOS and NLOS fading transmissions, since the Rayleigh fading model used in most existing works analysing dense SCNs is not accurate enough in this setting. We then compare the performance impact of LOS and NLOS transmissions in dense SCNs under Rician fading with that based on Rayleigh fading. The analysis and the simulation results show that in dense SCNs where LOS transmissions dominate the performance, the impact of Rician fading on the overall system performance is minor, and does not help to address the performance losses brought by the transition of many interfering signals from NLOS to LOS.
|
1705.06002
|
Aubrey Alston
|
Aubrey Alston
|
Attribute-based Encryption for Attribute-based Authentication,
Authorization, Storage, and Transmission in Distributed Storage Systems
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Attribute-based encryption is a form of encryption which offers the capacity
to encrypt data such that it is only accessible to individuals holding a
satisfactory configuration of attributes. As cloud and distributed computing
become more pervasive in both private and public spheres, attribute-based
encryption holds potential to address the issue of achieving secure
authentication, authorization, and transmission in these environments where
performance must scale with security while also supporting fine-grained access
control among a massively large number of consumers. With this work, we offer
an example generic configurable stateless protocol for secure attribute-based
authentication, authorization, storage, and transmission in distributed storage
systems based upon ciphertext-policy attribute-based encryption (CP-ABE),
discuss the experience of implementing a distributed storage system around this
protocol, and present future avenues of work enabled by such a protocol. The
key contribution of this work is an illustration of a means by which any CP-ABE
system may be utilized in a black-box manner for attribute-based authentication
and cryptographically enforced attribute-based access control in distributed
storage systems.
|
[
{
"created": "Wed, 17 May 2017 04:23:45 GMT",
"version": "v1"
}
] |
2017-05-18
|
[
[
"Alston",
"Aubrey",
""
]
] |
Attribute-based encryption is a form of encryption which offers the capacity to encrypt data such that it is only accessible to individuals holding a satisfactory configuration of attributes. As cloud and distributed computing become more pervasive in both private and public spheres, attribute-based encryption holds potential to address the issue of achieving secure authentication, authorization, and transmission in these environments where performance must scale with security while also supporting fine-grained access control among a massively large number of consumers. With this work, we offer an example generic configurable stateless protocol for secure attribute-based authentication, authorization, storage, and transmission in distributed storage systems based upon ciphertext-policy attribute-based encryption (CP-ABE), discuss the experience of implementing a distributed storage system around this protocol, and present future avenues of work enabled by such a protocol. The key contribution of this work is an illustration of a means by which any CP-ABE system may be utilized in a black-box manner for attribute-based authentication and cryptographically enforced attribute-based access control in distributed storage systems.
|
2203.10133
|
Peter West
|
Peter West, Chris Quirk, Michel Galley, Yejin Choi
|
Probing Factually Grounded Content Transfer with Factual Ablation
| null | null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Despite recent success, large neural models often generate factually
incorrect text. Compounding this is the lack of a standard automatic evaluation
for factuality--it cannot be meaningfully improved if it cannot be measured.
Grounded generation promises a path to solving both of these problems: models
draw on a reliable external document (grounding) for factual information,
simplifying the challenge of factuality. Measuring factuality is also
simplified--to factual consistency, testing whether the generation agrees with
the grounding, rather than all facts. Yet, without a standard automatic metric
for factual consistency, factually grounded generation remains an open problem.
We study this problem for content transfer, in which generations extend a
prompt, using information from factual grounding. Particularly, this domain
allows us to introduce the notion of factual ablation for automatically
measuring factual consistency: this captures the intuition that the model
should be less likely to produce an output given a less relevant grounding
document. In practice, we measure this by presenting a model with two grounding
documents, and the model should prefer to use the more factually relevant one.
We contribute two evaluation sets to measure this. Applying our new evaluation,
we propose multiple novel methods improving over strong baselines.
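A hedged sketch of the factual-ablation test under a causal LM (GPT-2 here is a stand-in model choice, not the paper's setup): the continuation should be more likely given the relevant grounding document than given a distractor.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def continuation_nll(grounding, prompt, continuation):
    """Mean token NLL of `continuation` given grounding+prompt; prefix
    tokens are masked with -100 so only the continuation is scored.
    (Tokenization at the boundary is treated approximately.)"""
    prefix = grounding + "\n" + prompt
    prefix_len = tok(prefix, return_tensors="pt").input_ids.shape[1]
    full_ids = tok(prefix + continuation, return_tensors="pt").input_ids
    labels = full_ids.clone()
    labels[:, :prefix_len] = -100
    with torch.no_grad():
        return model(full_ids, labels=labels).loss.item()

def prefers_relevant(relevant_doc, distractor_doc, prompt, continuation):
    return (continuation_nll(relevant_doc, prompt, continuation)
            < continuation_nll(distractor_doc, prompt, continuation))
```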
|
[
{
"created": "Fri, 18 Mar 2022 19:18:54 GMT",
"version": "v1"
},
{
"created": "Tue, 29 Mar 2022 00:08:38 GMT",
"version": "v2"
}
] |
2022-03-30
|
[
[
"West",
"Peter",
""
],
[
"Quirk",
"Chris",
""
],
[
"Galley",
"Michel",
""
],
[
"Choi",
"Yejin",
""
]
] |
Despite recent success, large neural models often generate factually incorrect text. Compounding this is the lack of a standard automatic evaluation for factuality--it cannot be meaningfully improved if it cannot be measured. Grounded generation promises a path to solving both of these problems: models draw on a reliable external document (grounding) for factual information, simplifying the challenge of factuality. Measuring factuality is also simplified--to factual consistency, testing whether the generation agrees with the grounding, rather than all facts. Yet, without a standard automatic metric for factual consistency, factually grounded generation remains an open problem. We study this problem for content transfer, in which generations extend a prompt, using information from factual grounding. Particularly, this domain allows us to introduce the notion of factual ablation for automatically measuring factual consistency: this captures the intuition that the model should be less likely to produce an output given a less relevant grounding document. In practice, we measure this by presenting a model with two grounding documents, and the model should prefer to use the more factually relevant one. We contribute two evaluation sets to measure this. Applying our new evaluation, we propose multiple novel methods improving over strong baselines.
|
2205.15523
|
Jeremiah Deng
|
Jinyong Hou, Jeremiah D. Deng, Stephen Cranefield, Xuejie Din
|
Variational Transfer Learning using Cross-Domain Latent Modulation
|
Under review. Extended version of a previous WACV paper
(arXiv:2012.11727). 13 pages, 8 figures
| null | null | null |
cs.LG cs.AI cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
To successfully apply trained neural network models to new domains, powerful
transfer learning solutions are essential. We propose to introduce a novel
cross-domain latent modulation mechanism to a variational autoencoder framework
so as to achieve effective transfer learning. Our key idea is to procure deep
representations from one data domain and use it to influence the
reparameterization of the latent variable of another domain. Specifically, deep
representations of the source and target domains are first extracted by a
unified inference model and aligned by employing gradient reversal. The learned
deep representations are then cross-modulated to the latent encoding of the
alternative domain, where consistency constraints are also applied. In the
empirical validation that includes a number of transfer learning benchmark
tasks for unsupervised domain adaptation and image-to-image translation, our
model demonstrates competitive performance, which is also supported by evidence
obtained from visualization.
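A schematic PyTorch sketch of the cross-domain latent modulation idea as we read it (not the released code): deep features from one domain shift and scale the reparameterized latent code of the other.

```python
import torch
import torch.nn as nn

class ModulatedReparam(nn.Module):
    def __init__(self, feat_dim, latent_dim):
        super().__init__()
        self.to_gamma = nn.Linear(feat_dim, latent_dim)
        self.to_beta = nn.Linear(feat_dim, latent_dim)

    def forward(self, mu, logvar, cross_feat):
        """mu, logvar: latent stats of this domain's encoder;
        cross_feat: deep representation from the *other* domain."""
        eps = torch.randn_like(mu)
        z = mu + torch.exp(0.5 * logvar) * eps   # standard VAE draw
        gamma = self.to_gamma(cross_feat)        # cross-domain scale
        beta = self.to_beta(cross_feat)          # cross-domain shift
        return (1 + gamma) * z + beta            # modulated latent code
```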
|
[
{
"created": "Tue, 31 May 2022 03:47:08 GMT",
"version": "v1"
},
{
"created": "Wed, 31 Jan 2024 05:30:22 GMT",
"version": "v2"
}
] |
2024-02-01
|
[
[
"Hou",
"Jinyong",
""
],
[
"Deng",
"Jeremiah D.",
""
],
[
"Cranefield",
"Stephen",
""
],
[
"Din",
"Xuejie",
""
]
] |
To successfully apply trained neural network models to new domains, powerful transfer learning solutions are essential. We propose to introduce a novel cross-domain latent modulation mechanism to a variational autoencoder framework so as to achieve effective transfer learning. Our key idea is to procure deep representations from one data domain and use them to influence the reparameterization of the latent variable of another domain. Specifically, deep representations of the source and target domains are first extracted by a unified inference model and aligned by employing gradient reversal. The learned deep representations are then cross-modulated to the latent encoding of the alternative domain, where consistency constraints are also applied. In the empirical validation that includes a number of transfer learning benchmark tasks for unsupervised domain adaptation and image-to-image translation, our model demonstrates competitive performance, which is also supported by evidence obtained from visualization.
|
1902.10785
|
Ruizhi Liao
|
Ruizhi Liao, Jonathan Rubin, Grace Lam, Seth Berkowitz, Sandeep Dalal,
William Wells, Steven Horng, Polina Golland
|
Semi-supervised Learning for Quantification of Pulmonary Edema in Chest
X-Ray Images
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose and demonstrate machine learning algorithms to assess the severity
of pulmonary edema in chest x-ray images of congestive heart failure patients.
Accurate assessment of pulmonary edema in heart failure is critical when making
treatment and disposition decisions. Our work is grounded in a large-scale
clinical dataset of over 300,000 x-ray images with associated radiology
reports. While edema severity labels can be extracted unambiguously from a
small fraction of the radiology reports, accurate annotation is challenging in
most cases. To take advantage of the unlabeled images, we develop a Bayesian
model that includes a variational auto-encoder for learning a latent
representation from the entire image set trained jointly with a regressor that
employs this representation for predicting pulmonary edema severity. Our
experimental results suggest that modeling the distribution of images jointly
with the limited labels improves the accuracy of pulmonary edema scoring
compared to a strictly supervised approach. To the best of our knowledge, this
is the first attempt to employ machine learning algorithms to automatically and
quantitatively assess the severity of pulmonary edema in chest x-ray images.
|
[
{
"created": "Wed, 27 Feb 2019 21:03:40 GMT",
"version": "v1"
},
{
"created": "Thu, 4 Apr 2019 14:27:07 GMT",
"version": "v2"
},
{
"created": "Wed, 10 Apr 2019 01:47:21 GMT",
"version": "v3"
}
] |
2019-04-11
|
[
[
"Liao",
"Ruizhi",
""
],
[
"Rubin",
"Jonathan",
""
],
[
"Lam",
"Grace",
""
],
[
"Berkowitz",
"Seth",
""
],
[
"Dalal",
"Sandeep",
""
],
[
"Wells",
"William",
""
],
[
"Horng",
"Steven",
""
],
[
"Golland",
"Polina",
""
]
] |
We propose and demonstrate machine learning algorithms to assess the severity of pulmonary edema in chest x-ray images of congestive heart failure patients. Accurate assessment of pulmonary edema in heart failure is critical when making treatment and disposition decisions. Our work is grounded in a large-scale clinical dataset of over 300,000 x-ray images with associated radiology reports. While edema severity labels can be extracted unambiguously from a small fraction of the radiology reports, accurate annotation is challenging in most cases. To take advantage of the unlabeled images, we develop a Bayesian model that includes a variational auto-encoder for learning a latent representation from the entire image set trained jointly with a regressor that employs this representation for predicting pulmonary edema severity. Our experimental results suggest that modeling the distribution of images jointly with the limited labels improves the accuracy of pulmonary edema scoring compared to a strictly supervised approach. To the best of our knowledge, this is the first attempt to employ machine learning algorithms to automatically and quantitatively assess the severity of pulmonary edema in chest x-ray images.
|
2011.04446
|
Tanvirul Alam
|
Tanvirul Alam, Akib Khan and Firoj Alam
|
Bangla Text Classification using Transformers
| null | null | null | null |
cs.CL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Text classification has been one of the earliest problems in NLP. Over time
the scope of application areas has broadened and the difficulty of dealing with
new areas (e.g., noisy social media content) has increased. The problem-solving
strategy has shifted from classical machine learning to deep learning
algorithms. One of the more recent deep neural network architectures is the
Transformer. Models built on this architecture and its variants have recently
shown success in many downstream natural language processing tasks, especially
for resource-rich languages, e.g., English. However, these models have not been
explored fully for Bangla text classification tasks. In this work, we fine-tune
multilingual transformer models for Bangla text classification tasks in
different domains, including sentiment analysis, emotion detection, news
categorization, and authorship attribution. We obtain state-of-the-art results
on six benchmark datasets, improving upon previous results by 5-29% accuracy
across different tasks.
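A minimal fine-tuning sketch in the spirit of the paper, using a standard multilingual checkpoint; the dataset objects are left abstract and the hyperparameters are illustrative:

```python
from transformers import (AutoModelForSequenceClassification,
                          AutoTokenizer, Trainer, TrainingArguments)

name = "bert-base-multilingual-cased"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name,
                                                           num_labels=2)

def encode(batch):
    """Tokenize a batch of examples with 'text' and 'label' columns."""
    return tok(batch["text"], truncation=True, padding="max_length",
               max_length=128)

# train_ds / eval_ds are assumed HF datasets (e.g. a Bangla sentiment
# corpus) already mapped through `encode`; then:
args = TrainingArguments(output_dir="out", num_train_epochs=3,
                         per_device_train_batch_size=16)
# Trainer(model=model, args=args, train_dataset=train_ds,
#         eval_dataset=eval_ds).train()
```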
|
[
{
"created": "Mon, 9 Nov 2020 14:12:07 GMT",
"version": "v1"
}
] |
2020-11-10
|
[
[
"Alam",
"Tanvirul",
""
],
[
"Khan",
"Akib",
""
],
[
"Alam",
"Firoj",
""
]
] |
Text classification has been one of the earliest problems in NLP. Over time the scope of application areas has broadened and the difficulty of dealing with new areas (e.g., noisy social media content) has increased. The problem-solving strategy has shifted from classical machine learning to deep learning algorithms. One of the more recent deep neural network architectures is the Transformer. Models built on this architecture and its variants have recently shown success in many downstream natural language processing tasks, especially for resource-rich languages, e.g., English. However, these models have not been explored fully for Bangla text classification tasks. In this work, we fine-tune multilingual transformer models for Bangla text classification tasks in different domains, including sentiment analysis, emotion detection, news categorization, and authorship attribution. We obtain state-of-the-art results on six benchmark datasets, improving upon previous results by 5-29% accuracy across different tasks.
|
1902.03782
|
Jianxin Lin
|
Jianxin Lin, Zhibo Chen, Yingce Xia, Sen Liu, Tao Qin, Jiebo Luo
|
Exploring Explicit Domain Supervision for Latent Space Disentanglement
in Unpaired Image-to-Image Translation
|
Accepted by IEEE Transaction on Pattern Analysis and Machine
Intelligence (TPAMI).13 pages, 11 figures, 7 Tables
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Image-to-image translation tasks have been widely investigated with
Generative Adversarial Networks (GANs). However, existing approaches are mostly
designed in an unsupervised manner while little attention has been paid to
domain information within unpaired data. In this paper, we treat domain
information as explicit supervision and design an unpaired image-to-image
translation framework, Domain-supervised GAN (DosGAN), which takes the first
step towards the exploration of explicit domain supervision. In contrast to
representing domain characteristics using different generators or domain codes,
we pre-train a classification network to explicitly classify the domain of an
image. After pre-training, this network is used to extract the domain-specific
features of each image. Such features, together with the domain-independent
features extracted by another encoder (shared across different domains), are
used to generate images in the target domain. Extensive experiments on multiple
facial attribute translation, multiple identity translation, multiple season
translation and conditional edges-to-shoes/handbags demonstrate the
effectiveness of our method. In addition, we can transfer the domain-specific
feature extractor obtained on the Facescrub dataset with domain supervision
information to unseen domains, such as faces in the CelebA dataset. We also
succeed in achieving conditional translation with any two images in CelebA,
while previous models like StarGAN cannot handle this task.
|
[
{
"created": "Mon, 11 Feb 2019 09:07:30 GMT",
"version": "v1"
},
{
"created": "Tue, 26 Mar 2019 03:22:18 GMT",
"version": "v2"
},
{
"created": "Tue, 1 Oct 2019 02:54:46 GMT",
"version": "v3"
},
{
"created": "Tue, 22 Oct 2019 08:24:52 GMT",
"version": "v4"
}
] |
2019-10-23
|
[
[
"Lin",
"Jianxin",
""
],
[
"Chen",
"Zhibo",
""
],
[
"Xia",
"Yingce",
""
],
[
"Liu",
"Sen",
""
],
[
"Qin",
"Tao",
""
],
[
"Luo",
"Jiebo",
""
]
] |
Image-to-image translation tasks have been widely investigated with Generative Adversarial Networks (GANs). However, existing approaches are mostly designed in an unsupervised manner while little attention has been paid to domain information within unpaired data. In this paper, we treat domain information as explicit supervision and design an unpaired image-to-image translation framework, Domain-supervised GAN (DosGAN), which takes the first step towards the exploration of explicit domain supervision. In contrast to representing domain characteristics using different generators or domain codes, we pre-train a classification network to explicitly classify the domain of an image. After pre-training, this network is used to extract the domain-specific features of each image. Such features, together with the domain-independent features extracted by another encoder (shared across different domains), are used to generate images in the target domain. Extensive experiments on multiple facial attribute translation, multiple identity translation, multiple season translation and conditional edges-to-shoes/handbags demonstrate the effectiveness of our method. In addition, we can transfer the domain-specific feature extractor obtained on the Facescrub dataset with domain supervision information to unseen domains, such as faces in the CelebA dataset. We also succeed in achieving conditional translation with any two images in CelebA, while previous models like StarGAN cannot handle this task.
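A conceptual sketch of the decomposition this abstract describes follows: a pre-trained domain classifier supplies domain-specific features, a shared encoder supplies domain-independent features, and a decoder combines them. All layer sizes and shapes are illustrative stand-ins; the real model is convolutional and adversarially trained.

# A hedged sketch of the DosGAN feature decomposition (sizes illustrative).
import torch
import torch.nn as nn

D_IMG, D_DOM, D_CONT = 256, 32, 64  # flattened image, domain feat, content feat

domain_classifier = nn.Sequential(  # pre-trained to predict the domain label;
    nn.Linear(D_IMG, D_DOM), nn.ReLU())  # its penultimate layer gives domain feats
shared_encoder = nn.Sequential(nn.Linear(D_IMG, D_CONT), nn.ReLU())
decoder = nn.Sequential(nn.Linear(D_DOM + D_CONT, D_IMG), nn.Tanh())

def translate(src_img, ref_img_from_target_domain):
    """Render the source content with the target domain's characteristics."""
    content = shared_encoder(src_img)                       # domain-independent
    style = domain_classifier(ref_img_from_target_domain)   # domain-specific
    return decoder(torch.cat([content, style], dim=-1))

src = torch.randn(1, D_IMG)
ref = torch.randn(1, D_IMG)
print(translate(src, ref).shape)  # torch.Size([1, 256])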
|
2203.09596
|
Miroslav Bures
|
Vaclav Rechtberger, Miroslav Bures, Bestoun S. Ahmed, Youcef Belkhier,
Jiri Nema, Hynek Schvach
|
Prioritized Variable-length Test Cases Generation for Finite State
Machines
|
Paper accepted at the ITEQS workshop of the 15th IEEE International
Conference on Software Testing, Verification and Validation (ICST) 2022
conference, April 4 - 13, 2022, https://icst2022.vrain.upv.es/. New version -
correction of typo in captions of Table III
| null | null | null |
cs.SE cs.AI cs.FL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Model-based Testing (MBT) is an effective approach for testing when parts of
a system-under-test have the characteristics of a finite state machine (FSM).
Despite various strategies in the literature on this topic, little work exists
to handle special testing situations. More specifically, when concurrently: (1)
the test paths can start and end only in defined states of the FSM, (2) a
prioritization mechanism is needed that requires only defined states and
transitions of the FSM to be visited by test cases, and (3) the test paths must be
in a given length range, not necessarily of explicit uniform length. This paper
presents a test generation strategy that satisfies all these requirements. A
concurrent combination of these requirements is highly practical for real
industrial testing. Six variants of possible algorithms to implement this
strategy are described. Using a mixture of 180 problem instances from real
automotive and defense projects and artificially generated FSMs, all variants
are compared with a baseline strategy based on an established N-switch coverage
concept modification. Various properties of the generated test paths and their
potential to activate fictional defects defined in FSMs are evaluated. The
presented strategy outperforms the baseline in most problem configurations. Out
of the six analyzed variants, three give the best results even though a
universal best performer is hard to identify. Depending on the application of
the FSM, the strategy and evaluation presented in this paper are applicable
both in testing functional and non-functional software requirements.
|
[
{
"created": "Thu, 17 Mar 2022 20:16:45 GMT",
"version": "v1"
},
{
"created": "Sun, 3 Apr 2022 08:34:34 GMT",
"version": "v2"
}
] |
2022-04-05
|
[
[
"Rechtberger",
"Vaclav",
""
],
[
"Bures",
"Miroslav",
""
],
[
"Ahmed",
"Bestoun S.",
""
],
[
"Belkhier",
"Youcef",
""
],
[
"Nema",
"Jiri",
""
],
[
"Schvach",
"Hynek",
""
]
] |
Model-based Testing (MBT) is an effective approach for testing when parts of a system-under-test have the characteristics of a finite state machine (FSM). Despite various strategies in the literature on this topic, little work exists to handle special testing situations. More specifically, when concurrently: (1) the test paths can start and end only in defined states of the FSM, (2) a prioritization mechanism is needed that requires only defined states and transitions of the FSM to be visited by test cases, and (3) the test paths must be in a given length range, not necessarily of explicit uniform length. This paper presents a test generation strategy that satisfies all these requirements. A concurrent combination of these requirements is highly practical for real industrial testing. Six variants of possible algorithms to implement this strategy are described. Using a mixture of 180 problem instances from real automotive and defense projects and artificially generated FSMs, all variants are compared with a baseline strategy based on an established N-switch coverage concept modification. Various properties of the generated test paths and their potential to activate fictional defects defined in FSMs are evaluated. The presented strategy outperforms the baseline in most problem configurations. Out of the six analyzed variants, three give the best results even though a universal best performer is hard to identify. Depending on the application of the FSM, the strategy and evaluation presented in this paper are applicable both in testing functional and non-functional software requirements.
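One way to make the three concurrent requirements concrete is the toy sketch below: it enumerates FSM paths that start and end in allowed states and fall within a length range, then greedily selects paths covering priority transitions. This is a hedged illustration of the problem setting, not any of the paper's six variants; the FSM and constraints are invented examples.

# A hedged sketch of prioritized variable-length path generation (toy FSM).
fsm = {"A": ["B", "C"], "B": ["C", "A"], "C": ["A"]}
start_end = {"A"}                     # paths may start/end only here
priority = {("B", "C"), ("C", "A")}   # transitions that must be covered
min_len, max_len = 2, 4               # path length range (in transitions)

def paths(state, path):
    if len(path) - 1 >= min_len and state in start_end:
        yield list(path)
    if len(path) - 1 == max_len:
        return
    for nxt in fsm[state]:
        path.append(nxt)
        yield from paths(nxt, path)
        path.pop()

candidates = [p for s in start_end for p in paths(s, [s])]

# Greedy prioritized selection: repeatedly take the path covering the most
# not-yet-covered priority transitions.
selected, uncovered = [], set(priority)
while uncovered:
    best = max(candidates, key=lambda p: len(uncovered & set(zip(p, p[1:]))))
    gain = uncovered & set(zip(best, best[1:]))
    if not gain:
        break  # remaining priorities unreachable under the constraints
    selected.append(best)
    uncovered -= gain
print(selected)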
|
2312.10440
|
Rhea Sukthanker
|
Rhea Sanjay Sukthanker, Arjun Krishnakumar, Mahmoud Safari, Frank
Hutter
|
Weight-Entanglement Meets Gradient-Based Neural Architecture Search
| null | null | null | null |
cs.LG cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Weight sharing is a fundamental concept in neural architecture search (NAS),
enabling gradient-based methods to explore cell-based architecture spaces
significantly faster than traditional blackbox approaches. In parallel, weight
\emph{entanglement} has emerged as a technique for intricate parameter sharing
among architectures within macro-level search spaces. Since weight-entanglement
poses compatibility challenges for gradient-based NAS methods, these two
paradigms have largely developed independently in parallel sub-communities.
This paper aims to bridge the gap
between these sub-communities by proposing a novel scheme to adapt
gradient-based methods for weight-entangled spaces. This enables us to conduct
an in-depth comparative assessment and analysis of the performance of
gradient-based NAS in weight-entangled search spaces. Our findings reveal that
this integration of weight-entanglement and gradient-based NAS brings forth the
various benefits of gradient-based methods (enhanced performance, improved
supernet training properties and superior any-time performance), while
preserving the memory efficiency of weight-entangled spaces. The code for our
work is openly accessible
\href{https://anonymous.4open.science/r/TangleNAS-527C}{here}
|
[
{
"created": "Sat, 16 Dec 2023 13:15:44 GMT",
"version": "v1"
}
] |
2023-12-19
|
[
[
"Sukthanker",
"Rhea Sanjay",
""
],
[
"Krishnakumar",
"Arjun",
""
],
[
"Safari",
"Mahmoud",
""
],
[
"Hutter",
"Frank",
""
]
] |
Weight sharing is a fundamental concept in neural architecture search (NAS), enabling gradient-based methods to explore cell-based architecture spaces significantly faster than traditional blackbox approaches. In parallel, weight \emph{entanglement} has emerged as a technique for intricate parameter sharing among architectures within macro-level search spaces. Since weight-entanglement poses compatibility challenges for gradient-based NAS methods, these two paradigms have largely developed independently in parallel sub-communities. This paper aims to bridge the gap between these sub-communities by proposing a novel scheme to adapt gradient-based methods for weight-entangled spaces. This enables us to conduct an in-depth comparative assessment and analysis of the performance of gradient-based NAS in weight-entangled search spaces. Our findings reveal that this integration of weight-entanglement and gradient-based NAS brings forth the various benefits of gradient-based methods (enhanced performance, improved supernet training properties and superior any-time performance), while preserving the memory efficiency of weight-entangled spaces. The code for our work is openly accessible \href{https://anonymous.4open.science/r/TangleNAS-527C}{here}
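To make weight entanglement concrete, the sketch below lets candidate output widths of a linear layer all slice one shared "superweight", and mixes the candidates with softmaxed architecture parameters as gradient-based NAS does. The layer sizes, widths, and the single-layer setting are illustrative assumptions, not the paper's scheme.

# A minimal sketch of weight entanglement with gradient-based mixing.
import torch
import torch.nn as nn
import torch.nn.functional as F

D_IN, D_MAX = 64, 64
widths = [16, 32, 64]                               # candidate output widths
W = nn.Parameter(0.01 * torch.randn(D_IN, D_MAX))   # shared superweight
alpha = nn.Parameter(torch.zeros(len(widths)))      # architecture parameters

def entangled_linear(x):
    probs = F.softmax(alpha, dim=0)
    out = torch.zeros(x.shape[0], D_MAX)
    for p, w in zip(probs, widths):
        y = x @ W[:, :w]                            # candidate = a slice of W
        out = out + p * F.pad(y, (0, D_MAX - w))    # zero-pad to common width
    return out

x = torch.randn(8, D_IN)
loss = entangled_linear(x).pow(2).mean()
loss.backward()  # gradients reach both the shared weights and alpha
print(W.grad.norm().item() > 0, alpha.grad.norm().item() > 0)  # True True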
|
2202.00717
|
Tsung-Wei Huang
|
Cheng-Hsiang Chiu, Tsung-Wei Huang, Zizheng Guo, and Yibo Lin
|
Pipeflow: An Efficient Task-Parallel Pipeline Programming Framework
using Modern C++
| null | null | null | null |
cs.DC
|
http://creativecommons.org/licenses/by/4.0/
|
Pipeline is a fundamental parallel programming pattern. Mainstream pipeline
programming frameworks count on data abstractions to perform pipeline
scheduling. This design is convenient for data-centric pipeline applications
but inefficient for algorithms that only exploit task parallelism in pipeline.
As a result, we introduce a new task-parallel pipeline programming framework
called Pipeflow. Pipeflow does not design yet another data abstraction but
focuses on the pipeline scheduling itself, enabling more efficient
implementation of task-parallel pipeline algorithms than existing frameworks.
We have evaluated Pipeflow on both micro-benchmarks and real-world
applications. As an example, Pipeflow runs 24% and 10% faster than oneTBB in
a VLSI placement workload and a timing analysis workload, respectively, both
of which adopt pipeline parallelism to speed up their runtimes.
|
[
{
"created": "Tue, 1 Feb 2022 19:16:16 GMT",
"version": "v1"
}
] |
2022-02-03
|
[
[
"Chiu",
"Cheng-Hsiang",
""
],
[
"Huang",
"Tsung-Wei",
""
],
[
"Guo",
"Zizheng",
""
],
[
"Lin",
"Yibo",
""
]
] |
Pipeline is a fundamental parallel programming pattern. Mainstream pipeline programming frameworks count on data abstractions to perform pipeline scheduling. This design is convenient for data-centric pipeline applications but inefficient for algorithms that only exploit task parallelism in pipeline. As a result, we introduce a new task-parallel pipeline programming framework called Pipeflow. Pipeflow does not design yet another data abstraction but focuses on the pipeline scheduling itself, enabling more efficient implementation of task-parallel pipeline algorithms than existing frameworks. We have evaluated Pipeflow on both micro-benchmarks and real-world applications. As an example, Pipeflow runs 24% and 10% faster than oneTBB in a VLSI placement workload and a timing analysis workload, respectively, both of which adopt pipeline parallelism to speed up their runtimes.
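The task-parallel pipeline pattern itself can be sketched in a few lines. The Python stand-in below (Pipeflow itself is a C++ framework) chains stages as plain callbacks over queues, with no data abstraction imposed by the scheduler; everything here is an illustrative toy, not Pipeflow's API.

# A hedged Python stand-in for the task-parallel pipeline pattern.
import queue
import threading

NUM_TOKENS = 5
SENTINEL = object()

def make_stage(fn, inbox, outbox):
    def run():
        while True:
            item = inbox.get()
            if item is SENTINEL:
                if outbox is not None:
                    outbox.put(SENTINEL)   # propagate shutdown downstream
                return
            result = fn(item)
            if outbox is not None:
                outbox.put(result)
    return threading.Thread(target=run)

q1, q2, q3 = queue.Queue(), queue.Queue(), queue.Queue()
stages = [
    make_stage(lambda t: t * 2, q1, q2),                  # stage 1
    make_stage(lambda t: t + 1, q2, q3),                  # stage 2
    make_stage(lambda t: print("token:", t), q3, None),   # stage 3
]
for s in stages:
    s.start()
for token in range(NUM_TOKENS):
    q1.put(token)
q1.put(SENTINEL)
for s in stages:
    s.join()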
|
2104.06773
|
Nermin Samet
|
Nermin Samet, Samet Hicsonmez, Emre Akbas
|
HoughNet: Integrating near and long-range evidence for visual detection
|
accepted to the IEEE Transactions on Pattern Analysis and Machine
Intelligence (TPAMI). arXiv admin note: substantial text overlap with
arXiv:2007.02355
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents HoughNet, a one-stage, anchor-free, voting-based,
bottom-up object detection method. Inspired by the Generalized Hough Transform,
HoughNet determines the presence of an object at a certain location by the sum
of the votes cast on that location. Votes are collected from both near and
long-distance locations based on a log-polar vote field. Thanks to this voting
mechanism, HoughNet is able to integrate both near and long-range,
class-conditional evidence for visual recognition, thereby generalizing and
enhancing current object detection methodology, which typically relies on only
local evidence. On the COCO dataset, HoughNet's best model achieves $46.4$ $AP$
(and $65.1$ $AP_{50}$), performing on par with the state-of-the-art in
bottom-up object detection and outperforming most major one-stage and two-stage
methods. We further validate the effectiveness of our proposal in other visual
detection tasks, namely, video object detection, instance segmentation, 3D
object detection and keypoint detection for human pose estimation, and an
additional "labels to photo" image generation task, where the integration of
our voting module consistently improves performance in all cases. Code is
available at https://github.com/nerminsamet/houghnet.
|
[
{
"created": "Wed, 14 Apr 2021 11:05:29 GMT",
"version": "v1"
},
{
"created": "Wed, 17 Aug 2022 14:58:51 GMT",
"version": "v2"
}
] |
2022-08-19
|
[
[
"Samet",
"Nermin",
""
],
[
"Hicsonmez",
"Samet",
""
],
[
"Akbas",
"Emre",
""
]
] |
This paper presents HoughNet, a one-stage, anchor-free, voting-based, bottom-up object detection method. Inspired by the Generalized Hough Transform, HoughNet determines the presence of an object at a certain location by the sum of the votes cast on that location. Votes are collected from both near and long-distance locations based on a log-polar vote field. Thanks to this voting mechanism, HoughNet is able to integrate both near and long-range, class-conditional evidence for visual recognition, thereby generalizing and enhancing current object detection methodology, which typically relies on only local evidence. On the COCO dataset, HoughNet's best model achieves $46.4$ $AP$ (and $65.1$ $AP_{50}$), performing on par with the state-of-the-art in bottom-up object detection and outperforming most major one-stage and two-stage methods. We further validate the effectiveness of our proposal in other visual detection tasks, namely, video object detection, instance segmentation, 3D object detection and keypoint detection for human pose estimation, and an additional "labels to photo" image generation task, where the integration of our voting module consistently improves performance in all cases. Code is available at https://github.com/nerminsamet/houghnet.
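The voting mechanism can be illustrated with a radically simplified, radial-only toy: each piece of evidence spreads its score over log-spaced rings around it, so both near and far locations receive distance-discounted votes. Real HoughNet uses a full log-polar field with angular bins inside a deep detector; the grid size and evidence values below are invented.

# A toy, radial-only simplification of HoughNet-style log-polar voting.
import numpy as np

H, W = 64, 64
votes = np.zeros((H, W))
evidence = [(20, 20, 0.9), (40, 45, 0.6)]   # (row, col, score), illustrative

rows, cols = np.mgrid[0:H, 0:W]
n_rings = 7
for r0, c0, score in evidence:
    dist = np.hypot(rows - r0, cols - c0)
    ring = np.clip(np.log2(dist + 1).astype(int), 0, n_rings - 1)
    for b in range(n_rings):
        mask = ring == b
        if mask.any():
            votes[mask] += score / mask.sum()  # share the vote within a ring

peak = np.unravel_index(votes.argmax(), votes.shape)
print("strongest object hypothesis at", peak)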
|
1706.08461
|
Lorenzo Sabattini
|
Lorenzo Sabattini, Valeria Villani, Julia N. Czerniak, Alexander
Mertens, Cesare Fantuzzi
|
Methodological Approach for the Design of a Complex Inclusive
Human-Machine System
|
Proceedings of the IEEE Conference on Automation Science and
Engineering (CASE) 2017
| null | null | null |
cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Modern industrial automatic machines and robotic cells are equipped with
highly complex human-machine interfaces (HMIs) that often prevent human
operators from using the automatic systems effectively. In particular, this
applies to vulnerable users, such as those with low experience or education
level, the elderly and the disabled. To tackle this issue, it becomes necessary
to design user-oriented HMIs, which adapt to the capabilities and skills of
users, thus compensating their limitations and taking full advantage of their
knowledge. In this paper, we propose a methodological approach to the design of
complex adaptive human-machine systems that might be inclusive of all users, in
particular the vulnerable ones. The proposed approach takes into account both
the technical requirements and the requirements for ethical, legal and social
implications (ELSI) for the design of automatic systems. The technical
requirements derive from a thorough analysis of three use cases taken from the
European project INCLUSIVE. To achieve the ELSI requirements, the MEESTAR
approach is combined with the specific legal issues for occupational systems
and requirements of the target users.
|
[
{
"created": "Mon, 26 Jun 2017 16:30:20 GMT",
"version": "v1"
}
] |
2017-06-27
|
[
[
"Sabattini",
"Lorenzo",
""
],
[
"Villani",
"Valeria",
""
],
[
"Czerniak",
"Julia N.",
""
],
[
"Mertens",
"Alexander",
""
],
[
"Fantuzzi",
"Cesare",
""
]
] |
Modern industrial automatic machines and robotic cells are equipped with highly complex human-machine interfaces (HMIs) that often prevent human operators from using the automatic systems effectively. In particular, this applies to vulnerable users, such as those with low experience or education level, the elderly and the disabled. To tackle this issue, it becomes necessary to design user-oriented HMIs, which adapt to the capabilities and skills of users, thus compensating their limitations and taking full advantage of their knowledge. In this paper, we propose a methodological approach to the design of complex adaptive human-machine systems that might be inclusive of all users, in particular the vulnerable ones. The proposed approach takes into account both the technical requirements and the requirements for ethical, legal and social implications (ELSI) for the design of automatic systems. The technical requirements derive from a thorough analysis of three use cases taken from the European project INCLUSIVE. To achieve the ELSI requirements, the MEESTAR approach is combined with the specific legal issues for occupational systems and requirements of the target users.
|
2111.08164
|
Dongran Yu
|
Dongran Yu, Bo Yang, Dayou Liu, Hui Wang and Shirui Pan
|
A Survey on Neural-symbolic Learning Systems
| null | null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In recent years, neural systems have demonstrated highly effective learning
ability and superior perception intelligence. However, they have been found to
lack effective reasoning and cognitive ability. On the other hand, symbolic
systems exhibit exceptional cognitive intelligence but suffer from poor
learning capabilities when compared to neural systems. Recognizing the
advantages and disadvantages of both methodologies, an ideal solution emerges:
combining neural systems and symbolic systems to create neural-symbolic
learning systems that possess powerful perception and cognition. The purpose of
this paper is to survey the advancements in neural-symbolic learning systems
from four distinct perspectives: challenges, methods, applications, and future
directions. By doing so, this research aims to propel this emerging field
forward, offering researchers a comprehensive and holistic overview. This
overview will not only highlight the current state-of-the-art but also identify
promising avenues for future research.
|
[
{
"created": "Wed, 10 Nov 2021 06:26:40 GMT",
"version": "v1"
},
{
"created": "Sun, 20 Nov 2022 12:01:10 GMT",
"version": "v2"
},
{
"created": "Sun, 25 Jun 2023 01:20:49 GMT",
"version": "v3"
}
] |
2023-06-27
|
[
[
"Yu",
"Dongran",
""
],
[
"Yang",
"Bo",
""
],
[
"Liu",
"Dayou",
""
],
[
"Wang",
"Hui",
""
],
[
"Pan",
"Shirui",
""
]
] |
In recent years, neural systems have demonstrated highly effective learning ability and superior perception intelligence. However, they have been found to lack effective reasoning and cognitive ability. On the other hand, symbolic systems exhibit exceptional cognitive intelligence but suffer from poor learning capabilities when compared to neural systems. Recognizing the advantages and disadvantages of both methodologies, an ideal solution emerges: combining neural systems and symbolic systems to create neural-symbolic learning systems that possess powerful perception and cognition. The purpose of this paper is to survey the advancements in neural-symbolic learning systems from four distinct perspectives: challenges, methods, applications, and future directions. By doing so, this research aims to propel this emerging field forward, offering researchers a comprehensive and holistic overview. This overview will not only highlight the current state-of-the-art but also identify promising avenues for future research.
|
1305.7332
|
Jan Kr\v{c}\'al
|
Holger Hermanns, Jan Kr\v{c}\'al, Jan K\v{r}et\'insk\'y
|
Compositional Verification and Optimization of Interactive Markov Chains
| null | null |
10.1007/978-3-642-40184-8_26
| null |
cs.LO cs.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Interactive Markov chains (IMC) are compositional behavioural models
extending labelled transition systems and continuous-time Markov chains. We
provide a framework and algorithms for compositional verification and
optimization of IMC with respect to time-bounded properties. Firstly, we give a
specification formalism for IMC. Secondly, given a time-bounded property, an
IMC component and the assumption that its unknown environment satisfies a given
specification, we synthesize a scheduler for the component optimizing the
probability that the property is satisfied in any such environment.
|
[
{
"created": "Fri, 31 May 2013 09:18:14 GMT",
"version": "v1"
},
{
"created": "Fri, 7 Jun 2013 09:06:50 GMT",
"version": "v2"
},
{
"created": "Wed, 4 Dec 2013 11:24:55 GMT",
"version": "v3"
}
] |
2013-12-05
|
[
[
"Hermanns",
"Holger",
""
],
[
"Krčál",
"Jan",
""
],
[
"Křetínský",
"Jan",
""
]
] |
Interactive Markov chains (IMC) are compositional behavioural models extending labelled transition systems and continuous-time Markov chains. We provide a framework and algorithms for compositional verification and optimization of IMC with respect to time-bounded properties. Firstly, we give a specification formalism for IMC. Secondly, given a time-bounded property, an IMC component and the assumption that its unknown environment satisfies a given specification, we synthesize a scheduler for the component optimizing the probability that the property is satisfied in any such environment.
|
2002.06712
|
Hossein Boomari
|
Hossein Boomari and Soheila Farokhi
|
Computing Boundary Cycle of a Pseudo-Triangle Polygon from its
Visibility Graph
| null | null | null | null |
cs.CG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The visibility graph of a simple polygon is a graph on the same vertex set in
which there is an edge between a pair of vertices if and only if the segment
through them lies completely inside the polygon. Each pair of adjacent vertices
on the boundary of the polygon is assumed to be visible. Therefore, the
visibility graph of each polygon always contains its boundary edges. This
implies that we always have a Hamiltonian cycle in a visibility graph, which
determines the order of vertices on the boundary of the corresponding polygon.
In this paper, we propose a polynomial time algorithm for determining such a
Hamiltonian cycle for a pseudo-triangle polygon from its visibility graph.
|
[
{
"created": "Sun, 16 Feb 2020 23:42:08 GMT",
"version": "v1"
}
] |
2020-02-18
|
[
    [
      "Boomari",
      "Hossein",
      ""
    ],
    [
      "Farokhi",
      "Soheila",
      ""
    ]
] |
The visibility graph of a simple polygon is a graph on the same vertex set in which there is an edge between a pair of vertices if and only if the segment through them lies completely inside the polygon. Each pair of adjacent vertices on the boundary of the polygon is assumed to be visible. Therefore, the visibility graph of each polygon always contains its boundary edges. This implies that we always have a Hamiltonian cycle in a visibility graph, which determines the order of vertices on the boundary of the corresponding polygon. In this paper, we propose a polynomial time algorithm for determining such a Hamiltonian cycle for a pseudo-triangle polygon from its visibility graph.
|
2312.06336
|
Mohamed Manzour
|
M. Manzour, A. Ballardini, R. Izquierdo, M. A. Sotelo
|
Vehicle Lane Change Prediction based on Knowledge Graph Embeddings and
Bayesian Inference
| null | null | null | null |
cs.LG cs.AI cs.NE
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Prediction of vehicle lane change maneuvers has gained a lot of momentum in
the last few years. Some recent works focus on predicting a vehicle's intention
by predicting its trajectory first. This is not enough, as it ignores the
context of the scene and the state of the surrounding vehicles (as they might
be risky to the target vehicle). Other works assessed the risk posed by the
surrounding vehicles only by considering their existence around the target
vehicle, or by considering the distance and relative velocities between them
and the target vehicle as two separate numerical features. In this work, we
propose a solution that leverages Knowledge Graphs (KGs) to anticipate lane
changes based on linguistic contextual information in a way that goes well
beyond the capabilities of current perception systems. Our solution takes the
Time To Collision (TTC) with surrounding vehicles as input to assess the risk
on the target vehicle. Moreover, our KG is trained on the HighD dataset using
the TransE model to obtain the Knowledge Graph Embeddings (KGE). Then, we apply
Bayesian inference on top of the KG using the embeddings learned during
training. Finally, the model can predict lane changes two seconds ahead with
97.95% f1-score, which surpassed the state of the art, and three seconds before
changing lanes with 93.60% f1-score.
|
[
{
"created": "Mon, 11 Dec 2023 12:33:44 GMT",
"version": "v1"
}
] |
2023-12-12
|
[
[
"Manzour",
"M.",
""
],
[
"Ballardini",
"A.",
""
],
[
"Izquierdo",
"R.",
""
],
[
"Sotelo",
"M. A.",
""
]
] |
Prediction of vehicle lane change maneuvers has gained a lot of momentum in the last few years. Some recent works focus on predicting a vehicle's intention by predicting its trajectory first. This is not enough, as it ignores the context of the scene and the state of the surrounding vehicles (as they might be risky to the target vehicle). Other works assessed the risk posed by the surrounding vehicles only by considering their existence around the target vehicle, or by considering the distance and relative velocities between them and the target vehicle as two separate numerical features. In this work, we propose a solution that leverages Knowledge Graphs (KGs) to anticipate lane changes based on linguistic contextual information in a way that goes well beyond the capabilities of current perception systems. Our solution takes the Time To Collision (TTC) with surrounding vehicles as input to assess the risk on the target vehicle. Moreover, our KG is trained on the HighD dataset using the TransE model to obtain the Knowledge Graph Embeddings (KGE). Then, we apply Bayesian inference on top of the KG using the embeddings learned during training. Finally, the model can predict lane changes two seconds ahead with 97.95% f1-score, which surpassed the state of the art, and three seconds before changing lanes with 93.60% f1-score.
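Two of the ingredients named above can be sketched numerically: Time To Collision as the risk input, and a Bayes update from a risk category to a lane-change belief. Every probability, threshold, and number below is an illustrative assumption, not a value from the paper (which performs inference over learned KG embeddings).

# A hedged sketch of TTC-based risk plus a simple Bayes update (toy values).
import math

def ttc(gap_m, closing_speed_mps):
    """Time To Collision; infinite when the vehicles are not closing in."""
    return gap_m / closing_speed_mps if closing_speed_mps > 0 else math.inf

def risk_level(t):
    return "high" if t < 3 else "medium" if t < 6 else "low"

# Assumed likelihoods P(risk | lane_change) and P(risk | keep_lane):
p_risk_given_lc = {"high": 0.6, "medium": 0.3, "low": 0.1}
p_risk_given_keep = {"high": 0.1, "medium": 0.3, "low": 0.6}
p_lc = 0.2  # assumed prior probability of a lane change

observed = risk_level(ttc(gap_m=18.0, closing_speed_mps=7.5))  # ttc = 2.4 s
posterior = (p_risk_given_lc[observed] * p_lc) / (
    p_risk_given_lc[observed] * p_lc
    + p_risk_given_keep[observed] * (1 - p_lc))
print(observed, round(posterior, 3))  # high risk -> lane-change belief rises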
|
1208.5933
|
Stephan Merz
|
Denis Cousineau and Damien Doligez and Leslie Lamport and Stephan Merz
and Daniel Ricketts and Hern\'an Vanzetto
|
TLA+ Proofs
|
A shorter version of this article appeared in the proceedings of the
conference Formal Methods 2012 (FM 2012, Paris, France, Springer LNCS 7436,
pp. 147-154)
| null | null | null |
cs.SE cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
TLA+ is a specification language based on standard set theory and temporal
logic that has constructs for hierarchical proofs. We describe how to write
TLA+ proofs and check them with TLAPS, the TLA+ Proof System. We use Peterson's
mutual exclusion algorithm as a simple example to describe the features of
TLAPS and show how it and the Toolbox (an IDE for TLA+) help users to manage
large, complex proofs.
|
[
{
"created": "Wed, 29 Aug 2012 14:44:39 GMT",
"version": "v1"
}
] |
2012-08-30
|
[
[
"Cousineau",
"Denis",
""
],
[
"Doligez",
"Damien",
""
],
[
"Lamport",
"Leslie",
""
],
[
"Merz",
"Stephan",
""
],
[
"Ricketts",
"Daniel",
""
],
[
"Vanzetto",
"Hernán",
""
]
] |
TLA+ is a specification language based on standard set theory and temporal logic that has constructs for hierarchical proofs. We describe how to write TLA+ proofs and check them with TLAPS, the TLA+ Proof System. We use Peterson's mutual exclusion algorithm as a simple example to describe the features of TLAPS and show how it and the Toolbox (an IDE for TLA+) help users to manage large, complex proofs.
|
1706.04719
|
Yiqing Guo
|
Yiqing Guo, Xiuping Jia, and David Paull
|
Effective Sequential Classifier Training for SVM-based Multitemporal
Remote Sensing Image Classification
| null | null |
10.1109/TIP.2018.2808767
| null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The explosive availability of remote sensing images has challenged supervised
classification algorithms such as Support Vector Machines (SVM), as training
samples tend to be highly limited due to the expensive and laborious task of
ground truthing. The temporal correlation and spectral similarity between
multitemporal images have opened up an opportunity to alleviate this problem.
In this study, an SVM-based Sequential Classifier Training (SCT-SVM) approach is
proposed for multitemporal remote sensing image classification. The approach
leverages the classifiers of previous images to reduce the required number of
training samples for the classifier training of an incoming image. For each
incoming image, a rough classifier is first predicted based on the temporal
trend of a set of previous classifiers. The predicted classifier is then
fine-tuned into a more accurate position with current training samples. This
approach can be applied progressively to sequential image data, with only a
small number of training samples being required from each image. Experiments
were conducted with Sentinel-2A multitemporal data over an agricultural area in
Australia. Results showed that the proposed SCT-SVM achieved better
classification accuracies compared with two state-of-the-art model transfer
algorithms. When training data are insufficient, the overall classification
accuracy of the incoming image was improved from 76.18% to 94.02% with the
proposed SCT-SVM, compared with those obtained without the assistance from
previous images. These results demonstrate that leveraging a priori
information from previous images can provide advantageous assistance for later
images in multitemporal image classification.
|
[
{
"created": "Thu, 15 Jun 2017 02:01:44 GMT",
"version": "v1"
},
{
"created": "Wed, 17 Jan 2018 02:24:47 GMT",
"version": "v2"
}
] |
2018-07-17
|
[
[
"Guo",
"Yiqing",
""
],
[
"Jia",
"Xiuping",
""
],
[
"Paull",
"David",
""
]
] |
The explosive availability of remote sensing images has challenged supervised classification algorithms such as Support Vector Machines (SVM), as training samples tend to be highly limited due to the expensive and laborious task of ground truthing. The temporal correlation and spectral similarity between multitemporal images have opened up an opportunity to alleviate this problem. In this study, an SVM-based Sequential Classifier Training (SCT-SVM) approach is proposed for multitemporal remote sensing image classification. The approach leverages the classifiers of previous images to reduce the required number of training samples for the classifier training of an incoming image. For each incoming image, a rough classifier is first predicted based on the temporal trend of a set of previous classifiers. The predicted classifier is then fine-tuned into a more accurate position with current training samples. This approach can be applied progressively to sequential image data, with only a small number of training samples being required from each image. Experiments were conducted with Sentinel-2A multitemporal data over an agricultural area in Australia. Results showed that the proposed SCT-SVM achieved better classification accuracies compared with two state-of-the-art model transfer algorithms. When training data are insufficient, the overall classification accuracy of the incoming image was improved from 76.18% to 94.02% with the proposed SCT-SVM, compared with those obtained without the assistance from previous images. These results demonstrate that leveraging a priori information from previous images can provide advantageous assistance for later images in multitemporal image classification.
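The predict-then-fine-tune idea can be sketched with linear classifiers: extrapolate the weight vector from the two previous time steps, then warm-start a fit on a handful of current samples. The drifting toy data and the linear trend model are illustrative assumptions; scikit-learn's SGDClassifier (whose fit genuinely accepts coef_init/intercept_init) stands in for the paper's SVM machinery.

# A hedged sketch of sequential classifier prediction plus warm-started
# fine-tuning (data, drift model, and hyperparameters are illustrative).
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)

def make_epoch(shift):
    """Two classes whose separating boundary drifts over time."""
    X0 = rng.normal(loc=[-2 + shift, 0], scale=0.5, size=(50, 2))
    X1 = rng.normal(loc=[2 + shift, 0], scale=0.5, size=(50, 2))
    return np.vstack([X0, X1]), np.array([0] * 50 + [1] * 50)

# Linear classifiers trained on the two previous images.
w, b = [], []
for t in range(2):
    X, y = make_epoch(shift=0.5 * t)
    clf = SGDClassifier(loss="hinge", random_state=0).fit(X, y)
    w.append(clf.coef_.copy())
    b.append(clf.intercept_.copy())

# Predict the incoming classifier from the temporal trend (linear
# extrapolation), then fine-tune it with a handful of current samples.
w_pred = w[1] + (w[1] - w[0])
b_pred = b[1] + (b[1] - b[0])
X_new, y_new = make_epoch(shift=1.0)
few = np.concatenate([np.arange(5), 50 + np.arange(5)])  # 5 samples per class
clf_new = SGDClassifier(loss="hinge", random_state=0)
clf_new.fit(X_new[few], y_new[few], coef_init=w_pred, intercept_init=b_pred)
print("accuracy on the incoming image:", clf_new.score(X_new, y_new))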
|
1809.00800
|
Sho Yokoi
|
Sho Yokoi, Sosuke Kobayashi, Kenji Fukumizu, Jun Suzuki, Kentaro Inui
|
Pointwise HSIC: A Linear-Time Kernelized Co-occurrence Norm for Sparse
Linguistic Expressions
|
Accepted by EMNLP 2018
|
EMNLP 2018
|
10.18653/v1/D18-1203
| null |
cs.CL stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we propose a new kernel-based co-occurrence measure that can
be applied to sparse linguistic expressions (e.g., sentences) with a very short
learning time, as an alternative to pointwise mutual information (PMI). Just
as PMI is derived from mutual information, we derive this new measure from the
Hilbert--Schmidt independence criterion (HSIC); thus, we call the new measure
the pointwise HSIC (PHSIC). PHSIC can be interpreted as a smoothed variant of
PMI that allows various similarity metrics (e.g., sentence embeddings) to be
plugged in as kernels. Moreover, PHSIC can be estimated by simple and fast
(linear in the size of the data) matrix calculations regardless of whether we
use linear or nonlinear kernels. Empirically, in a dialogue response selection
task, PHSIC is learned thousands of times faster than an RNN-based PMI while
outperforming PMI in accuracy. In addition, we also demonstrate that PHSIC is
beneficial as a criterion of a data selection task for machine translation
owing to its ability to give high (low) scores to a consistent (inconsistent)
pair with other pairs.
|
[
{
"created": "Tue, 4 Sep 2018 05:33:00 GMT",
"version": "v1"
}
] |
2020-10-13
|
[
[
"Yokoi",
"Sho",
""
],
[
"Kobayashi",
"Sosuke",
""
],
[
"Fukumizu",
"Kenji",
""
],
[
"Suzuki",
"Jun",
""
],
[
"Inui",
"Kentaro",
""
]
] |
In this paper, we propose a new kernel-based co-occurrence measure that can be applied to sparse linguistic expressions (e.g., sentences) with a very short learning time, as an alternative to pointwise mutual information (PMI). Just as PMI is derived from mutual information, we derive this new measure from the Hilbert--Schmidt independence criterion (HSIC); thus, we call the new measure the pointwise HSIC (PHSIC). PHSIC can be interpreted as a smoothed variant of PMI that allows various similarity metrics (e.g., sentence embeddings) to be plugged in as kernels. Moreover, PHSIC can be estimated by simple and fast (linear in the size of the data) matrix calculations regardless of whether we use linear or nonlinear kernels. Empirically, in a dialogue response selection task, PHSIC is learned thousands of times faster than an RNN-based PMI while outperforming PMI in accuracy. In addition, we also demonstrate that PHSIC is beneficial as a criterion of a data selection task for machine translation owing to its ability to give high (low) scores to a consistent (inconsistent) pair with other pairs.
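In the spirit of the covariance reading of pointwise HSIC, a heavily simplified estimator can score a pair (x, y) by the empirical covariance between its kernel similarities to the training pairs, in time linear in the data size. The toy embeddings and linear kernels below are assumptions; the paper's exact estimator and kernels may differ in detail.

# A hedged, simplified PHSIC-style estimator with linear kernels (toy data).
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 8
X = rng.normal(size=(n, d))             # e.g., embeddings of dialogue contexts
Y = X + 0.1 * rng.normal(size=(n, d))   # paired responses, correlated with X

def phsic_score(x, y, X, Y):
    kx = X @ x          # similarities k(x, x_i), linear kernel, O(n d) time
    ly = Y @ y          # similarities l(y, y_i), linear kernel
    return np.mean((kx - kx.mean()) * (ly - ly.mean()))

# Consistent pairs score high; mismatched pairs score near zero.
print("consistent pair:  ", round(phsic_score(X[0], Y[0], X, Y), 2))
print("inconsistent pair:", round(phsic_score(X[0], Y[123], X, Y), 2))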
|
2407.15283
|
Sheila Schoepp
|
Sheila Schoepp, Mehran Taghian, Shotaro Miwa, Yoshihiro Mitsuka,
Shadan Golestan, Osmar Za\"iane
|
Enhancing Hardware Fault Tolerance in Machines with Reinforcement
Learning Policy Gradient Algorithms
| null | null | null | null |
cs.LG cs.AI cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Industry is rapidly moving towards fully autonomous and interconnected
systems that can detect and adapt to changing conditions, including machine
hardware faults. Traditional methods for adding hardware fault tolerance to
machines involve duplicating components and algorithmically reconfiguring a
machine's processes when a fault occurs. However, the growing interest in
reinforcement learning-based robotic control offers a new perspective on
achieving hardware fault tolerance. To date, however, limited research has explored the
potential of these approaches for hardware fault tolerance in machines. This
paper investigates the potential of two state-of-the-art reinforcement learning
algorithms, Proximal Policy Optimization (PPO) and Soft Actor-Critic (SAC), to
enhance hardware fault tolerance in machines. We assess the performance of
these algorithms in two OpenAI Gym simulated environments, Ant-v2 and
FetchReach-v1. Robot models in these environments are subjected to six
simulated hardware faults. Additionally, we conduct an ablation study to
determine the optimal method for transferring an agent's knowledge, acquired
through learning in a normal (pre-fault) environment, to a (post-)fault
environment in a continual learning setting. Our results demonstrate that
reinforcement learning-based approaches can enhance hardware fault tolerance in
simulated machines, with adaptation occurring within minutes. Specifically, PPO
exhibits the fastest adaptation when retaining the knowledge within its models,
while SAC performs best when discarding all acquired knowledge. Overall, this
study highlights the potential of reinforcement learning-based approaches, such
as PPO and SAC, for hardware fault tolerance in machines. These findings pave
the way for the development of robust and adaptive machines capable of
effectively operating in real-world scenarios.
|
[
{
"created": "Sun, 21 Jul 2024 22:24:16 GMT",
"version": "v1"
}
] |
2024-07-23
|
[
[
"Schoepp",
"Sheila",
""
],
[
"Taghian",
"Mehran",
""
],
[
"Miwa",
"Shotaro",
""
],
[
"Mitsuka",
"Yoshihiro",
""
],
[
"Golestan",
"Shadan",
""
],
[
"Zaïane",
"Osmar",
""
]
] |
Industry is rapidly moving towards fully autonomous and interconnected systems that can detect and adapt to changing conditions, including machine hardware faults. Traditional methods for adding hardware fault tolerance to machines involve duplicating components and algorithmically reconfiguring a machine's processes when a fault occurs. However, the growing interest in reinforcement learning-based robotic control offers a new perspective on achieving hardware fault tolerance. To date, however, limited research has explored the potential of these approaches for hardware fault tolerance in machines. This paper investigates the potential of two state-of-the-art reinforcement learning algorithms, Proximal Policy Optimization (PPO) and Soft Actor-Critic (SAC), to enhance hardware fault tolerance in machines. We assess the performance of these algorithms in two OpenAI Gym simulated environments, Ant-v2 and FetchReach-v1. Robot models in these environments are subjected to six simulated hardware faults. Additionally, we conduct an ablation study to determine the optimal method for transferring an agent's knowledge, acquired through learning in a normal (pre-fault) environment, to a (post-)fault environment in a continual learning setting. Our results demonstrate that reinforcement learning-based approaches can enhance hardware fault tolerance in simulated machines, with adaptation occurring within minutes. Specifically, PPO exhibits the fastest adaptation when retaining the knowledge within its models, while SAC performs best when discarding all acquired knowledge. Overall, this study highlights the potential of reinforcement learning-based approaches, such as PPO and SAC, for hardware fault tolerance in machines. These findings pave the way for the development of robust and adaptive machines capable of effectively operating in real-world scenarios.
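The pre-fault-to-post-fault adaptation loop can be sketched with stable-baselines3: train PPO in a nominal environment, then keep its weights and continue learning after a simulated "fault". Pendulum-v1 with altered gravity is a lightweight stand-in for the paper's MuJoCo robots and its six hardware faults; everything here is illustrative.

# A hedged sketch of continual adaptation after a simulated fault.
import gymnasium as gym
from stable_baselines3 import PPO

normal_env = gym.make("Pendulum-v1")            # pre-fault environment
model = PPO("MlpPolicy", normal_env, verbose=0)
model.learn(total_timesteps=20_000)             # learn the nominal task

faulty_env = gym.make("Pendulum-v1", g=15.0)    # "fault": heavier gravity
model.set_env(faulty_env)                       # retain learned knowledge
model.learn(total_timesteps=10_000)             # adapt to the fault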
|
1709.01008
|
Raphael Toledo
|
Raphael R. Toledo, George Danezis, Isao Echizen
|
Mix-ORAM: Using delegate shuffles
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Oblivious RAM (ORAM) is a key technology for providing private storage and
querying on untrusted machines but is commonly seen as impractical due to the
high overhead of the re-randomization, called the eviction, the client incurs.
We propose in this work to securely delegate the eviction to semi-trusted third
parties to enable any client to accede the ORAM technology and present four
different designs inspired by mix-net technologies with reasonable periodic
costs.
|
[
{
"created": "Mon, 4 Sep 2017 15:45:41 GMT",
"version": "v1"
}
] |
2017-09-05
|
[
[
"Toledo",
"Raphael R.",
""
],
[
"Danezis",
"George",
""
],
[
"Echizen",
"Isao",
""
]
] |
Oblivious RAM (ORAM) is a key technology for providing private storage and querying on untrusted machines but is commonly seen as impractical due to the high overhead of the re-randomization, called the eviction, the client incurs. We propose in this work to securely delegate the eviction to semi-trusted third parties to enable any client to accede the ORAM technology and present four different designs inspired by mix-net technologies with reasonable periodic costs.
|
1911.07608
|
Ali Asgher Mansoor Habiby
|
Ali Asgher Mansoor Habiby and Ahamed Thoppu
|
Application of Reinforcement Learning for 5G Scheduling Parameter
Optimization
|
7 pages, 11 figures. Complete experiment conducted on a Live 5G
Network and live 5G site
| null | null | null |
cs.NI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
RF Network parametric optimization requires a wealth of experience and
knowledge to achieve the optimal balance between coverage, capacity, system
efficiency and customer experience from the telecom sites serving the users.
With 5G, the complications of Air interface scheduling have increased due to
the usage of massive MIMO, beamforming and introduction of higher modulation
schemes with varying numerologies. In this work, we tune a machine learning
model to "learn" the best combination of parameters for a given traffic profile
using Cross Entropy Method Reinforcement Learning and compare these with RF
Subject Matter Expert "SME" recommendations. This work is aimed towards
automatic parameter tuning and feature optimization by acting as a Self
Organizing Network module.
|
[
{
"created": "Mon, 21 Oct 2019 16:05:53 GMT",
"version": "v1"
}
] |
2019-11-19
|
[
[
"Habiby",
"Ali Asgher Mansoor",
""
],
[
"Thoppu",
"Ahamed",
""
]
] |
RF Network parametric optimization requires a wealth of experience and knowledge to achieve the optimal balance between coverage, capacity, system efficiency and customer experience from the telecom sites serving the users. With 5G, the complications of Air interface scheduling have increased due to the usage of massive MIMO, beamforming and introduction of higher modulation schemes with varying numerologies. In this work, we tune a machine learning model to "learn" the best combination of parameters for a given traffic profile using Cross Entropy Method Reinforcement Learning and compare these with RF Subject Matter Expert "SME" recommendations. This work is aimed towards automatic parameter tuning and feature optimization by acting as a Self Organizing Network module.
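The Cross Entropy Method itself is compact enough to sketch: sample parameter vectors from a Gaussian, score them with a KPI function, refit the Gaussian to the elite samples, and repeat. The KPI function below is a toy stand-in for measured network KPIs, and the parameter dimensionality is an assumption.

# A hedged sketch of Cross-Entropy Method search over scheduler parameters.
import numpy as np

rng = np.random.default_rng(0)
target = np.array([0.7, 0.3, 0.5])      # unknown "best" parameters (toy KPI)

def kpi(params):                         # stand-in for measured network KPIs
    return -np.sum((params - target) ** 2)

mu, sigma = np.zeros(3), np.ones(3)
n_samples, n_elite = 50, 10
for it in range(30):
    samples = rng.normal(mu, sigma, size=(n_samples, 3))
    elites = samples[np.argsort([kpi(s) for s in samples])[-n_elite:]]
    mu, sigma = elites.mean(axis=0), elites.std(axis=0) + 1e-3
print("recommended parameters:", np.round(mu, 3))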
|
2011.13336
|
Xidong Mu
|
Yuanwei Liu and Xidong Mu and Xiao Liu and Marco Di Renzo and Zhiguo
Ding and Robert Schober
|
Reconfigurable Intelligent Surface (RIS) Aided Multi-User Networks:
Interplay Between NOMA and RIS
|
14 pages, 5 figures, 1 table
|
in IEEE Wireless Communications, vol. 29, no. 2, pp. 169-176,
April 2022
|
10.1109/MWC.102.2100363
| null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This article focuses on the exploitation of reconfigurable intelligent
surfaces (RISs) in multi-user networks employing orthogonal multiple access
(OMA) or non-orthogonal multiple access (NOMA), with an emphasis on
investigating the interplay between NOMA and RIS. Depending on whether the RIS
reflection coefficients can be adjusted only once or multiple times during one
transmission, we distinguish between static and dynamic RIS configurations. In
particular, the capacity region of RIS aided single-antenna NOMA networks is
characterized and compared with the OMA rate region from an
information-theoretic perspective, revealing that the dynamic RIS configuration
is capacity-achieving. Then, the impact of the RIS deployment location on the
performance of different multiple access schemes is investigated, which reveals
that asymmetric and symmetric deployment strategies are preferable for NOMA and
OMA, respectively. Furthermore, for RIS aided multiple-antenna NOMA networks,
three novel joint active and passive beamformer designs are proposed based on
both beamformer based and cluster based strategies. Finally, open research
problems for RIS-NOMA networks are highlighted.
|
[
{
"created": "Thu, 26 Nov 2020 15:05:16 GMT",
"version": "v1"
},
{
"created": "Tue, 5 Oct 2021 21:22:02 GMT",
"version": "v2"
}
] |
2022-09-14
|
[
[
"Liu",
"Yuanwei",
""
],
[
"Mu",
"Xidong",
""
],
[
"Liu",
"Xiao",
""
],
[
"Di Renzo",
"Marco",
""
],
[
"Ding",
"Zhiguo",
""
],
[
"Schober",
"Robert",
""
]
] |
This article focuses on the exploitation of reconfigurable intelligent surfaces (RISs) in multi-user networks employing orthogonal multiple access (OMA) or non-orthogonal multiple access (NOMA), with an emphasis on investigating the interplay between NOMA and RIS. Depending on whether the RIS reflection coefficients can be adjusted only once or multiple times during one transmission, we distinguish between static and dynamic RIS configurations. In particular, the capacity region of RIS aided single-antenna NOMA networks is characterized and compared with the OMA rate region from an information-theoretic perspective, revealing that the dynamic RIS configuration is capacity-achieving. Then, the impact of the RIS deployment location on the performance of different multiple access schemes is investigated, which reveals that asymmetric and symmetric deployment strategies are preferable for NOMA and OMA, respectively. Furthermore, for RIS aided multiple-antenna NOMA networks, three novel joint active and passive beamformer designs are proposed based on both beamformer based and cluster based strategies. Finally, open research problems for RIS-NOMA networks are highlighted.
|
1605.05546
|
Bodhayan Roy
|
Ajit Arvind Diwan and Bodhayan Roy
|
Partitions of planar point sets into polygons
| null | null | null | null |
cs.CG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we characterize planar point sets that can be partitioned into
disjoint polygons of arbitrarily specified sizes. We provide an algorithm to
construct such a partition, if it exists, in polynomial time. We show that this
problem is equivalent to finding a specified $2$-factor in the visibility graph
of the point set. The characterization for the case where all cycles have
length $3$ also translates to finding a $K_3$-factor of the visibility graph of
the point set. We show that the generalized problem of finding a $K_k$-factor
of the visibility graph of a given point set for $k \geq 5$ is NP-hard.
|
[
{
"created": "Wed, 18 May 2016 12:21:39 GMT",
"version": "v1"
}
] |
2016-05-19
|
[
[
"Diwan",
"Ajit Arvind",
""
],
[
"Roy",
"Bodhayan",
""
]
] |
In this paper, we characterize planar point sets that can be partitioned into disjoint polygons of arbitrarily specified sizes. We provide an algorithm to construct such a partition, if it exists, in polynomial time. We show that this problem is equivalent to finding a specified $2$-factor in the visibility graph of the point set. The characterization for the case where all cycles have length $3$ also translates to finding a $K_3$-factor of the visibility graph of the point set. We show that the generalized problem of finding a $K_k$-factor of the visibility graph of a given point set for $k \geq 5$ is NP-hard.
|
2305.13653
|
Yang Bai
|
Yang Bai, Min Cao, Daming Gao, Ziqiang Cao, Chen Chen, Zhenfeng Fan,
Liqiang Nie, Min Zhang
|
RaSa: Relation and Sensitivity Aware Representation Learning for
Text-based Person Search
|
Accepted by IJCAI 2023. Code is available at
https://github.com/Flame-Chasers/RaSa
| null |
10.24963/ijcai.2023/62
| null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Text-based person search aims to retrieve the specified person images given a
textual description. The key to tackling such a challenging task is to learn
powerful multi-modal representations. Towards this, we propose a Relation and
Sensitivity aware representation learning method (RaSa), including two novel
tasks: Relation-Aware learning (RA) and Sensitivity-Aware learning (SA). On the
one hand, existing methods cluster representations of all positive pairs
without distinction and overlook the noise problem caused by weak positive
pairs, where the text and the paired image have noisy correspondences, thus
leading to overfitting. RA offsets the overfitting risk by introducing
a novel positive relation detection task (i.e., learning to distinguish strong
and weak positive pairs). On the other hand, learning invariant representations
under data augmentation (i.e., being insensitive to some transformations) is a
general practice for improving representation's robustness in existing methods.
Beyond that, we encourage the representation to perceive the sensitive
transformation by SA (i.e., learning to detect the replaced words), thus
promoting the representation's robustness. Experiments demonstrate that RaSa
outperforms existing state-of-the-art methods by 6.94%, 4.45% and 15.35% in
terms of Rank@1 on CUHK-PEDES, ICFG-PEDES and RSTPReid datasets, respectively.
Code is available at: https://github.com/Flame-Chasers/RaSa.
|
[
{
"created": "Tue, 23 May 2023 03:53:57 GMT",
"version": "v1"
}
] |
2023-12-04
|
[
[
"Bai",
"Yang",
""
],
[
"Cao",
"Min",
""
],
[
"Gao",
"Daming",
""
],
[
"Cao",
"Ziqiang",
""
],
[
"Chen",
"Chen",
""
],
[
"Fan",
"Zhenfeng",
""
],
[
"Nie",
"Liqiang",
""
],
[
"Zhang",
"Min",
""
]
] |
Text-based person search aims to retrieve the specified person images given a textual description. The key to tackling such a challenging task is to learn powerful multi-modal representations. Towards this, we propose a Relation and Sensitivity aware representation learning method (RaSa), including two novel tasks: Relation-Aware learning (RA) and Sensitivity-Aware learning (SA). On the one hand, existing methods cluster representations of all positive pairs without distinction and overlook the noise problem caused by weak positive pairs, where the text and the paired image have noisy correspondences, thus leading to overfitting. RA offsets the overfitting risk by introducing a novel positive relation detection task (i.e., learning to distinguish strong and weak positive pairs). On the other hand, learning invariant representations under data augmentation (i.e., being insensitive to some transformations) is a general practice for improving representation's robustness in existing methods. Beyond that, we encourage the representation to perceive the sensitive transformation by SA (i.e., learning to detect the replaced words), thus promoting the representation's robustness. Experiments demonstrate that RaSa outperforms existing state-of-the-art methods by 6.94%, 4.45% and 15.35% in terms of Rank@1 on CUHK-PEDES, ICFG-PEDES and RSTPReid datasets, respectively. Code is available at: https://github.com/Flame-Chasers/RaSa.
|
1903.03698
|
Vitchyr H. Pong
|
Vitchyr H. Pong, Murtaza Dalal, Steven Lin, Ashvin Nair, Shikhar Bahl,
Sergey Levine
|
Skew-Fit: State-Covering Self-Supervised Reinforcement Learning
|
ICML 2020. 8 pages, 8 figures; 9 pages appendix (6 additional
figures)
| null | null | null |
cs.LG cs.AI cs.RO stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Autonomous agents that must exhibit flexible and broad capabilities will need
to be equipped with large repertoires of skills. Defining each skill with a
manually-designed reward function limits this repertoire and imposes a manual
engineering burden. Self-supervised agents that set their own goals can
automate this process, but designing appropriate goal setting objectives can be
difficult, and often involves heuristic design decisions. In this paper, we
propose a formal exploration objective for goal-reaching policies that
maximizes state coverage. We show that this objective is equivalent to
maximizing goal reaching performance together with the entropy of the goal
distribution, where goals correspond to full state observations. To instantiate
this principle, we present an algorithm called Skew-Fit for learning
maximum-entropy goal distributions. We prove that, under regularity conditions,
Skew-Fit converges to a uniform distribution over the set of valid states, even
when we do not know this set beforehand. Our experiments show that combining
Skew-Fit for learning goal distributions with existing goal-reaching methods
outperforms a variety of prior methods on open-sourced visual goal-reaching
tasks. Moreover, we demonstrate that Skew-Fit enables a real-world robot to
learn to open a door, entirely from scratch, from pixels, and without any
manually-designed reward function.
|
[
{
"created": "Fri, 8 Mar 2019 23:32:17 GMT",
"version": "v1"
},
{
"created": "Fri, 31 May 2019 15:30:20 GMT",
"version": "v2"
},
{
"created": "Sun, 9 Feb 2020 20:24:12 GMT",
"version": "v3"
},
{
"created": "Tue, 4 Aug 2020 04:07:27 GMT",
"version": "v4"
}
] |
2020-08-05
|
[
[
"Pong",
"Vitchyr H.",
""
],
[
"Dalal",
"Murtaza",
""
],
[
"Lin",
"Steven",
""
],
[
"Nair",
"Ashvin",
""
],
[
"Bahl",
"Shikhar",
""
],
[
"Levine",
"Sergey",
""
]
] |
Autonomous agents that must exhibit flexible and broad capabilities will need to be equipped with large repertoires of skills. Defining each skill with a manually-designed reward function limits this repertoire and imposes a manual engineering burden. Self-supervised agents that set their own goals can automate this process, but designing appropriate goal setting objectives can be difficult, and often involves heuristic design decisions. In this paper, we propose a formal exploration objective for goal-reaching policies that maximizes state coverage. We show that this objective is equivalent to maximizing goal reaching performance together with the entropy of the goal distribution, where goals correspond to full state observations. To instantiate this principle, we present an algorithm called Skew-Fit for learning maximum-entropy goal distributions. We prove that, under regularity conditions, Skew-Fit converges to a uniform distribution over the set of valid states, even when we do not know this set beforehand. Our experiments show that combining Skew-Fit for learning goal distributions with existing goal-reaching methods outperforms a variety of prior methods on open-sourced visual goal-reaching tasks. Moreover, we demonstrate that Skew-Fit enables a real-world robot to learn to open a door, entirely from scratch, from pixels, and without any manually-designed reward function.
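The core skewed-sampling rule behind Skew-Fit can be sketched directly: weight previously visited states by their estimated density raised to a negative power, so rarely visited states are proposed as goals more often. The toy densities and the discrete state set below are assumptions; the paper estimates densities with a learned generative model over images.

# A hedged sketch of Skew-Fit's goal-sampling rule on toy densities.
import numpy as np

rng = np.random.default_rng(0)
states = np.arange(10)                    # toy state identifiers
p = np.array([0.3, 0.2, 0.15, 0.1, 0.08, 0.06, 0.05, 0.03, 0.02, 0.01])

alpha = -1.0                              # alpha in [-1, 0) skews to rare states
weights = p ** alpha
weights /= weights.sum()

goals = rng.choice(states, size=1000, p=weights)
print("rare state 9 sampled", (goals == 9).sum(), "times")  # far above 1000*p[9]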
|
1112.0805
|
Shiqiang Wang Mr.
|
Shiqiang Wang, Qingyang Song, Lei Guo, Abbas Jamalipour
|
Constellation Mapping for Physical-Layer Network Coding with M-QAM
Modulation
|
Final version at IEEE GLOBECOM 2012
|
IEEE Global Communications Conference (GLOBECOM) 2012, pp.
4429-4434
|
10.1109/GLOCOM.2012.6503815
| null |
cs.IT cs.NI cs.SY math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The denoise-and-forward (DNF) method of physical-layer network coding (PNC)
is a promising approach for wireless relaying networks. In this paper, we
consider DNF-based PNC with M-ary quadrature amplitude modulation (M-QAM) and
propose a mapping scheme that maps the superposed M-QAM signal to coded
symbols. The mapping scheme supports both square and non-square M-QAM
modulations, with various original constellation mappings (e.g. binary-coded or
Gray-coded). Subsequently, we evaluate the symbol error rate and bit error rate
(BER) of M-QAM modulated PNC that uses the proposed mapping scheme. Afterwards,
as an application, a rate adaptation scheme for the DNF method of PNC is
proposed. Simulation results show that the rate-adaptive PNC is advantageous in
various scenarios.
|
[
{
"created": "Sun, 4 Dec 2011 22:53:46 GMT",
"version": "v1"
},
{
"created": "Sat, 11 May 2013 19:51:23 GMT",
"version": "v2"
}
] |
2013-05-14
|
[
[
"Wang",
"Shiqiang",
""
],
[
"Song",
"Qingyang",
""
],
[
"Guo",
"Lei",
""
],
[
"Jamalipour",
"Abbas",
""
]
] |
The denoise-and-forward (DNF) method of physical-layer network coding (PNC) is a promising approach for wireless relaying networks. In this paper, we consider DNF-based PNC with M-ary quadrature amplitude modulation (M-QAM) and propose a mapping scheme that maps the superposed M-QAM signal to coded symbols. The mapping scheme supports both square and non-square M-QAM modulations, with various original constellation mappings (e.g. binary-coded or Gray-coded). Subsequently, we evaluate the symbol error rate and bit error rate (BER) of M-QAM modulated PNC that uses the proposed mapping scheme. Afterwards, as an application, a rate adaptation scheme for the DNF method of PNC is proposed. Simulation results show that the rate-adaptive PNC is advantageous in various scenarios.
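For intuition, a minimal sketch of the denoise-and-forward mapping in the simplest 4-QAM/QPSK case (one quadrature dimension shown; the paper's contribution is the generalization to square and non-square M-QAM, which is not reproduced here):

```python
import numpy as np

def pnc_denoise_map(y, threshold=1.0):
    """Map the superposed signal of two BPSK streams (ideal symbol sum in
    {-2, 0, +2}) to the XOR of the source bits: |y| near 2 means the bits
    agree (XOR = 0), |y| near 0 means they differ (XOR = 1)."""
    return (np.abs(y) < threshold).astype(int)

# Toy usage: noiseless superposition of two bit streams at the relay.
b1, b2 = np.array([0, 1, 1]), np.array([1, 1, 0])
y = (2 * b1 - 1) + (2 * b2 - 1)
assert np.array_equal(pnc_denoise_map(y), b1 ^ b2)
```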
|
1409.1533
|
Mizuki Oka
|
Mizuki Oka and Hirotake Abe and Takashi Ikegami
|
Dynamic Homeostasis in Packet Switching Networks
|
18 pages
| null | null | null |
cs.NI physics.soc-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this study, we investigate the adaptation and robustness of a packet
switching network (PSN), the fundamental architecture of the Internet. We claim
that the adaptation introduced by a transmission control protocol (TCP)
congestion control mechanism is interpretable as the self-organization of
multiple attractors and stability to switch from one attractor to another. To
discuss this argument quantitatively, we study the adaptation of the Internet
by simulating a PSN using ns-2. Our hypothesis is that the robustness and
fragility of the Internet can be attributed to the inherent dynamics of the PSN
feedback mechanism called the congestion window size, or \textit{cwnd}. By
varying the data input into the PSN system, we investigate the possible
self-organization of attractors in cwnd temporal dynamics and discuss the
adaptability and robustness of PSNs. The present study provides an example of
Ashby's Law of Requisite Variety in action.
|
[
{
"created": "Wed, 3 Sep 2014 00:36:32 GMT",
"version": "v1"
}
] |
2014-09-05
|
[
[
"Oka",
"Mizuki",
""
],
[
"Abe",
"Hirotake",
""
],
[
"Ikegami",
"Takashi",
""
]
] |
In this study, we investigate the adaptation and robustness of a packet switching network (PSN), the fundamental architecture of the Internet. We claim that the adaptation introduced by a transmission control protocol (TCP) congestion control mechanism is interpretable as the self-organization of multiple attractors and stability to switch from one attractor to another. To discuss this argument quantitatively, we study the adaptation of the Internet by simulating a PSN using ns-2. Our hypothesis is that the robustness and fragility of the Internet can be attributed to the inherent dynamics of the PSN feedback mechanism called the congestion window size, or \textit{cwnd}. By varying the data input into the PSN system, we investigate the possible self-organization of attractors in cwnd temporal dynamics and discuss the adaptability and robustness of PSNs. The present study provides an example of Ashby's Law of Requisite Variety in action.
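The cwnd feedback mechanism discussed above follows TCP's additive-increase/multiplicative-decrease (AIMD) rule; a toy trace generator (simplified, ignoring slow start and timeouts) looks like:

```python
def aimd_cwnd(loss_events, cwnd=1.0, incr=1.0, decr=0.5):
    """Trace TCP-style congestion-window dynamics: additive increase of
    `incr` per RTT, multiplicative decrease by `decr` on each loss."""
    trace = []
    for lost in loss_events:
        cwnd = cwnd * decr if lost else cwnd + incr
        trace.append(cwnd)
    return trace

# Toy usage: the familiar sawtooth whose attractor structure the paper studies.
print(aimd_cwnd([0, 0, 0, 1, 0, 0, 1, 0]))
```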
|
2111.09248
|
Joaquin Delgado Fernandez
|
Joaquin Delgado Fernandez, Sergio Potenciano Menci, Charles Lee,
Gilbert Fridgen
|
Privacy-preserving Federated Learning for Residential Short Term Load
Forecasting
| null | null |
10.1016/j.apenergy.2022.119915
|
Applied Energy Volume 326, 15 November 2022, 119915
|
cs.LG cs.AI cs.SY eess.SY
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
With high levels of intermittent power generation and dynamic demand
patterns, accurate forecasts for residential loads have become essential. Smart
meters can play an important role when making these forecasts as they provide
detailed load data. However, using smart meter data for load forecasting is
challenging due to data privacy requirements. This paper investigates how these
requirements can be addressed through a combination of federated learning and
privacy-preserving techniques such as differential privacy and secure
aggregation. For our analysis, we employ a large set of residential load data
and simulate how different federated learning models and privacy-preserving
techniques affect performance and privacy. Our simulations reveal that
combining federated learning and privacy-preserving techniques can secure both
high forecasting accuracy and near-complete privacy. Specifically, we find that
such combinations enable a high level of information sharing while ensuring
privacy of both the processed load data and forecasting models. Moreover, we
identify and discuss challenges of applying federated learning, differential
privacy and secure aggregation for residential short-term load forecasting.
|
[
{
"created": "Wed, 17 Nov 2021 17:27:59 GMT",
"version": "v1"
},
{
"created": "Fri, 10 Dec 2021 10:18:10 GMT",
"version": "v2"
},
{
"created": "Fri, 16 Sep 2022 10:10:37 GMT",
"version": "v3"
},
{
"created": "Mon, 19 Sep 2022 09:00:42 GMT",
"version": "v4"
}
] |
2022-09-20
|
[
[
"Fernandez",
"Joaquin Delgado",
""
],
[
"Menci",
"Sergio Potenciano",
""
],
[
"Lee",
"Charles",
""
],
[
"Fridgen",
"Gilbert",
""
]
] |
With high levels of intermittent power generation and dynamic demand patterns, accurate forecasts for residential loads have become essential. Smart meters can play an important role when making these forecasts as they provide detailed load data. However, using smart meter data for load forecasting is challenging due to data privacy requirements. This paper investigates how these requirements can be addressed through a combination of federated learning and privacy-preserving techniques such as differential privacy and secure aggregation. For our analysis, we employ a large set of residential load data and simulate how different federated learning models and privacy-preserving techniques affect performance and privacy. Our simulations reveal that combining federated learning and privacy-preserving techniques can secure both high forecasting accuracy and near-complete privacy. Specifically, we find that such combinations enable a high level of information sharing while ensuring privacy of both the processed load data and forecasting models. Moreover, we identify and discuss challenges of applying federated learning, differential privacy and secure aggregation for residential short-term load forecasting.
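A minimal sketch of one such combination, a federated-averaging round with update clipping and Gaussian noise (the constants are illustrative assumptions, and the secure-aggregation component studied in the paper is not shown):

```python
import numpy as np

def dp_fedavg_round(client_updates, clip=1.0, noise_std=0.1, rng=np.random):
    """One federated-averaging round with simple differential-privacy
    machinery: clip each client's update to L2 norm `clip`, average, then
    add Gaussian noise calibrated to the clipping bound."""
    clipped = [u * min(1.0, clip / (np.linalg.norm(u) + 1e-12))
               for u in client_updates]
    avg = np.mean(clipped, axis=0)
    noise = rng.normal(0.0, noise_std * clip / len(client_updates),
                       size=avg.shape)
    return avg + noise

# Toy usage with three clients' gradient-like updates.
updates = [np.array([0.5, -1.2]), np.array([2.0, 0.3]), np.array([-0.1, 0.8])]
global_update = dp_fedavg_round(updates)
```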
|
1812.07989
|
Xiaodan Zhang
|
Xiaodan Zhang, Xinbo Gao, Wen Lu, and Lihuo He
|
A Gated Peripheral-Foveal Convolutional Neural Network for Unified Image
Aesthetic Prediction
|
Add more experiments
| null | null | null |
cs.CV eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Learning fine-grained details is a key issue in image aesthetic assessment.
Most of the previous methods extract the fine-grained details via a random
cropping strategy, which may undermine the integrity of semantic information.
Extensive studies show that humans perceive fine-grained details with a mixture
of foveal vision and peripheral vision. The fovea has the highest possible
visual acuity and is responsible for seeing the details. The peripheral vision
is used for perceiving the broad spatial scene and selecting the attended
regions for the fovea. Inspired by these observations, we propose a Gated
Peripheral-Foveal Convolutional Neural Network (GPF-CNN). It is a dedicated
double-subnet neural network, i.e., a peripheral subnet and a foveal subnet.
The former aims to mimic
the functions of peripheral vision to encode the holistic information and
provide the attended regions. The latter aims to extract fine-grained features
on these key regions. Considering that the peripheral vision and foveal vision
play different roles in processing different visual stimuli, we further employ
a gated information fusion (GIF) network to weight their contributions. The
weights are determined through the fully connected layers followed by a sigmoid
function. We conduct comprehensive experiments on the standard AVA and
Photo.net datasets for unified aesthetic prediction tasks: (i) aesthetic
quality classification; (ii) aesthetic score regression; and (iii) aesthetic
score distribution prediction. The experimental results demonstrate the
effectiveness of the proposed method.
|
[
{
"created": "Wed, 19 Dec 2018 14:57:06 GMT",
"version": "v1"
},
{
"created": "Wed, 26 Jun 2019 08:04:30 GMT",
"version": "v2"
}
] |
2019-06-27
|
[
[
"Zhang",
"Xiaodan",
""
],
[
"Gao",
"Xinbo",
""
],
[
"Lu",
"Wen",
""
],
[
"He",
"Lihuo",
""
]
] |
Learning fine-grained details is a key issue in image aesthetic assessment. Most of the previous methods extract the fine-grained details via a random cropping strategy, which may undermine the integrity of semantic information. Extensive studies show that humans perceive fine-grained details with a mixture of foveal vision and peripheral vision. The fovea has the highest possible visual acuity and is responsible for seeing the details. The peripheral vision is used for perceiving the broad spatial scene and selecting the attended regions for the fovea. Inspired by these observations, we propose a Gated Peripheral-Foveal Convolutional Neural Network (GPF-CNN). It is a dedicated double-subnet neural network, i.e., a peripheral subnet and a foveal subnet. The former aims to mimic the functions of peripheral vision to encode the holistic information and provide the attended regions. The latter aims to extract fine-grained features on these key regions. Considering that the peripheral vision and foveal vision play different roles in processing different visual stimuli, we further employ a gated information fusion (GIF) network to weight their contributions. The weights are determined through the fully connected layers followed by a sigmoid function. We conduct comprehensive experiments on the standard AVA and Photo.net datasets for unified aesthetic prediction tasks: (i) aesthetic quality classification; (ii) aesthetic score regression; and (iii) aesthetic score distribution prediction. The experimental results demonstrate the effectiveness of the proposed method.
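The gated information fusion step can be sketched as follows (only the FC-plus-sigmoid weighting comes from the abstract; the exact gate parameterization below is an assumption):

```python
import numpy as np

def gated_fusion(f_peripheral, f_foveal, W, b):
    """Gated information fusion: a fully connected layer followed by a
    sigmoid yields per-feature weights g, and the fused feature is
    g * peripheral + (1 - g) * foveal."""
    z = np.concatenate([f_peripheral, f_foveal]) @ W + b
    g = 1.0 / (1.0 + np.exp(-z))                 # sigmoid gate
    return g * f_peripheral + (1.0 - g) * f_foveal

# Toy usage with 4-d features and a hypothetical gate parameterization.
rng = np.random.default_rng(0)
fp, ff = rng.normal(size=4), rng.normal(size=4)
W, b = rng.normal(size=(8, 4)), np.zeros(4)
fused = gated_fusion(fp, ff, W, b)
```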
|
2403.12273
|
Linus Nwankwo
|
Linus Nwankwo and Elmar Rueckert
|
Multimodal Human-Autonomous Agents Interaction Using Pre-Trained
Language and Visual Foundation Models
| null | null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we extended the method proposed in [17] to enable humans to
interact naturally with autonomous agents through vocal and textual
conversations. Our extended method exploits the inherent capabilities of
pre-trained large language models (LLMs), multimodal visual language models
(VLMs), and speech recognition (SR) models to decode the high-level natural
language conversations and semantic understanding of the robot's task
environment, and abstract them to the robot's actionable commands or queries.
We performed a quantitative evaluation of our framework's natural vocal
conversation understanding with participants from different racial backgrounds
and English language accents. The participants interacted with the robot using
both spoken and textual instructional commands. Based on the logged interaction
data, our framework achieved 87.55% vocal command decoding accuracy, 86.27%
command execution success, and an average latency of 0.89 seconds from
receiving the participants' vocal chat commands to initiating the robot's
actual physical action. The video demonstrations of this paper can be found at
https://linusnep.github.io/MTCC-IRoNL/.
|
[
{
"created": "Mon, 18 Mar 2024 21:41:09 GMT",
"version": "v1"
}
] |
2024-03-20
|
[
[
"Nwankwo",
"Linus",
""
],
[
"Rueckert",
"Elmar",
""
]
] |
In this paper, we extended the method proposed in [17] to enable humans to interact naturally with autonomous agents through vocal and textual conversations. Our extended method exploits the inherent capabilities of pre-trained large language models (LLMs), multimodal visual language models (VLMs), and speech recognition (SR) models to decode the high-level natural language conversations and semantic understanding of the robot's task environment, and abstract them to the robot's actionable commands or queries. We performed a quantitative evaluation of our framework's natural vocal conversation understanding with participants from different racial backgrounds and English language accents. The participants interacted with the robot using both spoken and textual instructional commands. Based on the logged interaction data, our framework achieved 87.55% vocal command decoding accuracy, 86.27% command execution success, and an average latency of 0.89 seconds from receiving the participants' vocal chat commands to initiating the robot's actual physical action. The video demonstrations of this paper can be found at https://linusnep.github.io/MTCC-IRoNL/.
|
1406.4077
|
Ma\"el Le Treust
|
Ma\"el Le Treust
|
Joint Empirical Coordination of Source and Channel
|
accepted to IEEE Trans. on IT
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In a decentralized and self-configuring network, the communication devices
are considered as autonomous decision-makers that sense their environment and
that implement optimal transmission schemes. It is essential that these
autonomous devices cooperate and coordinate their actions, to ensure the
reliability of the transmissions and the stability of the network. We study a
point-to-point scenario in which the encoder and the decoder implement
decentralized policies that are coordinated. The coordination is measured in
terms of empirical frequency of symbols of source and channel. The encoder and
the decoder perform a coding scheme such that the empirical distribution of the
symbols is close to a target joint probability distribution. We characterize
the set of achievable target probability distributions for a point-to-point
source-channel model, in which the encoder is non-causal and the decoder is
strictly causal, i.e., it returns an action based on the observation of the past
channel outputs. The objectives of the encoder and of the decoder are captured
by some utility function, evaluated with respect to the set of achievable
target probability distributions. In this article, we investigate the
maximization problem of a utility function that is common to both encoder and
decoder. We show that the compression and the transmission of information are
particular cases of the empirical coordination.
|
[
{
"created": "Mon, 16 Jun 2014 17:29:56 GMT",
"version": "v1"
},
{
"created": "Wed, 18 Jun 2014 07:37:01 GMT",
"version": "v2"
},
{
"created": "Fri, 2 Jun 2017 09:15:32 GMT",
"version": "v3"
}
] |
2017-06-05
|
[
[
"Treust",
"Maël Le",
""
]
] |
In a decentralized and self-configuring network, the communication devices are considered as autonomous decision-makers that sense their environment and that implement optimal transmission schemes. It is essential that these autonomous devices cooperate and coordinate their actions, to ensure the reliability of the transmissions and the stability of the network. We study a point-to-point scenario in which the encoder and the decoder implement decentralized policies that are coordinated. The coordination is measured in terms of empirical frequency of symbols of source and channel. The encoder and the decoder perform a coding scheme such that the empirical distribution of the symbols is close to a target joint probability distribution. We characterize the set of achievable target probability distributions for a point-to-point source-channel model, in which the encoder is non-causal and the decoder is strictly causal, i.e., it returns an action based on the observation of the past channel outputs. The objectives of the encoder and of the decoder are captured by some utility function, evaluated with respect to the set of achievable target probability distributions. In this article, we investigate the maximization problem of a utility function that is common to both encoder and decoder. We show that the compression and the transmission of information are particular cases of the empirical coordination.
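The notion of empirical coordination can be made concrete with a small check of how far observed symbol frequencies are from a target joint distribution (total variation distance is used here as one natural closeness measure; the paper's formal criterion may differ):

```python
from collections import Counter

def empirical_coordination_gap(symbol_tuples, target):
    """Build the empirical joint distribution of observed symbol tuples
    and return its total variation distance to the target distribution."""
    n = len(symbol_tuples)
    empirical = {k: v / n for k, v in Counter(symbol_tuples).items()}
    keys = set(empirical) | set(target)
    return 0.5 * sum(abs(empirical.get(k, 0.0) - target.get(k, 0.0))
                     for k in keys)

# Toy usage: (source symbol, decoder action) pairs vs. a target distribution.
pairs = [(0, 0), (0, 0), (1, 1), (1, 0)]
target = {(0, 0): 0.5, (1, 1): 0.5}
print(empirical_coordination_gap(pairs, target))  # 0.25
```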
|
1612.02900
|
Tarik Kazaz
|
Tarik Kazaz, Xianjun Jiao, Merima Kulin, Ingrid Moerman
|
Demo: WiSCoP - Wireless Sensor Communication Prototyping Platform
|
2 pages, 2 figures, to be published in the EWSN'17 Proceedings of the
2017 International Conference on Embedded Wireless Systems and Networks,
Uppsala, Sweden - February 20-22, 2017
| null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
To enhance the system performance of future heterogeneous wireless networks,
the co-design of PHY, MAC, and higher-layer protocols is inevitable. In this work,
we present WiSCoP - a novel embedded platform for experimentation, prototyping
and implementation of integrated cross-layer network design approaches. WiSCoP
is built on top of a Zynq hardware platform integrated with FMCOMMS1/2/4 RF
front ends. We demonstrate the flexibility of WiSCoP by using it to prototype a
fully standard compliant IEEE 802.15.4 stack with real-time performance and
cross-layer integration.
|
[
{
"created": "Fri, 9 Dec 2016 03:44:14 GMT",
"version": "v1"
},
{
"created": "Thu, 5 Jan 2017 15:27:58 GMT",
"version": "v2"
}
] |
2017-01-06
|
[
[
"Kazaz",
"Tarik",
""
],
[
"Jiao",
"Xianjun",
""
],
[
"Kulin",
"Merima",
""
],
[
"Moerman",
"Ingrid",
""
]
] |
To enhance the system performance of future heterogeneous wireless networks, the co-design of PHY, MAC, and higher-layer protocols is inevitable. In this work, we present WiSCoP - a novel embedded platform for experimentation, prototyping and implementation of integrated cross-layer network design approaches. WiSCoP is built on top of a Zynq hardware platform integrated with FMCOMMS1/2/4 RF front ends. We demonstrate the flexibility of WiSCoP by using it to prototype a fully standard compliant IEEE 802.15.4 stack with real-time performance and cross-layer integration.
|
2403.01059
|
Ryan Gardner
|
Noah Ford, Ryan W. Gardner, Austin Juhl, and Nathan Larson
|
Continuous Mean-Zero Disagreement-Regularized Imitation Learning
(CMZ-DRIL)
| null | null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Machine-learning paradigms such as imitation learning and reinforcement
learning can generate highly performant agents in a variety of complex
environments. However, commonly used methods require large quantities of data
and/or a known reward function. This paper presents a method called Continuous
Mean-Zero Disagreement-Regularized Imitation Learning (CMZ-DRIL) that employs a
novel reward structure to improve the performance of imitation-learning agents
that have access to only a handful of expert demonstrations. CMZ-DRIL uses
reinforcement learning to minimize uncertainty among an ensemble of agents
trained to model the expert demonstrations. This method does not use any
environment-specific rewards, but creates a continuous and mean-zero reward
function from the action disagreement of the agent ensemble. As demonstrated in
a waypoint-navigation environment and in two MuJoCo environments, CMZ-DRIL can
generate performant agents that behave more similarly to the expert than
primary previous approaches in several key metrics.
|
[
{
"created": "Sat, 2 Mar 2024 01:40:37 GMT",
"version": "v1"
}
] |
2024-03-05
|
[
[
"Ford",
"Noah",
""
],
[
"Gardner",
"Ryan W.",
""
],
[
"Juhl",
"Austin",
""
],
[
"Larson",
"Nathan",
""
]
] |
Machine-learning paradigms such as imitation learning and reinforcement learning can generate highly performant agents in a variety of complex environments. However, commonly used methods require large quantities of data and/or a known reward function. This paper presents a method called Continuous Mean-Zero Disagreement-Regularized Imitation Learning (CMZ-DRIL) that employs a novel reward structure to improve the performance of imitation-learning agents that have access to only a handful of expert demonstrations. CMZ-DRIL uses reinforcement learning to minimize uncertainty among an ensemble of agents trained to model the expert demonstrations. This method does not use any environment-specific rewards, but creates a continuous and mean-zero reward function from the action disagreement of the agent ensemble. As demonstrated in a waypoint-navigation environment and in two MuJoCo environments, CMZ-DRIL can generate performant agents that behave more similarly to the expert than primary previous approaches in several key metrics.
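One plausible reading of the reward construction, sketched under stated assumptions (the abstract specifies a continuous, mean-zero reward built from ensemble action disagreement; the exact normalization below is an assumption):

```python
import numpy as np

def disagreement_reward(ensemble_actions, mean_disagreement):
    """Penalize the variance of the ensemble's proposed actions for the
    current state, shifted by a running mean of past disagreements so the
    resulting reward is approximately mean-zero."""
    disagreement = np.var(ensemble_actions, axis=0).mean()
    return -(disagreement - mean_disagreement)

# Toy usage: three ensemble members proposing 2-d continuous actions.
actions = np.array([[0.1, 0.2], [0.0, 0.3], [0.2, 0.1]])
reward = disagreement_reward(actions, mean_disagreement=0.01)
```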
|
cs/0506049
|
Bernard Jacquemin
|
Caroline Brun (XRCE), Bernard Jacquemin (ISC, XRCE), Fr\'ed\'erique
Segond (XRCE)
|
Exploitation de dictionnaires \'{e}lectroniques pour la
d\'{e}sambigu\"{i}sation s\'{e}mantique lexicale
|
25 pp
|
Traitement Automatique des Langues (TAL) 42, no. 3 (2001) pp.
667-690
| null | null |
cs.DL
| null |
This paper presents a lexical disambiguation system, initially developed for
English and now adapted to French. This system associates a word with its
meaning in a given context using electronic dictionaries as semantically
annotated corpora in order to extract semantic disambiguation rules. We
describe the rule extraction and application process as well as the evaluation
of the system. The results for French give us insight into some possible
improvements to the nature and content of lexical resources adapted for
disambiguation in this framework.
|
[
{
"created": "Sun, 12 Jun 2005 16:48:33 GMT",
"version": "v1"
}
] |
2016-08-16
|
[
[
"Brun",
"Caroline",
"",
"XRCE"
],
[
"Jacquemin",
"Bernard",
"",
"ISC, XRCE"
],
[
"Segond",
"Frédérique",
"",
"XRCE"
]
] |
This paper presents a lexical disambiguation system, initially developed for English and now adapted to French. This system associates a word with its meaning in a given context using electronic dictionaries as semantically annotated corpora in order to extract semantic disambiguation rules. We describe the rule extraction and application process as well as the evaluation of the system. The results for French give us insight into some possible improvements to the nature and content of lexical resources adapted for disambiguation in this framework.
|
1303.1285
|
Animesh Kumar
|
Animesh Kumar
|
Bandlimited Signal Reconstruction From the Distribution of Unknown
Sampling Locations
|
Submitted to SampTA 2013 workshop
| null |
10.1109/TSP.2015.2394248
| null |
cs.IT math.IT math.ST stat.TH
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We study the reconstruction of bandlimited fields from samples taken at
unknown but statistically distributed sampling locations. The setup is
motivated by distributed sampling where precise knowledge of sensor locations
can be difficult.
Periodic one-dimensional bandlimited fields are considered for sampling.
Perfect samples of the field at independent and identically distributed
locations are obtained. The statistical realization of sampling locations is
not known. First, it is shown that a bandlimited field cannot be uniquely
determined with samples taken at statistically distributed but unknown
locations, even if the number of samples is infinite. Next, it is assumed that
the order of sample locations is known. In this case, using insights from
order-statistics, an estimate for the field with useful asymptotic properties
is designed. Distortion (mean-squared error) and central-limit are established
for this estimate.
|
[
{
"created": "Wed, 6 Mar 2013 09:39:09 GMT",
"version": "v1"
}
] |
2017-07-12
|
[
[
"Kumar",
"Animesh",
""
]
] |
We study the reconstruction of bandlimited fields from samples taken at unknown but statistically distributed sampling locations. The setup is motivated by distributed sampling where precise knowledge of sensor locations can be difficult. Periodic one-dimensional bandlimited fields are considered for sampling. Perfect samples of the field at independent and identically distributed locations are obtained. The statistical realization of sampling locations is not known. First, it is shown that a bandlimited field cannot be uniquely determined with samples taken at statistically distributed but unknown locations, even if the number of samples is infinite. Next, it is assumed that the order of sample locations is known. In this case, using insights from order-statistics, an estimate for the field with useful asymptotic properties is designed. Distortion (mean-squared error) and central-limit results are established for this estimate.
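A sketch of the order-statistics idea, assuming a 1-periodic bandlimited field sampled at sorted Uniform(0,1) locations (the paper's exact estimator and its analysis may differ):

```python
import numpy as np

def estimate_field(samples, bandwidth):
    """Assign the n sorted samples the expected locations k/(n+1) of
    Uniform(0,1) order statistics, then fit the Fourier coefficients of
    the bandlimited field by least squares."""
    n = len(samples)
    t = np.arange(1, n + 1) / (n + 1)            # expected order-statistic locations
    freqs = np.arange(-bandwidth, bandwidth + 1)
    A = np.exp(2j * np.pi * np.outer(t, freqs))  # Fourier design matrix
    coeffs, *_ = np.linalg.lstsq(A, samples.astype(complex), rcond=None)
    return freqs, coeffs

# Toy usage: field g(t) = 1 + cos(2*pi*t) sampled at sorted random locations.
rng = np.random.default_rng(1)
locs = np.sort(rng.uniform(size=200))
freqs, coeffs = estimate_field(1 + np.cos(2 * np.pi * locs), bandwidth=1)
```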
|
1909.05042
|
Seyedrebvar Hosseini
|
Seyedrebvar Hosseini, Burak Turhan
|
Iterative versus Exhaustive Data Selection for Cross Project Defect
Prediction: An Extended Replication Study
|
Conducting a major revision based on the feedback from the Empirical
Software Engineering Journal
| null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Context: The effectiveness of data selection approaches in improving the
performance of cross project defect prediction (CPDP) has been shown in multiple
previous studies. Besides that, replication studies play an important role in
the support of any valid study. Repeating a study using the same or different
subjects can lead to a better understanding of the nature of the problem.
Objective: We use an iterative dataset selection (IDS) approach to generate
training datasets and evaluate them on a set of randomly created validation
datasets in the context of CPDP while considering a higher range of flexibility
which makes the approach more feasible in practice.
Method: We replicate an earlier study and present some insights into the
achieved results while pointing out some of the shortcomings of the original
study. Using the lessons learned, we propose to use alternative
training/validation dataset generation approaches which are not only more
feasible in practice, but also achieve slightly better performance. We
compare the results of our experiments to those from scenarios A, B, C and D
from the original study.
Results: Our experiments reveal that IDS is heavily recall-based. The average
recall performance for all test sets is 0.933 which is significantly higher
than that from the replicated method. This in turn comes with a loss in
precision. IDS has the lowest precision among the compared scenarios that use
Decision Table learner. IDS, however, achieves comparable or better F-measure
performances. IDS achieves higher mean, median and min F-measure values while
being more stable generally, in comparison with the replicated method.
Conclusions: We conclude that datasets obtained from iterative/search-based
approaches are a promising way to tackle CPDP. In particular, the gains in
terms of both time and performance encourage further investigation of our
approach.
|
[
{
"created": "Wed, 11 Sep 2019 13:32:09 GMT",
"version": "v1"
},
{
"created": "Tue, 21 Apr 2020 09:49:36 GMT",
"version": "v2"
}
] |
2020-04-22
|
[
[
"Hosseini",
"Seyedrebvar",
""
],
[
"Turhan",
"Burak",
""
]
] |
Context: The effectiveness of data selection approaches in improving the performance of cross project defect prediction (CPDP) has been shown in multiple previous studies. Besides that, replication studies play an important role in the support of any valid study. Repeating a study using the same or different subjects can lead to a better understanding of the nature of the problem. Objective: We use an iterative dataset selection (IDS) approach to generate training datasets and evaluate them on a set of randomly created validation datasets in the context of CPDP while considering a higher range of flexibility which makes the approach more feasible in practice. Method: We replicate an earlier study and present some insights into the achieved results while pointing out some of the shortcomings of the original study. Using the lessons learned, we propose to use alternative training/validation dataset generation approaches which are not only more feasible in practice, but also achieve slightly better performance. We compare the results of our experiments to those from scenarios A, B, C and D from the original study. Results: Our experiments reveal that IDS is heavily recall-based. The average recall performance for all test sets is 0.933 which is significantly higher than that from the replicated method. This in turn comes with a loss in precision. IDS has the lowest precision among the compared scenarios that use Decision Table learner. IDS, however, achieves comparable or better F-measure performances. IDS achieves higher mean, median and min F-measure values while being more stable generally, in comparison with the replicated method. Conclusions: We conclude that datasets obtained from iterative/search-based approaches are a promising way to tackle CPDP. In particular, the gains in terms of both time and performance encourage further investigation of our approach.
|
1309.7145
|
Pierre Flener
|
Nicolas Beldiceanu, Pierre Flener, Justin Pearson, Pascal Van
Hentenryck
|
Propagating Regular Counting Constraints
|
Includes a SICStus Prolog source file with the propagator
| null | null | null |
cs.AI cs.FL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Constraints over finite sequences of variables are ubiquitous in sequencing
and timetabling. Moreover, the wide variety of such constraints in practical
applications led to general modelling techniques and generic propagation
algorithms, often based on deterministic finite automata (DFA) and their
extensions. We consider counter-DFAs (cDFA), which provide concise models for
regular counting constraints, that is, constraints over the number of times a
regular-language pattern occurs in a sequence. We show how to enforce domain
consistency in polynomial time for atmost and atleast regular counting
constraints based on the frequent case of a cDFA with only accepting states and
a single counter that can be incremented by transitions. We also prove that the
satisfaction of exact regular counting constraints is NP-hard and indicate that
an incomplete algorithm for exact regular counting constraints is faster and
provides more pruning than the existing propagator from [3]. Regular counting
constraints are closely related to the CostRegular constraint but contribute
both a natural abstraction and some computational advantages.
|
[
{
"created": "Fri, 27 Sep 2013 08:23:52 GMT",
"version": "v1"
}
] |
2013-09-30
|
[
[
"Beldiceanu",
"Nicolas",
""
],
[
"Flener",
"Pierre",
""
],
[
"Pearson",
"Justin",
""
],
[
"Van Hentenryck",
"Pascal",
""
]
] |
Constraints over finite sequences of variables are ubiquitous in sequencing and timetabling. Moreover, the wide variety of such constraints in practical applications led to general modelling techniques and generic propagation algorithms, often based on deterministic finite automata (DFA) and their extensions. We consider counter-DFAs (cDFA), which provide concise models for regular counting constraints, that is, constraints over the number of times a regular-language pattern occurs in a sequence. We show how to enforce domain consistency in polynomial time for atmost and atleast regular counting constraints based on the frequent case of a cDFA with only accepting states and a single counter that can be incremented by transitions. We also prove that the satisfaction of exact regular counting constraints is NP-hard and indicate that an incomplete algorithm for exact regular counting constraints is faster and provides more pruning than the existing propagator from [3]. Regular counting constraints are closely related to the CostRegular constraint but contribute both a natural abstraction and some computational advantages.
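A counter-DFA of the kind described, with all states accepting and a single counter incremented on designated transitions, can be sketched as:

```python
def count_pattern(word, delta, incr, start=0):
    """Run a counter-DFA: follow transitions `delta` and bump the single
    counter whenever a transition in `incr` fires, returning how often
    the regular pattern occurs along the word."""
    state, counter = start, 0
    for ch in word:
        counter += (state, ch) in incr
        state = delta[(state, ch)]
    return counter

# Toy cDFA counting occurrences of "ab": state 1 means "just read an a".
delta = {(0, 'a'): 1, (0, 'b'): 0, (1, 'a'): 1, (1, 'b'): 0}
incr = {(1, 'b')}  # the transition completing "ab" increments the counter
print(count_pattern("aababb", delta, incr))  # 2
```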
|
2310.02046
|
Michel Nass
|
Michel Nass, Emil Alegroth, Robert Feldt
|
Improving web element localization by using a large language model
| null | null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Web-based test automation heavily relies on accurately finding web elements.
Traditional methods compare attributes but don't grasp the context and meaning
of elements and words. The emergence of Large Language Models (LLMs) like
GPT-4, which can show human-like reasoning abilities on some tasks, offers new
opportunities for software engineering and web element localization. This paper
introduces and evaluates VON Similo LLM, an enhanced web element localization
approach. Using an LLM, it selects the most likely web element from the
top-ranked ones identified by the existing VON Similo method, ideally aiming to
get closer to human-like selection accuracy. An experimental study was
conducted using 804 web element pairs from 48 real-world web applications. We
measured the number of correctly identified elements as well as the execution
times, comparing the effectiveness and efficiency of VON Similo LLM against the
baseline algorithm. In addition, motivations from the LLM were recorded and
analyzed for all instances where the original approach failed to find the right
web element. VON Similo LLM demonstrated improved performance, reducing failed
localizations from 70 to 39 (out of 804), a 44 percent reduction. Despite its
slower execution time and additional costs of using the GPT-4 model, the LLM's
human-like reasoning showed promise in enhancing web element localization. LLM
technology can enhance web element identification in GUI test automation,
reducing false positives and potentially lowering maintenance costs. However,
further research is necessary to fully understand LLMs' capabilities,
limitations, and practical use in GUI testing.
|
[
{
"created": "Tue, 3 Oct 2023 13:39:22 GMT",
"version": "v1"
}
] |
2023-10-04
|
[
[
"Nass",
"Michel",
""
],
[
"Alegroth",
"Emil",
""
],
[
"Feldt",
"Robert",
""
]
] |
Web-based test automation heavily relies on accurately finding web elements. Traditional methods compare attributes but don't grasp the context and meaning of elements and words. The emergence of Large Language Models (LLMs) like GPT-4, which can show human-like reasoning abilities on some tasks, offers new opportunities for software engineering and web element localization. This paper introduces and evaluates VON Similo LLM, an enhanced web element localization approach. Using an LLM, it selects the most likely web element from the top-ranked ones identified by the existing VON Similo method, ideally aiming to get closer to human-like selection accuracy. An experimental study was conducted using 804 web element pairs from 48 real-world web applications. We measured the number of correctly identified elements as well as the execution times, comparing the effectiveness and efficiency of VON Similo LLM against the baseline algorithm. In addition, motivations from the LLM were recorded and analyzed for all instances where the original approach failed to find the right web element. VON Similo LLM demonstrated improved performance, reducing failed localizations from 70 to 39 (out of 804), a 44 percent reduction. Despite its slower execution time and additional costs of using the GPT-4 model, the LLM's human-like reasoning showed promise in enhancing web element localization. LLM technology can enhance web element identification in GUI test automation, reducing false positives and potentially lowering maintenance costs. However, further research is necessary to fully understand LLMs' capabilities, limitations, and practical use in GUI testing.
|
1901.05344
|
Georg Hager
|
Francesco Cremonesi, Georg Hager, Gerhard Wellein, Felix Sch\"urmann
|
Analytic Performance Modeling and Analysis of Detailed Neuron
Simulations
|
18 pages, 6 figures, 15 tables
| null |
10.1177/1094342020912528
| null |
cs.PF cs.CE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Big science initiatives are trying to reconstruct and model the brain by
attempting to simulate brain tissue at larger scales and with increasingly more
biological detail than previously thought possible. The exponential growth of
parallel computer performance has been supporting these developments, and at
the same time maintainers of neuroscientific simulation code have strived to
optimally and efficiently exploit new hardware features. Current
state-of-the-art software for the simulation of biological networks has so far been
developed using performance engineering practices, but a thorough analysis and
modeling of the computational and performance characteristics, especially in
the case of morphologically detailed neuron simulations, is lacking. Other
computational sciences have successfully used analytic performance engineering
and modeling methods to gain insight on the computational properties of
simulation kernels, aid developers in performance optimizations and eventually
drive co-design efforts, but to our knowledge a model-based performance
analysis of neuron simulations has not yet been conducted.
We present a detailed study of the shared-memory performance of
morphologically detailed neuron simulations based on the Execution-Cache-Memory
(ECM) performance model. We demonstrate that this model can deliver accurate
predictions of the runtime of almost all the kernels that constitute the neuron
models under investigation. The gained insight is used to identify the main
governing mechanisms underlying performance bottlenecks in the simulation. The
implications of this analysis on the optimization of neural simulation software
and eventually co-design of future hardware architectures are discussed. In
this sense, our work represents a valuable conceptual and quantitative
contribution to understanding the performance properties of biological network
simulations.
|
[
{
"created": "Wed, 16 Jan 2019 15:28:06 GMT",
"version": "v1"
}
] |
2020-06-25
|
[
[
"Cremonesi",
"Francesco",
""
],
[
"Hager",
"Georg",
""
],
[
"Wellein",
"Gerhard",
""
],
[
"Schürmann",
"Felix",
""
]
] |
Big science initiatives are trying to reconstruct and model the brain by attempting to simulate brain tissue at larger scales and with increasingly more biological detail than previously thought possible. The exponential growth of parallel computer performance has been supporting these developments, and at the same time maintainers of neuroscientific simulation code have strived to optimally and efficiently exploit new hardware features. Current state-of-the-art software for the simulation of biological networks has so far been developed using performance engineering practices, but a thorough analysis and modeling of the computational and performance characteristics, especially in the case of morphologically detailed neuron simulations, is lacking. Other computational sciences have successfully used analytic performance engineering and modeling methods to gain insight on the computational properties of simulation kernels, aid developers in performance optimizations and eventually drive co-design efforts, but to our knowledge a model-based performance analysis of neuron simulations has not yet been conducted. We present a detailed study of the shared-memory performance of morphologically detailed neuron simulations based on the Execution-Cache-Memory (ECM) performance model. We demonstrate that this model can deliver accurate predictions of the runtime of almost all the kernels that constitute the neuron models under investigation. The gained insight is used to identify the main governing mechanisms underlying performance bottlenecks in the simulation. The implications of this analysis on the optimization of neural simulation software and eventually co-design of future hardware architectures are discussed. In this sense, our work represents a valuable conceptual and quantitative contribution to understanding the performance properties of biological network simulations.
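The core ECM prediction itself fits in a few lines (the cycle counts below are made-up placeholders; real inputs come from static code analysis and machine models):

```python
def ecm_prediction(t_ol, t_nol, t_data):
    """Single-core runtime prediction (cycles per unit of work) in the
    Execution-Cache-Memory model: in-core work that overlaps with data
    transfers (t_ol) competes with the non-overlapping part (t_nol) plus
    the summed transfer times through the memory hierarchy (t_data)."""
    return max(t_ol, t_nol + sum(t_data))

# Toy usage: 4 overlapping cycles, 2 non-overlapping, L1-L2-L3-memory transfers.
print(ecm_prediction(t_ol=4, t_nol=2, t_data=[3, 3, 8]))  # 16 cycles
```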
|
1111.1093
|
Sabu Thampi m
|
Sabu M. Thampi, Ann Jisma Jacob
|
Securing Biometric Images using Reversible Watermarking
|
8 pages, 7 figures
|
International Journal of Image Processing (IJIP), Volume:
5,Issue:4, September/October 2011
| null | null |
cs.CV cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Biometric security is a fast-growing area. Protecting biometric data is very
important since it can be misused by attackers. There are different methods to
increase the security of biometric data, among which watermarking is widely
accepted. An important new development in this area is reversible
watermarking, in which the original image can be completely restored
and the watermark can be retrieved. But reversible watermarking in biometrics
is an understudied area. Reversible watermarking maintains high quality of
biometric data. This paper proposes Rotational Replacement of LSB as a
reversible watermarking scheme for biometric images. PSNR is the regular method
used for quality measurement of biometric data. In this paper we also show that
the SSIM index is a better alternative for effective quality assessment of
reversible watermarked biometric data by comparing with the well-known
reversible watermarking scheme using Difference Expansion.
|
[
{
"created": "Fri, 4 Nov 2011 10:50:45 GMT",
"version": "v1"
}
] |
2011-11-07
|
[
[
"Thampi",
"Sabu M.",
""
],
[
"Jacob",
"Ann Jisma",
""
]
] |
Biometric security is a fast-growing area. Protecting biometric data is very important since it can be misused by attackers. There are different methods to increase the security of biometric data, among which watermarking is widely accepted. An important new development in this area is reversible watermarking, in which the original image can be completely restored and the watermark can be retrieved. But reversible watermarking in biometrics is an understudied area. Reversible watermarking maintains high quality of biometric data. This paper proposes Rotational Replacement of LSB as a reversible watermarking scheme for biometric images. PSNR is the regular method used for quality measurement of biometric data. In this paper we also show that the SSIM index is a better alternative for effective quality assessment of reversible watermarked biometric data by comparing with the well-known reversible watermarking scheme using Difference Expansion.
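For reference, the SSIM index advocated here can be computed globally as follows (the usual definition applies SSIM over sliding windows and averages the local values, which this sketch omits):

```python
import numpy as np

def ssim_global(x, y, L=255, k1=0.01, k2=0.03):
    """Global SSIM between two images x and y with dynamic range L,
    computed over the whole image rather than local windows."""
    c1, c2 = (k1 * L) ** 2, (k2 * L) ** 2
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx**2 + my**2 + c1) * (x.var() + y.var() + c2))

# Toy usage: an LSB-level perturbation should leave SSIM very close to 1.
rng = np.random.default_rng(2)
img = rng.integers(0, 256, size=(64, 64)).astype(float)
marked = img + rng.integers(0, 2, size=img.shape)
print(ssim_global(img, marked))
```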
|
1807.08512
|
Muhammad Kamran Janjua
|
Alessandro Calefati, Muhammad Kamran Janjua, Shah Nawaz, Ignazio Gallo
|
Git Loss for Deep Face Recognition
|
12 pages. Accepted at BMVC2018
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Convolutional Neural Networks (CNNs) have been widely used in computer vision
tasks, such as face recognition and verification, and have achieved
state-of-the-art results due to their ability to capture discriminative deep
features. Conventionally, CNNs have been trained with softmax as supervision
signal to penalize the classification loss. In order to further enhance the
discriminative capability of deep features, we introduce a joint supervision
signal, Git loss, which leverages softmax and center loss functions. The aim
of our loss function is to minimize the intra-class variations as well as
maximize the inter-class distances. Such minimization and maximization of deep
features are considered ideal for the face recognition task. We perform
experiments on two popular face recognition benchmark datasets and show that
our proposed
loss function achieves maximum separability between deep face features of
different identities and achieves state-of-the-art accuracy on two major face
recognition benchmark datasets: Labeled Faces in the Wild (LFW) and YouTube
Faces (YTF). However, it should be noted that the major objective of Git loss
is to achieve maximum separability between deep features of divergent
identities.
|
[
{
"created": "Mon, 23 Jul 2018 10:20:29 GMT",
"version": "v1"
},
{
"created": "Tue, 24 Jul 2018 14:31:38 GMT",
"version": "v2"
},
{
"created": "Thu, 26 Jul 2018 10:21:28 GMT",
"version": "v3"
},
{
"created": "Sat, 28 Jul 2018 17:29:04 GMT",
"version": "v4"
}
] |
2018-07-31
|
[
[
"Calefati",
"Alessandro",
""
],
[
"Janjua",
"Muhammad Kamran",
""
],
[
"Nawaz",
"Shah",
""
],
[
"Gallo",
"Ignazio",
""
]
] |
Convolutional Neural Networks (CNNs) have been widely used in computer vision tasks, such as face recognition and verification, and have achieved state-of-the-art results due to their ability to capture discriminative deep features. Conventionally, CNNs have been trained with softmax as supervision signal to penalize the classification loss. In order to further enhance the discriminative capability of deep features, we introduce a joint supervision signal, Git loss, which leverages softmax and center loss functions. The aim of our loss function is to minimize the intra-class variations as well as maximize the inter-class distances. Such minimization and maximization of deep features are considered ideal for the face recognition task. We perform experiments on two popular face recognition benchmark datasets and show that our proposed loss function achieves maximum separability between deep face features of different identities and achieves state-of-the-art accuracy on two major face recognition benchmark datasets: Labeled Faces in the Wild (LFW) and YouTube Faces (YTF). However, it should be noted that the major objective of Git loss is to achieve maximum separability between deep features of divergent identities.
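A hedged sketch of the two auxiliary terms that supplement softmax (the 1/(1 + d^2) push term is one plausible form consistent with the stated aim; hyperparameters and the paper's exact formulation are assumptions):

```python
import numpy as np

def center_and_git_terms(features, labels, centers):
    """Compute a center-loss-style pull term (features toward their own
    class center) and a push term that grows when features sit close to
    wrong-class centers."""
    pull, push = 0.0, 0.0
    for x, y in zip(features, labels):
        for j, c in enumerate(centers):
            d2 = np.sum((x - c) ** 2)
            if j == y:
                pull += 0.5 * d2          # intra-class compactness
            else:
                push += 1.0 / (1.0 + d2)  # inter-class separation
    return pull, push

# Toy usage: 2-d embeddings of two classes with hypothetical centers.
feats = np.array([[0.9, 0.1], [0.1, 0.8]])
pull, push = center_and_git_terms(feats, [0, 1],
                                  np.array([[1.0, 0.0], [0.0, 1.0]]))
```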
|
1402.3547
|
Meirav Zehavi
|
Hadas Shachnai, Meirav Zehavi
|
Representative Families: A Unified Tradeoff-Based Approach
| null | null | null | null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Let $M=(E,{\cal I})$ be a matroid, and let $\cal S$ be a family of subsets of
size $p$ of $E$. A subfamily $\widehat{\cal S}\subseteq{\cal S}$ represents
${\cal S}$ if for every pair of sets $X\in{\cal S}$ and $Y\subseteq E\setminus
X$ such that $X\cup Y\in{\cal I}$, there is a set $\widehat{X}\in\widehat{\cal
S}$ disjoint from $Y$ such that $\widehat{X}\cup Y\in{\cal I}$. Fomin et al.
(Proc. ACM-SIAM Symposium on Discrete Algorithms, 2014) introduced a powerful
technique for fast computation of representative families for uniform matroids.
In this paper, we show that this technique leads to a unified approach for
substantially improving the running times of parameterized algorithms for some
classic problems. This includes, among others, $k$-Partial Cover, $k$-Internal
Out-Branching, and Long Directed Cycle. Our approach exploits an interesting
tradeoff between running time and the size of the representative families.
|
[
{
"created": "Fri, 14 Feb 2014 18:32:15 GMT",
"version": "v1"
},
{
"created": "Sat, 19 Apr 2014 17:46:10 GMT",
"version": "v2"
}
] |
2014-04-22
|
[
[
"Shachnai",
"Hadas",
""
],
[
"Zehavi",
"Meirav",
""
]
] |
Let $M=(E,{\cal I})$ be a matroid, and let $\cal S$ be a family of subsets of size $p$ of $E$. A subfamily $\widehat{\cal S}\subseteq{\cal S}$ represents ${\cal S}$ if for every pair of sets $X\in{\cal S}$ and $Y\subseteq E\setminus X$ such that $X\cup Y\in{\cal I}$, there is a set $\widehat{X}\in\widehat{\cal S}$ disjoint from $Y$ such that $\widehat{X}\cup Y\in{\cal I}$. Fomin et al. (Proc. ACM-SIAM Symposium on Discrete Algorithms, 2014) introduced a powerful technique for fast computation of representative families for uniform matroids. In this paper, we show that this technique leads to a unified approach for substantially improving the running times of parameterized algorithms for some classic problems. This includes, among others, $k$-Partial Cover, $k$-Internal Out-Branching, and Long Directed Cycle. Our approach exploits an interesting tradeoff between running time and the size of the representative families.
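For intuition, the representation condition can be checked by brute force in the uniform matroid, where independence of $X\cup Y$ reduces to disjointness plus a size bound (exponential time; this is the naive check, not the fast computation of Fomin et al.):

```python
from itertools import combinations

def q_represents(sub, fam, universe, q):
    """Brute-force check that `sub` q-represents `fam` in a uniform
    matroid: for every X in fam and every q-set Y disjoint from X,
    some set in `sub` must also be disjoint from Y."""
    for X in fam:
        for Y in map(set, combinations(universe - X, q)):
            if not any(Xhat.isdisjoint(Y) for Xhat in sub):
                return False
    return True

# Toy usage: does a 3-set subfamily 1-represent all 2-subsets of {0,...,3}?
universe = set(range(4))
fam = [set(c) for c in combinations(universe, 2)]
print(q_represents([{0, 1}, {2, 3}, {0, 2}], fam, universe, q=1))  # True
```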
|
1803.09867
|
Liang Lin
|
Keze Wang and Xiaopeng Yan and Dongyu Zhang and Lei Zhang and Liang
Lin
|
Towards Human-Machine Cooperation: Self-supervised Sample Mining for
Object Detection
|
We enabled to mine from unlabeled or partially labeled data to boost
object detection (Accepted by CVPR 2018) The source code is available at
http://kezewang.com/codes/SSM_CVPR.zip
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Though quite challenging, leveraging large-scale unlabeled or partially
labeled images in a cost-effective way has increasingly attracted interest for
its great importance to computer vision. To tackle this problem, many Active
Learning (AL) methods have been developed. However, these methods mainly define
their sample selection criteria within a single image context, leading to
suboptimal robustness and impractical solutions for large-scale object
detection. In this paper, aiming to remedy the drawbacks of existing AL
methods, we present a principled Self-supervised Sample Mining (SSM) process
accounting for the real challenges in object detection. Specifically, our SSM
process concentrates on automatically discovering and pseudo-labeling reliable
region proposals for enhancing the object detector via the introduced cross
image validation, i.e., pasting these proposals into different labeled images
to comprehensively measure their values under different image contexts. By
resorting to the SSM process, we propose a new AL framework for gradually
incorporating unlabeled or partially labeled data into the model learning while
minimizing the annotating effort of users. Extensive experiments on two public
benchmarks clearly demonstrate our proposed framework can achieve the
comparable performance to the state-of-the-art methods with significantly fewer
annotations.
|
[
{
"created": "Tue, 27 Mar 2018 03:06:51 GMT",
"version": "v1"
},
{
"created": "Thu, 24 May 2018 11:59:38 GMT",
"version": "v2"
}
] |
2018-05-25
|
[
[
"Wang",
"Keze",
""
],
[
"Yan",
"Xiaopeng",
""
],
[
"Zhang",
"Dongyu",
""
],
[
"Zhang",
"Lei",
""
],
[
"Lin",
"Liang",
""
]
] |
Though quite challenging, leveraging large-scale unlabeled or partially labeled images in a cost-effective way has increasingly attracted interest for its great importance to computer vision. To tackle this problem, many Active Learning (AL) methods have been developed. However, these methods mainly define their sample selection criteria within a single image context, leading to suboptimal robustness and impractical solutions for large-scale object detection. In this paper, aiming to remedy the drawbacks of existing AL methods, we present a principled Self-supervised Sample Mining (SSM) process accounting for the real challenges in object detection. Specifically, our SSM process concentrates on automatically discovering and pseudo-labeling reliable region proposals for enhancing the object detector via the introduced cross image validation, i.e., pasting these proposals into different labeled images to comprehensively measure their values under different image contexts. By resorting to the SSM process, we propose a new AL framework for gradually incorporating unlabeled or partially labeled data into the model learning while minimizing the annotating effort of users. Extensive experiments on two public benchmarks clearly demonstrate our proposed framework can achieve the comparable performance to the state-of-the-art methods with significantly fewer annotations.
|
1801.01715
|
Luca Baldesi
|
Luca Baldesi and Athina Markopoulou and Carter T. Butts
|
Spectral Graph Forge: Graph Generation Targeting Modularity
| null | null | null | null |
cs.SI physics.soc-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Community structure is an important property that captures inhomogeneities
common in large networks, and modularity is one of the most widely used metrics
for such community structure. In this paper, we introduce a principled
methodology, the Spectral Graph Forge, for generating random graphs that
preserve community structure from a real network of interest, in terms of
modularity. Our approach leverages the fact that the spectral structure of
matrix representations of a graph encodes global information about community
structure. The Spectral Graph Forge uses a low-rank approximation of the
modularity matrix to generate synthetic graphs that match a target modularity
within a user-selectable degree of accuracy, while allowing other aspects of
structure to vary. We show that the Spectral Graph Forge outperforms
state-of-the-art techniques in terms of accuracy in targeting the modularity
and randomness of the realizations, while also preserving other local
structural properties and node attributes. We discuss extensions of the
Spectral Graph Forge to target other properties beyond modularity, and its
applications to anonymization.
|
[
{
"created": "Fri, 5 Jan 2018 11:11:20 GMT",
"version": "v1"
}
] |
2018-01-08
|
[
[
"Baldesi",
"Luca",
""
],
[
"Markopoulou",
"Athina",
""
],
[
"Butts",
"Carter T.",
""
]
] |
Community structure is an important property that captures inhomogeneities common in large networks, and modularity is one of the most widely used metrics for such community structure. In this paper, we introduce a principled methodology, the Spectral Graph Forge, for generating random graphs that preserve community structure from a real network of interest, in terms of modularity. Our approach leverages the fact that the spectral structure of matrix representations of a graph encodes global information about community structure. The Spectral Graph Forge uses a low-rank approximation of the modularity matrix to generate synthetic graphs that match a target modularity within a user-selectable degree of accuracy, while allowing other aspects of structure to vary. We show that the Spectral Graph Forge outperforms state-of-the-art techniques in terms of accuracy in targeting the modularity and randomness of the realizations, while also preserving other local structural properties and node attributes. We discuss extensions of the Spectral Graph Forge to target other properties beyond modularity, and its applications to anonymization.
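A minimal sketch of the pipeline the abstract describes, with the low-rank step and the edge sampling spelled out (the clipping of probabilities and the choice of dominant eigenpairs are assumptions about details the abstract leaves open):

```python
import numpy as np

def spectral_graph_forge(A, k, seed=0):
    """Low-rank-approximate the modularity matrix B = A - d d^T / 2m with
    its k dominant eigenpairs, rebuild an expected adjacency matrix, and
    sample a synthetic undirected graph from it edge by edge."""
    rng = np.random.default_rng(seed)
    d = A.sum(axis=1)
    two_m = d.sum()
    B = A - np.outer(d, d) / two_m
    w, V = np.linalg.eigh(B)
    idx = np.argsort(np.abs(w))[-k:]                # keep k dominant eigenpairs
    Bk = (V[:, idx] * w[idx]) @ V[:, idx].T
    P = np.clip(Bk + np.outer(d, d) / two_m, 0, 1)  # expected adjacency
    synth = np.triu((rng.random(P.shape) < P).astype(int), 1)
    return synth + synth.T

# Toy usage: two triangles joined by a single bridge edge.
A = np.zeros((6, 6), dtype=int)
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1
G = spectral_graph_forge(A, k=2)
```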
|
2403.18845
|
Abdelghani MADDI
|
Abdelghani Maddi (GEMASS), Luis Miotti (CEPN)
|
On The Peer Review Reports: Does Size Matter?
|
arXiv admin note: substantial text overlap with arXiv:2309.02000
|
Scientometrics (2024)
|
10.1007/s11192-024-04977-6
| null |
cs.DL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Amidst the ever-expanding realm of scientific production and the
proliferation of predatory journals, the focus on peer review remains paramount
for scientometricians and sociologists of science. Despite this attention,
there is a notable scarcity of empirical investigations into the tangible
impact of peer review on publication quality. This study aims to address this
gap by conducting a comprehensive analysis of how peer review contributes to
the quality of scholarly publications, as measured by the citations they
receive. Utilizing an adjusted dataset comprising 57,482 publications from
Publons to Web of Science and employing the Raking Ratio method, our study
reveals intriguing insights. Specifically, our findings shed light on a nuanced
relationship between the length of reviewer reports and the subsequent
citations received by publications. Through a robust regression analysis, we
establish that, beginning from 947 words, the length of reviewer reports is
significantly associated with an increase in citations. These results not only
confirm the initial hypothesis that longer reports indicate requested
improvements, thereby enhancing the quality and visibility of articles, but
also underscore the importance of timely and comprehensive reviewer reports.
Furthermore, insights from Publons' data suggest that open access to reports
can influence reviewer behavior, encouraging more detailed reports. Beyond the
scholarly landscape, our findings prompt a reevaluation of the role of
reviewers, emphasizing the need to recognize and value this resource-intensive
yet underappreciated activity in institutional evaluations. Additionally, the
study sounds a cautionary note regarding the challenges faced by peer review in
the context of an increasing volume of submissions, potentially compromising
the vigilance of peers in swiftly assessing numerous articles.
|
[
{
"created": "Thu, 7 Mar 2024 08:54:26 GMT",
"version": "v1"
}
] |
2024-03-29
|
[
[
"Maddi",
"Abdelghani",
"",
"GEMASS"
],
[
"Miotti",
"Luis",
"",
"CEPN"
]
] |
Amidst the ever-expanding realm of scientific production and the proliferation of predatory journals, the focus on peer review remains paramount for scientometricians and sociologists of science. Despite this attention, there is a notable scarcity of empirical investigations into the tangible impact of peer review on publication quality. This study aims to address this gap by conducting a comprehensive analysis of how peer review contributes to the quality of scholarly publications, as measured by the citations they receive. Utilizing an adjusted dataset comprising 57,482 publications matched from Publons to the Web of Science and employing the Raking Ratio method, our study reveals intriguing insights. Specifically, our findings shed light on a nuanced relationship between the length of reviewer reports and the subsequent citations received by publications. Through a robust regression analysis, we establish that, beginning from 947 words, the length of reviewer reports is significantly associated with an increase in citations. These results not only confirm the initial hypothesis that longer reports indicate requested improvements, thereby enhancing the quality and visibility of articles, but also underscore the importance of timely and comprehensive reviewer reports. Furthermore, insights from Publons' data suggest that open access to reports can influence reviewer behavior, encouraging more detailed reports. Beyond the scholarly landscape, our findings prompt a reevaluation of the role of reviewers, emphasizing the need to recognize and value this resource-intensive yet underappreciated activity in institutional evaluations. Additionally, the study sounds a cautionary note regarding the challenges faced by peer review in the context of an increasing volume of submissions, potentially compromising the vigilance of peers in swiftly assessing numerous articles.
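For readers who want to reproduce the flavour of the regression described above, here is a minimal sketch; the file name, column names, and the choice of OLS with a 947-word threshold dummy are assumptions for illustration, not the authors' exact specification:

# Citations as a function of reviewer-report length, with a threshold term.
# File and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("publons_wos_matched.csv")            # assumed local file
df["long_report"] = (df["report_words"] >= 947).astype(int)
model = smf.ols("citations ~ report_words + long_report + C(field)", data=df)
print(model.fit().summary())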
|
1708.09221
|
Jonathan Klawitter
|
Jonathan Klawitter, Tamara Mchedlidze, Martin N\"ollenburg
|
Experimental Evaluation of Book Drawing Algorithms
|
Appears in the Proceedings of the 25th International Symposium on
Graph Drawing and Network Visualization (GD 2017)
| null | null | null |
cs.DS cs.CG cs.DM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A $k$-page book drawing of a graph $G=(V,E)$ consists of a linear ordering of
its vertices along a spine and an assignment of each edge to one of the $k$
pages, which are half-planes bounded by the spine. In a book drawing, two edges
cross if and only if they are assigned to the same page and their vertices
alternate along the spine. Crossing minimization in a $k$-page book drawing is
NP-hard, yet book drawings have multiple applications in visualization and
beyond. Therefore several heuristic book drawing algorithms exist, but there is
no broader comparative study on their relative performance. In this paper, we
propose a comprehensive benchmark set of challenging graph classes for book
drawing algorithms and provide an extensive experimental study of the
performance of existing book drawing algorithms.
|
[
{
"created": "Wed, 30 Aug 2017 11:35:20 GMT",
"version": "v1"
}
] |
2017-08-31
|
[
[
"Klawitter",
"Jonathan",
""
],
[
"Mchedlidze",
"Tamara",
""
],
[
"Nöllenburg",
"Martin",
""
]
] |
A $k$-page book drawing of a graph $G=(V,E)$ consists of a linear ordering of its vertices along a spine and an assignment of each edge to one of the $k$ pages, which are half-planes bounded by the spine. In a book drawing, two edges cross if and only if they are assigned to the same page and their vertices alternate along the spine. Crossing minimization in a $k$-page book drawing is NP-hard, yet book drawings have multiple applications in visualization and beyond. Therefore several heuristic book drawing algorithms exist, but there is no broader comparative study on their relative performance. In this paper, we propose a comprehensive benchmark set of challenging graph classes for book drawing algorithms and provide an extensive experimental study of the performance of existing book drawing algorithms.
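The crossing rule stated above (two edges cross iff they share a page and their endpoints alternate along the spine) translates directly into the objective such heuristics minimize; a minimal Python sketch:

from itertools import combinations

def book_crossings(order, pages):
    """Count crossings in a k-page book drawing (a minimal sketch).

    `order` maps each vertex to its position along the spine.
    `pages` maps each edge (u, v) to its page index.
    Two edges cross iff they share a page and their endpoints
    alternate along the spine.
    """
    crossings = 0
    for (e, pe), (f, pf) in combinations(pages.items(), 2):
        if pe != pf:
            continue
        a, b = sorted((order[e[0]], order[e[1]]))
        c, d = sorted((order[f[0]], order[f[1]]))
        if a < c < b < d or c < a < d < b:
            crossings += 1
    return crossings

# K4 on two pages: edges {0,2} and {1,3} share page 1 and alternate -> 1 crossing.
order = {v: v for v in range(4)}
pages = {(0, 1): 0, (1, 2): 0, (2, 3): 0, (0, 3): 0, (0, 2): 1, (1, 3): 1}
print(book_crossings(order, pages))  # 1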
|
2011.08733
|
Amruta Yelamanchili
|
Jagriti Agrawal and Amruta Yelamanchili and Steve Chien
|
Using Explainable Scheduling for the Mars 2020 Rover Mission
|
Submitted to the International Workshop of Explainable AI Planning
(XAIP) at the International Conference on Automated Planning and Scheduling
(ICAPS) 2020
| null | null | null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Understanding the reasoning behind the behavior of an automated scheduling
system is essential to ensure that it will be trusted and consequently used to
its full capabilities in critical applications. In cases where a scheduler
schedules activities in an invalid location, it is usually easy for the user to
infer the missing constraint by inspecting the schedule with the invalid
activity. If a scheduler fails to schedule
activities because constraints could not be satisfied, determining the cause
can be more challenging. In such cases it is important to understand which
constraints caused the activities to fail to be scheduled and how to alter
constraints to achieve the desired schedule. In this paper, we describe such a
scheduling system for NASA's Mars 2020 Perseverance Rover, as well as
Crosscheck, an explainable scheduling tool that explains the scheduler
behavior. The scheduling system and Crosscheck are the baseline for operational
use to schedule activities for the Mars 2020 rover. As we describe, the
scheduler generates a schedule given a set of activities and their constraints
and Crosscheck: (1) provides a visual representation of the generated schedule;
(2) analyzes and explains why activities failed to schedule given the
constraints provided; and (3) provides guidance on potential constraint
relaxations to enable the activities to schedule in future scheduler runs.
|
[
{
"created": "Tue, 17 Nov 2020 16:10:49 GMT",
"version": "v1"
}
] |
2020-11-18
|
[
[
"Agrawal",
"Jagriti",
""
],
[
"Yelamanchili",
"Amruta",
""
],
[
"Chien",
"Steve",
""
]
] |
Understanding the reasoning behind the behavior of an automated scheduling system is essential to ensure that it will be trusted and consequently used to its full capabilities in critical applications. In cases where a scheduler schedules activities in an invalid location, it is usually easy for the user to infer the missing constraint by inspecting the schedule with the invalid activity. If a scheduler fails to schedule activities because constraints could not be satisfied, determining the cause can be more challenging. In such cases it is important to understand which constraints caused the activities to fail to be scheduled and how to alter constraints to achieve the desired schedule. In this paper, we describe such a scheduling system for NASA's Mars 2020 Perseverance Rover, as well as Crosscheck, an explainable scheduling tool that explains the scheduler behavior. The scheduling system and Crosscheck are the baseline for operational use to schedule activities for the Mars 2020 rover. As we describe, the scheduler generates a schedule given a set of activities and their constraints and Crosscheck: (1) provides a visual representation of the generated schedule; (2) analyzes and explains why activities failed to schedule given the constraints provided; and (3) provides guidance on potential constraint relaxations to enable the activities to schedule in future scheduler runs.
|
2111.11103
|
Florian Fervers
|
Florian Fervers, Timo Breuer, Gregor Stachowiak, Sebastian Bullinger,
Christoph Bodensteiner, Michael Arens
|
Improving Semantic Image Segmentation via Label Fusion in Semantically
Textured Meshes
| null | null |
10.5220/0010841800003124
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Models for semantic segmentation require a large amount of hand-labeled
training data which is costly and time-consuming to produce. For this purpose,
we present a label fusion framework that is capable of improving semantic pixel
labels of video sequences in an unsupervised manner. We make use of a 3D mesh
representation of the environment and fuse the predictions of different frames
into a consistent representation using semantic mesh textures. Rendering the
semantic mesh using the original intrinsic and extrinsic camera parameters
yields a set of improved semantic segmentation images. Due to our optimized
CUDA implementation, we are able to exploit the entire $c$-dimensional
probability distribution of annotations over $c$ classes in an
uncertainty-aware manner. We evaluate our method on the Scannet dataset where
we improve annotations produced by the state-of-the-art segmentation network
ESANet from $52.05 \%$ to $58.25 \%$ pixel accuracy. We publish the source code
of our framework online to foster future research in this area
(\url{https://github.com/fferflo/semantic-meshes}). To the best of our
knowledge, this is the first publicly available label fusion framework for
semantic image segmentation based on meshes with semantic textures.
|
[
{
"created": "Mon, 22 Nov 2021 10:47:32 GMT",
"version": "v1"
}
] |
2022-02-25
|
[
[
"Fervers",
"Florian",
""
],
[
"Breuer",
"Timo",
""
],
[
"Stachowiak",
"Gregor",
""
],
[
"Bullinger",
"Sebastian",
""
],
[
"Bodensteiner",
"Christoph",
""
],
[
"Arens",
"Michael",
""
]
] |
Models for semantic segmentation require a large amount of hand-labeled training data which is costly and time-consuming to produce. For this purpose, we present a label fusion framework that is capable of improving semantic pixel labels of video sequences in an unsupervised manner. We make use of a 3D mesh representation of the environment and fuse the predictions of different frames into a consistent representation using semantic mesh textures. Rendering the semantic mesh using the original intrinsic and extrinsic camera parameters yields a set of improved semantic segmentation images. Due to our optimized CUDA implementation, we are able to exploit the entire $c$-dimensional probability distribution of annotations over $c$ classes in an uncertainty-aware manner. We evaluate our method on the ScanNet dataset where we improve annotations produced by the state-of-the-art segmentation network ESANet from $52.05 \%$ to $58.25 \%$ pixel accuracy. We publish the source code of our framework online to foster future research in this area (\url{https://github.com/fferflo/semantic-meshes}). To the best of our knowledge, this is the first publicly available label fusion framework for semantic image segmentation based on meshes with semantic textures.
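One standard way to realise the fusion step described above is Bayesian fusion of per-frame class distributions per texel; the sketch below assumes independent per-frame predictions and is only illustrative of the idea, not the paper's CUDA implementation:

import numpy as np

def fuse_texel_labels(frame_probs, eps=1e-8):
    """Fuse per-frame class distributions for one texel (illustrative only).

    `frame_probs` is an (n_frames, c) array of class probabilities predicted
    for the pixels observing this texel. Assuming independent observations,
    Bayesian fusion multiplies the distributions, i.e. sums log-probabilities.
    """
    log_p = np.log(np.asarray(frame_probs) + eps).sum(axis=0)
    p = np.exp(log_p - log_p.max())      # stabilise before normalising
    return p / p.sum()

# Two noisy frame predictions over c = 3 classes agree on class 0 after fusion.
print(fuse_texel_labels([[0.6, 0.3, 0.1], [0.5, 0.2, 0.3]]))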
|
1910.08753
|
Zhenzhong Wang
|
Zhenzhong Wang, Min Jiang, Xing Gao, Liang Feng, Weizhen Hu, Kay Chen
Tan
|
Evolutionary Dynamic Multi-objective Optimization Via Regression
Transfer Learning
| null | null | null | null |
cs.NE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Dynamic multi-objective optimization problems (DMOPs) remain challenging to
solve because their conflicting objective functions change over time. In
recent years, transfer learning has proven to be an effective approach to
solving DMOPs. In this paper, a novel transfer-learning-based dynamic
multi-objective optimization algorithm (DMOA), called regression transfer
learning prediction based DMOA (RTLP-DMOA), is proposed. The algorithm
aims to generate an excellent initial population to accelerate the evolutionary
process and improve the evolutionary performance in solving DMOPs. When an
environmental change is detected, a regression transfer learning prediction
model is constructed by reusing the historical population, which can predict
objective values. Then, with the assistance of this prediction model, some
high-quality solutions with better predicted objective values are selected as
the initial population, which can improve the performance of the evolutionary
process. We compare the proposed algorithm with three state-of-the-art
algorithms on benchmark functions. Experimental results indicate that the
proposed algorithm can significantly enhance the performance of static
multi-objective optimization algorithms and is competitive in convergence and
diversity.
|
[
{
"created": "Sat, 19 Oct 2019 11:29:52 GMT",
"version": "v1"
},
{
"created": "Tue, 22 Oct 2019 04:35:43 GMT",
"version": "v2"
}
] |
2019-10-23
|
[
[
"Wang",
"Zhenzhong",
""
],
[
"Jiang",
"Min",
""
],
[
"Gao",
"Xing",
""
],
[
"Feng",
"Liang",
""
],
[
"Hu",
"Weizhen",
""
],
[
"Tan",
"Kay Chen",
""
]
] |
Dynamic multi-objective optimization problems (DMOPs) remain challenging to solve because their conflicting objective functions change over time. In recent years, transfer learning has proven to be an effective approach to solving DMOPs. In this paper, a novel transfer-learning-based dynamic multi-objective optimization algorithm (DMOA), called regression transfer learning prediction based DMOA (RTLP-DMOA), is proposed. The algorithm aims to generate an excellent initial population to accelerate the evolutionary process and improve the evolutionary performance in solving DMOPs. When an environmental change is detected, a regression transfer learning prediction model is constructed by reusing the historical population, which can predict objective values. Then, with the assistance of this prediction model, some high-quality solutions with better predicted objective values are selected as the initial population, which can improve the performance of the evolutionary process. We compare the proposed algorithm with three state-of-the-art algorithms on benchmark functions. Experimental results indicate that the proposed algorithm can significantly enhance the performance of static multi-objective optimization algorithms and is competitive in convergence and diversity.
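A rough sketch of the warm-start idea (function and variable names are hypothetical, and the paper's regressor and scalarisation may differ):

import numpy as np
from sklearn.linear_model import Ridge

def warm_start_population(X_hist, f_new_hist, candidates, pop_size):
    """Select an initial population via regression transfer (a rough sketch).

    X_hist      : decision vectors already evaluated in the new environment.
    f_new_hist  : their (scalarised) objective values in the new environment.
    candidates  : numpy array of unevaluated candidate solutions, e.g. the
                  old population plus random samples.
    The regressor predicts objective values cheaply; the best-predicted
    candidates seed the evolutionary run after the change.
    """
    model = Ridge(alpha=1.0).fit(X_hist, f_new_hist)
    scores = model.predict(candidates)
    best = np.argsort(scores)[:pop_size]   # minimisation assumed
    return candidates[best]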
|
2306.10944
|
Dong Xing
|
Dong Xing, Pengjie Gu, Qian Zheng, Xinrun Wang, Shanqi Liu, Longtao
Zheng, Bo An, Gang Pan
|
Controlling Type Confounding in Ad Hoc Teamwork with Instance-wise
Teammate Feedback Rectification
|
Accepted by ICML 2023
| null | null | null |
cs.MA
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Ad hoc teamwork requires an agent to cooperate with unknown teammates without
prior coordination. Many works propose to abstract teammate instances into
high-level representations of types and then pre-train the best response for
each type. However, most of them do not consider the distribution of teammate
instances within a type. This could expose the agent to the hidden risk of
\emph{type confounding}. In the worst case, the best response for an abstract
teammate type could be the worst response for all specific instances of that
type. This work addresses the issue from the lens of causal inference. We first
theoretically demonstrate that this phenomenon is due to the spurious
correlation brought by uncontrolled teammate distribution. Then, we propose our
solution, CTCAT, which disentangles such correlation through an instance-wise
teammate feedback rectification. This operation reweights the interaction of
teammate instances within a shared type to reduce the influence of type
confounding. The effect of CTCAT is evaluated in multiple domains, including
classic ad hoc teamwork tasks and real-world scenarios. Results show that CTCAT
is robust to the influence of type confounding, a practical issue that directly
threatens the robustness of our trained agents but was unnoticed in previous
works.
|
[
{
"created": "Mon, 19 Jun 2023 14:03:39 GMT",
"version": "v1"
}
] |
2023-06-21
|
[
[
"Xing",
"Dong",
""
],
[
"Gu",
"Pengjie",
""
],
[
"Zheng",
"Qian",
""
],
[
"Wang",
"Xinrun",
""
],
[
"Liu",
"Shanqi",
""
],
[
"Zheng",
"Longtao",
""
],
[
"An",
"Bo",
""
],
[
"Pan",
"Gang",
""
]
] |
Ad hoc teamwork requires an agent to cooperate with unknown teammates without prior coordination. Many works propose to abstract teammate instances into high-level representations of types and then pre-train the best response for each type. However, most of them do not consider the distribution of teammate instances within a type. This could expose the agent to the hidden risk of \emph{type confounding}. In the worst case, the best response for an abstract teammate type could be the worst response for all specific instances of that type. This work addresses the issue from the lens of causal inference. We first theoretically demonstrate that this phenomenon is due to the spurious correlation brought by uncontrolled teammate distribution. Then, we propose our solution, CTCAT, which disentangles such correlation through an instance-wise teammate feedback rectification. This operation reweights the interaction of teammate instances within a shared type to reduce the influence of type confounding. The effect of CTCAT is evaluated in multiple domains, including classic ad hoc teamwork tasks and real-world scenarios. Results show that CTCAT is robust to the influence of type confounding, a practical issue that directly threatens the robustness of our trained agents but was unnoticed in previous works.
|
1805.09786
|
\c{C}a\u{g}lar G\"ul\c{c}ehre
|
Caglar Gulcehre, Misha Denil, Mateusz Malinowski, Ali Razavi, Razvan
Pascanu, Karl Moritz Hermann, Peter Battaglia, Victor Bapst, David Raposo,
Adam Santoro, Nando de Freitas
|
Hyperbolic Attention Networks
| null | null | null | null |
cs.NE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce hyperbolic attention networks to endow neural networks with
enough capacity to match the complexity of data with hierarchical and power-law
structure. A few recent approaches have successfully demonstrated the benefits
of imposing hyperbolic geometry on the parameters of shallow networks. We
extend this line of work by imposing hyperbolic geometry on the activations of
neural networks. This allows us to exploit hyperbolic geometry to reason about
embeddings produced by deep networks. We achieve this by re-expressing the
ubiquitous mechanism of soft attention in terms of operations defined for
hyperboloid and Klein models. Our method shows improvements in terms of
generalization on neural machine translation, learning on graphs and visual
question answering tasks while keeping the neural representations compact.
|
[
{
"created": "Thu, 24 May 2018 17:11:35 GMT",
"version": "v1"
}
] |
2018-05-25
|
[
[
"Gulcehre",
"Caglar",
""
],
[
"Denil",
"Misha",
""
],
[
"Malinowski",
"Mateusz",
""
],
[
"Razavi",
"Ali",
""
],
[
"Pascanu",
"Razvan",
""
],
[
"Hermann",
"Karl Moritz",
""
],
[
"Battaglia",
"Peter",
""
],
[
"Bapst",
"Victor",
""
],
[
"Raposo",
"David",
""
],
[
"Santoro",
"Adam",
""
],
[
"de Freitas",
"Nando",
""
]
] |
We introduce hyperbolic attention networks to endow neural networks with enough capacity to match the complexity of data with hierarchical and power-law structure. A few recent approaches have successfully demonstrated the benefits of imposing hyperbolic geometry on the parameters of shallow networks. We extend this line of work by imposing hyperbolic geometry on the activations of neural networks. This allows us to exploit hyperbolic geometry to reason about embeddings produced by deep networks. We achieve this by re-expressing the ubiquitous mechanism of soft attention in terms of operations defined for hyperboloid and Klein models. Our method shows improvements in terms of generalization on neural machine translation, learning on graphs and visual question answering tasks while keeping the neural representations compact.
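To make the hyperboloid-model attention concrete, here is a small numpy sketch: Euclidean features are lifted onto the hyperboloid, pairwise geodesic distances d(x, y) = arccosh(-<x, y>_L) are computed with the Lorentzian inner product, and attention weights decay with distance. This is a simplified reading, not the paper's exact parameterisation:

import numpy as np

def lorentz_inner(x, y):
    """Lorentzian inner product <x, y>_L = -x0*y0 + sum_i xi*yi."""
    return -x[..., 0] * y[..., 0] + (x[..., 1:] * y[..., 1:]).sum(-1)

def lift_to_hyperboloid(v):
    """Map Euclidean vectors v onto the hyperboloid <x, x>_L = -1."""
    x0 = np.sqrt(1.0 + (v ** 2).sum(-1, keepdims=True))
    return np.concatenate([x0, v], axis=-1)

def hyperbolic_attention(queries, keys, values, beta=1.0):
    """Distance-based attention in the hyperboloid model (a sketch)."""
    q = lift_to_hyperboloid(queries)[:, None, :]   # (nq, 1, d+1)
    k = lift_to_hyperboloid(keys)[None, :, :]      # (1, nk, d+1)
    dist = np.arccosh(np.clip(-lorentz_inner(q, k), 1.0, None))
    w = np.exp(-beta * dist)                       # closer points attend more
    w /= w.sum(axis=1, keepdims=True)
    return w @ values

rng = np.random.default_rng(0)
out = hyperbolic_attention(rng.normal(size=(2, 4)),
                           rng.normal(size=(5, 4)),
                           rng.normal(size=(5, 3)))
print(out.shape)  # (2, 3)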
|
1806.03743
|
Sabrina Mielke
|
Ryan Cotterell, Sabrina J. Mielke, Jason Eisner, Brian Roark
|
Are All Languages Equally Hard to Language-Model?
|
Published at NAACL 2018
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
For general modeling methods applied to diverse languages, a natural question
is: how well should we expect our models to work on languages with differing
typological profiles? In this work, we develop an evaluation framework for fair
cross-linguistic comparison of language models, using translated text so that
all models are asked to predict approximately the same information. We then
conduct a study on 21 languages, demonstrating that in some languages, the
textual expression of the information is harder to predict with both $n$-gram
and LSTM language models. We show complex inflectional morphology to be a cause
of performance differences among languages.
|
[
{
"created": "Sun, 10 Jun 2018 23:24:33 GMT",
"version": "v1"
},
{
"created": "Tue, 25 Feb 2020 18:30:29 GMT",
"version": "v2"
}
] |
2020-02-26
|
[
[
"Cotterell",
"Ryan",
""
],
[
"Mielke",
"Sabrina J.",
""
],
[
"Eisner",
"Jason",
""
],
[
"Roark",
"Brian",
""
]
] |
For general modeling methods applied to diverse languages, a natural question is: how well should we expect our models to work on languages with differing typological profiles? In this work, we develop an evaluation framework for fair cross-linguistic comparison of language models, using translated text so that all models are asked to predict approximately the same information. We then conduct a study on 21 languages, demonstrating that in some languages, the textual expression of the information is harder to predict with both $n$-gram and LSTM language models. We show complex inflectional morphology to be a cause of performance differences among languages.
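The evaluation idea, comparing how predictable translations of the same content are, can be illustrated with a simple add-alpha character n-gram model standing in for the paper's n-gram and LSTM models (a minimal sketch, not the paper's setup):

import math
from collections import Counter

def bits_per_char(train, test, n=3, alpha=0.1):
    """Add-alpha smoothed character n-gram surprisal, comparable across
    translations of the same text since the content is held fixed."""
    pad = " " * (n - 1)
    grams, ctx = Counter(), Counter()
    s = pad + train
    for i in range(len(s) - n + 1):
        grams[s[i:i + n]] += 1
        ctx[s[i:i + n - 1]] += 1
    V = len(set(train)) + 1          # alphabet size plus an unknown symbol
    t = pad + test
    bits = 0.0
    for i in range(len(t) - n + 1):
        p = (grams[t[i:i + n]] + alpha) / (ctx[t[i:i + n - 1]] + alpha * V)
        bits -= math.log2(p)
    return bits / max(len(test), 1)

Running this on aligned translations of one corpus gives per-language bits per character; higher values indicate a harder-to-predict textual expression of the same information.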
|
1802.06894
|
Kejun Huang
|
Kejun Huang, Xiao Fu, Nicholas D. Sidiropoulos
|
Learning Hidden Markov Models from Pairwise Co-occurrences with
Application to Topic Modeling
|
ICML 2018
| null | null | null |
cs.CL cs.LG eess.SP stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a new algorithm for identifying the transition and emission
probabilities of a hidden Markov model (HMM) from the emitted data.
Expectation-maximization becomes computationally prohibitive for long
observation records, which are often required for identification. The new
algorithm is particularly suitable for cases where the available sample size is
large enough to accurately estimate second-order output probabilities, but not
higher-order ones. We show that if one is only able to obtain a reliable
estimate of the pairwise co-occurrence probabilities of the emissions, it is
still possible to uniquely identify the HMM if the emission probability is
\emph{sufficiently scattered}. We apply our method to hidden topic Markov
modeling, and demonstrate that we can learn topics with higher quality if
documents are modeled as observations of HMMs sharing the same emission (topic)
probability, compared to the simple but widely used bag-of-words model.
|
[
{
"created": "Mon, 19 Feb 2018 22:33:56 GMT",
"version": "v1"
},
{
"created": "Mon, 18 Jun 2018 21:15:46 GMT",
"version": "v2"
}
] |
2018-06-20
|
[
[
"Huang",
"Kejun",
""
],
[
"Fu",
"Xiao",
""
],
[
"Sidiropoulos",
"Nicholas D.",
""
]
] |
We present a new algorithm for identifying the transition and emission probabilities of a hidden Markov model (HMM) from the emitted data. Expectation-maximization becomes computationally prohibitive for long observation records, which are often required for identification. The new algorithm is particularly suitable for cases where the available sample size is large enough to accurately estimate second-order output probabilities, but not higher-order ones. We show that if one is only able to obtain a reliable estimate of the pairwise co-occurrence probabilities of the emissions, it is still possible to uniquely identify the HMM if the emission probability is \emph{sufficiently scattered}. We apply our method to hidden topic Markov modeling, and demonstrate that we can learn topics with higher quality if documents are modeled as observations of HMMs sharing the same emission (topic) probability, compared to the simple but widely used bag-of-words model.
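The key object above is the pairwise co-occurrence matrix Omega with Omega[i, j] = P(y_t = i, y_{t+1} = j), which for an HMM with stationary distribution pi, transition matrix T, and emission matrix M factors as Omega = M^T diag(pi) T M. A sketch of the empirical estimate and the analytic form the algorithm matches it against:

import numpy as np

def pairwise_cooccurrence(seq, n_obs):
    """Empirical estimate of Omega[i, j] = P(y_t = i, y_{t+1} = j)."""
    Omega = np.zeros((n_obs, n_obs))
    for a, b in zip(seq[:-1], seq[1:]):
        Omega[a, b] += 1
    return Omega / Omega.sum()

def analytic_cooccurrence(pi, T, M):
    """Omega = M^T diag(pi) T M, with M[s, i] = P(obs i | state s)."""
    return M.T @ np.diag(pi) @ T @ M

The identification result says that factoring the empirical Omega recovers T and M uniquely when the emission probability is sufficiently scattered; the factorisation step itself is the paper's contribution and is not reproduced here.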
|
2102.05007
|
Matan Eyal
|
Matan Eyal, Asaf Amrami, Hillel Taub-Tabib, Yoav Goldberg
|
Bootstrapping Relation Extractors using Syntactic Search by Examples
|
EACL 2021
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
The advent of neural networks in NLP brought with it substantial improvements
in supervised relation extraction. However, obtaining a sufficient quantity of
training data remains a key challenge. In this work we propose a process for
bootstrapping training datasets which can be performed quickly by
non-NLP-experts. We take advantage of search engines over syntactic graphs
(such as Shlain et al. (2020)) which expose a friendly by-example syntax. We
use these to obtain positive examples by searching for sentences that are
syntactically similar to user input examples. We apply this technique to
relations from TACRED and DocRED and show that the resulting models are
competitive with models trained on manually annotated data and on data obtained
from distant supervision. The models also outperform models trained using NLG
data augmentation techniques. Extending the search-based approach with the NLG
method further improves the results.
|
[
{
"created": "Tue, 9 Feb 2021 18:17:59 GMT",
"version": "v1"
}
] |
2021-02-10
|
[
[
"Eyal",
"Matan",
""
],
[
"Amrami",
"Asaf",
""
],
[
"Taub-Tabib",
"Hillel",
""
],
[
"Goldberg",
"Yoav",
""
]
] |
The advent of neural networks in NLP brought with it substantial improvements in supervised relation extraction. However, obtaining a sufficient quantity of training data remains a key challenge. In this work we propose a process for bootstrapping training datasets which can be performed quickly by non-NLP-experts. We take advantage of search engines over syntactic graphs (such as Shlain et al. (2020)) which expose a friendly by-example syntax. We use these to obtain positive examples by searching for sentences that are syntactically similar to user input examples. We apply this technique to relations from TACRED and DocRED and show that the resulting models are competitive with models trained on manually annotated data and on data obtained from distant supervision. The models also outperform models trained using NLG data augmentation techniques. Extending the search-based approach with the NLG method further improves the results.
|
2306.14790
|
Tianchen Yang
|
Tianchen Yang, Qifan Zhang, Zhaoyang Sun, and Yubo Hou
|
Automatic Assessment of Divergent Thinking in Chinese Language with
TransDis: A Transformer-Based Language Model Approach
| null | null |
10.3758/s13428-023-02313-z
| null |
cs.CL stat.AP
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Language models have been increasingly popular for automatic creativity
assessment, generating semantic distances to objectively measure the quality of
creative ideas. However, there is currently a lack of an automatic assessment
system for evaluating creative ideas in the Chinese language. To address this
gap, we developed TransDis, a scoring system using transformer-based language
models, capable of providing valid originality (quality) and flexibility
(variety) scores for Alternative Uses Task (AUT) responses in Chinese. Study 1
demonstrated that the latent model-rated originality factor, comprised of three
transformer-based models, strongly predicted human originality ratings, and the
model-rated flexibility strongly correlated with human flexibility ratings as
well. Criterion validity analyses indicated that model-rated originality and
flexibility positively correlated to other creativity measures, demonstrating
similar validity to human ratings. Studies 2 and 3 showed that TransDis effectively
distinguished participants instructed to provide creative vs. common uses
(Study 2) and participants instructed to generate ideas in a flexible vs.
persistent way (Study 3). Our findings suggest that TransDis can be a reliable
and low-cost tool for measuring idea originality and flexibility in Chinese
language, potentially paving the way for automatic creativity assessment in
other languages. We offer an open platform to compute originality and
flexibility for AUT responses in Chinese and over 50 other languages
(https://osf.io/59jv2/).
|
[
{
"created": "Mon, 26 Jun 2023 15:48:05 GMT",
"version": "v1"
},
{
"created": "Thu, 19 Oct 2023 00:55:28 GMT",
"version": "v2"
},
{
"created": "Sun, 24 Dec 2023 15:08:59 GMT",
"version": "v3"
}
] |
2023-12-27
|
[
[
"Yang",
"Tianchen",
""
],
[
"Zhang",
"Qifan",
""
],
[
"Sun",
"Zhaoyang",
""
],
[
"Hou",
"Yubo",
""
]
] |
Language models have been increasingly popular for automatic creativity assessment, generating semantic distances to objectively measure the quality of creative ideas. However, there is currently a lack of an automatic assessment system for evaluating creative ideas in the Chinese language. To address this gap, we developed TransDis, a scoring system using transformer-based language models, capable of providing valid originality (quality) and flexibility (variety) scores for Alternative Uses Task (AUT) responses in Chinese. Study 1 demonstrated that the latent model-rated originality factor, comprised of three transformer-based models, strongly predicted human originality ratings, and the model-rated flexibility strongly correlated with human flexibility ratings as well. Criterion validity analyses indicated that model-rated originality and flexibility positively correlated to other creativity measures, demonstrating similar validity to human ratings. Studies 2 and 3 showed that TransDis effectively distinguished participants instructed to provide creative vs. common uses (Study 2) and participants instructed to generate ideas in a flexible vs. persistent way (Study 3). Our findings suggest that TransDis can be a reliable and low-cost tool for measuring idea originality and flexibility in Chinese language, potentially paving the way for automatic creativity assessment in other languages. We offer an open platform to compute originality and flexibility for AUT responses in Chinese and over 50 other languages (https://osf.io/59jv2/).
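Semantic-distance originality scoring of the kind TransDis builds on can be sketched generically: embed the AUT prompt and each response with a transformer encoder, then score originality as one minus cosine similarity. The embedding step is assumed to exist upstream; this is not the TransDis pipeline itself:

import numpy as np

def originality_scores(prompt_vec, response_vecs):
    """Semantic-distance originality (a generic sketch): 1 - cosine
    similarity between the prompt embedding and each response embedding."""
    p = prompt_vec / np.linalg.norm(prompt_vec)
    R = response_vecs / np.linalg.norm(response_vecs, axis=1, keepdims=True)
    return 1.0 - R @ p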
|
1107.4893
|
Zeev Nutov
|
Nachshon Cohen and Zeev Nutov
|
Approximating minimum-power edge-multicovers
| null | null | null | null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Given a graph with edge costs, the {\em power} of a node is the maximum cost
of an edge incident to it, and the power of a graph is the sum of the powers of
its nodes. Motivated by applications in wireless networks, we consider the
following fundamental problem in wireless network design. Given a graph
$G=(V,E)$ with edge costs and degree bounds $\{r(v):v \in V\}$, the {\sf
Minimum-Power Edge-Multi-Cover} ({\sf MPEMC}) problem is to find a
minimum-power subgraph $J$ of $G$ such that the degree of every node $v$ in $J$
is at least $r(v)$. We give two approximation algorithms for {\sf MPEMC}, with
ratios $O(\log k)$ and $k+1/2$, where $k=\max_{v \in V} r(v)$ is the maximum
degree bound. This improves the previous ratios $O(\log n)$ and $k+1$, and
implies ratios $O(\log k)$ for the {\sf Minimum-Power $k$-Outconnected
Subgraph} and $O(\log k \log \frac{n}{n-k})$ for the {\sf Minimum-Power
$k$-Connected Subgraph} problems; the latter is the currently best known ratio
for the min-cost version of the problem.
|
[
{
"created": "Mon, 25 Jul 2011 11:07:16 GMT",
"version": "v1"
}
] |
2011-07-26
|
[
[
"Cohen",
"Nachshon",
""
],
[
"Nutov",
"Zeev",
""
]
] |
Given a graph with edge costs, the {\em power} of a node is the maximum cost of an edge incident to it, and the power of a graph is the sum of the powers of its nodes. Motivated by applications in wireless networks, we consider the following fundamental problem in wireless network design. Given a graph $G=(V,E)$ with edge costs and degree bounds $\{r(v):v \in V\}$, the {\sf Minimum-Power Edge-Multi-Cover} ({\sf MPEMC}) problem is to find a minimum-power subgraph $J$ of $G$ such that the degree of every node $v$ in $J$ is at least $r(v)$. We give two approximation algorithms for {\sf MPEMC}, with ratios $O(\log k)$ and $k+1/2$, where $k=\max_{v \in V} r(v)$ is the maximum degree bound. This improves the previous ratios $O(\log n)$ and $k+1$, and implies ratios $O(\log k)$ for the {\sf Minimum-Power $k$-Outconnected Subgraph} and $O(\log k \log \frac{n}{n-k})$ for the {\sf Minimum-Power $k$-Connected Subgraph} problems; the latter is the currently best known ratio for the min-cost version of the problem.
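The power objective defined above is simple to compute; a minimal sketch:

def graph_power(nodes, edges, cost):
    """Power of a subgraph: each node pays the max cost of its incident
    edges, and the graph's power is the sum of these node powers."""
    power = {v: 0 for v in nodes}
    for u, v in edges:
        power[u] = max(power[u], cost[(u, v)])
        power[v] = max(power[v], cost[(u, v)])
    return sum(power.values())

# A path a-b-c with costs 1 and 3: powers are a=1, b=3, c=3 -> total 7.
print(graph_power("abc", [("a", "b"), ("b", "c")],
                  {("a", "b"): 1, ("b", "c"): 3}))  # 7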
|
1602.08132
|
Zhenhao Ge
|
Zhenhao Ge, Sudhendu R. Sharma, Mark J. T. Smith
|
Adaptive Frequency Cepstral Coefficients for Word Mispronunciation
Detection
|
4th International Congress on Image and Signal Processing (CISP) 2011
| null |
10.1109/CISP.2011.6100685
| null |
cs.SD cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Systems based on automatic speech recognition (ASR) technology can provide
important functionality in computer-assisted language learning applications.
This is a young but growing area of research motivated by the large number of
students studying foreign languages. Here we propose a Hidden Markov Model
(HMM)-based method to detect mispronunciations. Exploiting the specific dialog
scripting employed in language learning software, HMMs are trained for
different pronunciations. New adaptive features have been developed and
obtained through an adaptive warping of the frequency scale prior to computing
the cepstral coefficients. The optimization criterion used for the warping
function is to maximize separation of two major groups of pronunciations
(native and non-native) in terms of classification rate. Experimental results
show that the adaptive frequency scale yields a better coefficient
representation leading to higher classification rates in comparison with
conventional HMMs using Mel-frequency cepstral coefficients.
|
[
{
"created": "Thu, 25 Feb 2016 22:17:31 GMT",
"version": "v1"
}
] |
2016-02-29
|
[
[
"Ge",
"Zhenhao",
""
],
[
"Sharma",
"Sudhendu R.",
""
],
[
"Smith",
"Mark J. T.",
""
]
] |
Systems based on automatic speech recognition (ASR) technology can provide important functionality in computer-assisted language learning applications. This is a young but growing area of research motivated by the large number of students studying foreign languages. Here we propose a Hidden Markov Model (HMM)-based method to detect mispronunciations. Exploiting the specific dialog scripting employed in language learning software, HMMs are trained for different pronunciations. New adaptive features have been developed and obtained through an adaptive warping of the frequency scale prior to computing the cepstral coefficients. The optimization criterion used for the warping function is to maximize separation of two major groups of pronunciations (native and non-native) in terms of classification rate. Experimental results show that the adaptive frequency scale yields a better coefficient representation leading to higher classification rates in comparison with conventional HMMs using Mel-frequency cepstral coefficients.
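A generic sketch of warping the frequency scale before computing cepstral coefficients; the power-law warp and all constants here are illustrative stand-ins for the paper's optimised warping function:

import numpy as np
from scipy.fft import dct

def warped_cepstra(power_spectrum, sr, n_filters=20, n_ceps=12, alpha=1.0):
    """Cepstral coefficients on a warped frequency axis (a generic sketch).

    `alpha` controls the warp f -> f_max * (f / f_max) ** alpha; alpha = 1 is
    linear, and tuning it stands in for the paper's optimisation, which
    maximises native/non-native separation.
    """
    n_fft = (len(power_spectrum) - 1) * 2
    f_max = sr / 2.0
    warped = np.linspace(0, f_max, n_filters + 2)            # warped domain
    centres = f_max * (warped / f_max) ** (1.0 / alpha)      # back to Hz
    bins = np.floor((n_fft + 1) * centres / sr).astype(int)
    fb = np.zeros((n_filters, len(power_spectrum)))
    for m in range(1, n_filters + 1):                        # triangular filters
        l, c, r = bins[m - 1], bins[m], bins[m + 1]
        fb[m - 1, l:c] = np.linspace(0, 1, c - l, endpoint=False)
        fb[m - 1, c:r] = np.linspace(1, 0, r - c, endpoint=False)
    log_energy = np.log(fb @ power_spectrum + 1e-10)
    return dct(log_energy, type=2, norm="ortho")[:n_ceps]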
|
2205.12850
|
Alexander Lindermayr
|
Giulia Bernardini, Alexander Lindermayr, Alberto Marchetti-Spaccamela,
Nicole Megow, Leen Stougie, Michelle Sweering
|
A Universal Error Measure for Input Predictions Applied to Online Graph
Problems
|
To appear in NeurIPS 2022
| null | null | null |
cs.DS cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce a novel measure for quantifying the error in input predictions.
The error is based on a minimum-cost hyperedge cover in a suitably defined
hypergraph and provides a general template which we apply to online graph
problems. The measure captures errors due to absent predicted requests as well
as unpredicted actual requests; hence, predicted and actual inputs can be of
arbitrary size. We achieve refined performance guarantees for previously
studied network design problems in the online-list model, such as Steiner tree
and facility location. Further, we initiate the study of learning-augmented
algorithms for online routing problems, such as the online traveling
salesperson problem and the online dial-a-ride problem, where (transportation)
requests arrive over time (online-time model). We provide a general algorithmic
framework and we give error-dependent performance bounds that improve upon
known worst-case barriers, when given accurate predictions, at the cost of
slightly increased worst-case bounds when given predictions of arbitrary
quality.
|
[
{
"created": "Wed, 25 May 2022 15:24:03 GMT",
"version": "v1"
},
{
"created": "Mon, 10 Oct 2022 06:53:25 GMT",
"version": "v2"
}
] |
2022-10-11
|
[
[
"Bernardini",
"Giulia",
""
],
[
"Lindermayr",
"Alexander",
""
],
[
"Marchetti-Spaccamela",
"Alberto",
""
],
[
"Megow",
"Nicole",
""
],
[
"Stougie",
"Leen",
""
],
[
"Sweering",
"Michelle",
""
]
] |
We introduce a novel measure for quantifying the error in input predictions. The error is based on a minimum-cost hyperedge cover in a suitably defined hypergraph and provides a general template which we apply to online graph problems. The measure captures errors due to absent predicted requests as well as unpredicted actual requests; hence, predicted and actual inputs can be of arbitrary size. We achieve refined performance guarantees for previously studied network design problems in the online-list model, such as Steiner tree and facility location. Further, we initiate the study of learning-augmented algorithms for online routing problems, such as the online traveling salesperson problem and the online dial-a-ride problem, where (transportation) requests arrive over time (online-time model). We provide a general algorithmic framework and we give error-dependent performance bounds that improve upon known worst-case barriers, when given accurate predictions, at the cost of slightly increased worst-case bounds when given predictions of arbitrary quality.
|
1907.09578
|
Andrey Zhmoginov
|
Andrey Zhmoginov, Ian Fischer, Mark Sandler
|
Information-Bottleneck Approach to Salient Region Discovery
| null | null | null | null |
cs.CV cs.IT cs.LG math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a new method for learning image attention masks in a
semi-supervised setting based on the Information Bottleneck principle. Provided
with a set of labeled images, the mask generation model is minimizing mutual
information between the input and the masked image while maximizing the mutual
information between the same masked image and the image label. In contrast with
other approaches, our attention model produces a Boolean rather than a
continuous mask, entirely concealing the information in masked-out pixels.
Using a set of synthetic datasets based on MNIST and CIFAR10 and the SVHN
datasets, we demonstrate that our method can successfully attend to features
known to define the image class.
|
[
{
"created": "Mon, 22 Jul 2019 21:13:30 GMT",
"version": "v1"
},
{
"created": "Fri, 14 Feb 2020 22:14:54 GMT",
"version": "v2"
}
] |
2020-02-18
|
[
[
"Zhmoginov",
"Andrey",
""
],
[
"Fischer",
"Ian",
""
],
[
"Sandler",
"Mark",
""
]
] |
We propose a new method for learning image attention masks in a semi-supervised setting based on the Information Bottleneck principle. Provided with a set of labeled images, the mask generation model is minimizing mutual information between the input and the masked image while maximizing the mutual information between the same masked image and the image label. In contrast with other approaches, our attention model produces a Boolean rather than a continuous mask, entirely concealing the information in masked-out pixels. Using a set of synthetic datasets based on MNIST and CIFAR10 and the SVHN datasets, we demonstrate that our method can successfully attend to features known to define the image class.
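The objective can be approximated with standard proxies: a classification loss for the I(masked image; Y) term and an expected mask size penalty for the I(X; masked image) term, since unmasked pixels leak input bits. The PyTorch sketch below is one such proxy formulation, not the paper's exact variational bound:

import torch.nn.functional as F

def ib_mask_loss(logits, labels, mask, beta=0.5):
    """Information-bottleneck-style objective for attention masks (a sketch).

    `logits` come from a classifier applied to the masked image, `mask` is
    the (relaxed) Boolean mask; the mean mask size is a crude compression
    proxy and `beta` trades it off against label information.
    """
    label_term = F.cross_entropy(logits, labels)   # proxy for -I(masked; Y)
    compression = mask.float().mean()              # proxy for I(X; masked)
    return label_term + beta * compression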
|
2404.02699
|
Yu He
|
Zihan Yao, Yu He, Tianyu Qi and Ming Li
|
Scalable Model Editing via Customized Expert Networks
|
Accepted by COLM2024
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Addressing the issues of hallucinations and outdated knowledge in large
language models is critical for their reliable application. Model Editing
presents a promising avenue for mitigating these challenges in a cost-effective
manner. However, existing methods often suffer from unsatisfactory
generalization and unintended effects on non-edited samples. To overcome these
limitations, we introduce a novel approach: Scalable Model Editing via
Customized Expert Networks (SCEN), which is a two-stage continuous training
paradigm. Specifically, in the first stage, we train lightweight expert
networks individually for each piece of knowledge that needs to be updated.
Subsequently, we train a corresponding indexing neuron for each expert to
control the activation state of that expert. We conducted a series of
experiments on the ZsRE and Hallucination benchmarks by tuning the advanced
open-source LLM, Llama2, achieving state-of-the-art results compared to current
mainstream methods. Our code is available at
https://github.com/TAL-auroraX/SCEN.
|
[
{
"created": "Wed, 3 Apr 2024 12:57:19 GMT",
"version": "v1"
},
{
"created": "Thu, 8 Aug 2024 13:10:50 GMT",
"version": "v2"
}
] |
2024-08-09
|
[
[
"Yao",
"Zihan",
""
],
[
"He",
"Yu",
""
],
[
"Qi",
"Tianyu",
""
],
[
"Li",
"Ming",
""
]
] |
Addressing the issues of hallucinations and outdated knowledge in large language models is critical for their reliable application. Model Editing presents a promising avenue for mitigating these challenges in a cost-effective manner. However, existing methods often suffer from unsatisfactory generalization and unintended effects on non-edited samples. To overcome these limitations, we introduce a novel approach: Scalable Model Editing via Customized Expert Networks (SCEN), which is a two-stage continuous training paradigm. Specifically, in the first stage, we train lightweight expert networks individually for each piece of knowledge that needs to be updated. Subsequently, we train a corresponding indexing neuron for each expert to control the activation state of that expert. We conducted a series of experiments on the ZsRE and Hallucination benchmarks by tuning the advanced open-source LLM, Llama2, achieving state-of-the-art results compared to current mainstream methods. Our code is available at https://github.com/TAL-auroraX/SCEN.
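An illustrative reading of the two-stage design in PyTorch (stage 1 trains the expert, stage 2 the indexing neuron); the layer sizes and the residual wiring are assumptions, not the released code:

import torch
import torch.nn as nn

class GatedExpert(nn.Module):
    """One customised expert plus its indexing neuron (illustrative only).

    The expert is a lightweight bottleneck MLP holding one piece of updated
    knowledge; the indexing neuron is a scalar gate trained afterwards to
    activate only on inputs related to that edit.
    """
    def __init__(self, d_model, d_bottleneck=16):
        super().__init__()
        self.expert = nn.Sequential(
            nn.Linear(d_model, d_bottleneck), nn.ReLU(),
            nn.Linear(d_bottleneck, d_model),
        )
        self.index = nn.Linear(d_model, 1)   # stage-2 indexing neuron

    def forward(self, h):
        gate = torch.sigmoid(self.index(h))  # ~1 on edited knowledge, else ~0
        return h + gate * self.expert(h)     # residual correction when gated on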
|
2103.17269
|
Michael Niemeyer
|
Michael Niemeyer, Andreas Geiger
|
CAMPARI: Camera-Aware Decomposed Generative Neural Radiance Fields
| null | null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Tremendous progress in deep generative models has led to photorealistic image
synthesis. While achieving compelling results, most approaches operate in the
two-dimensional image domain, ignoring the three-dimensional nature of our
world. Several recent works therefore propose generative models which are
3D-aware, i.e., scenes are modeled in 3D and then rendered differentiably to
the image plane. This leads to impressive 3D consistency, but incorporating
such a bias comes at a price: the camera needs to be modeled as well. Current
approaches assume fixed intrinsics and a predefined prior over camera pose
ranges. As a result, parameter tuning is typically required for real-world
data, and results degrade if the data distribution is not matched. Our key
hypothesis is that learning a camera generator jointly with the image generator
leads to a more principled approach to 3D-aware image synthesis. Further, we
propose to decompose the scene into a background and foreground model, leading
to more efficient and disentangled scene representations. While training from
raw, unposed image collections, we learn a 3D- and camera-aware generative
model which faithfully recovers not only the image but also the camera data
distribution. At test time, our model generates images with explicit control
over the camera as well as the shape and appearance of the scene.
|
[
{
"created": "Wed, 31 Mar 2021 17:59:24 GMT",
"version": "v1"
}
] |
2021-04-01
|
[
[
"Niemeyer",
"Michael",
""
],
[
"Geiger",
"Andreas",
""
]
] |
Tremendous progress in deep generative models has led to photorealistic image synthesis. While achieving compelling results, most approaches operate in the two-dimensional image domain, ignoring the three-dimensional nature of our world. Several recent works therefore propose generative models which are 3D-aware, i.e., scenes are modeled in 3D and then rendered differentiably to the image plane. This leads to impressive 3D consistency, but incorporating such a bias comes at a price: the camera needs to be modeled as well. Current approaches assume fixed intrinsics and a predefined prior over camera pose ranges. As a result, parameter tuning is typically required for real-world data, and results degrade if the data distribution is not matched. Our key hypothesis is that learning a camera generator jointly with the image generator leads to a more principled approach to 3D-aware image synthesis. Further, we propose to decompose the scene into a background and foreground model, leading to more efficient and disentangled scene representations. While training from raw, unposed image collections, we learn a 3D- and camera-aware generative model which faithfully recovers not only the image but also the camera data distribution. At test time, our model generates images with explicit control over the camera as well as the shape and appearance of the scene.
|
2111.09047
|
Julien Berger
|
Julien Berger, Clemence Legros, Madina Abdykarim
|
Dimensionless formulation and similarity to assess the main phenomena of
heat and mass transfer in building porous material
| null | null |
10.1016/j.jobe.2020.101849
| null |
cs.CE physics.app-ph
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Within the environmental context, several tools based on simulations have
been proposed to analyze the physical phenomena of heat and mass transfer in
porous materials. However, it is still an open challenge to propose tools that
do not require performing computations to capture the dominant processes. Thus,
this article explores the advantages of a dimensionless analysis obtained by
scaling the governing equations of heat and mass transfer. The proposed
methodology introduces dimensionless numbers and their nonlinear distortions.
This investigation highlights the preponderant phenomena, making it possible to
\emph{(i)} compare different categories of materials, \emph{(ii)} evaluate the
competition between heat and mass transfer for each material, or \emph{(iii)}
describe the transfer in multi-layered wall configurations. It also permits
defining hygrothermal kinetic, geometric, and dynamic similarities among different
physical materials. Equivalent systems can be characterized in the framework of
experimental or wall designs. Three cases are presented for similarity studies
in terms of \emph{(i)} equivalent material length, \emph{(ii)} time of heat and
mass transfer and \emph{(iii)} experimental configurations. All these
advantages are illustrated in the given article considering $49$ building
materials separated in $7$ categories.
|
[
{
"created": "Wed, 17 Nov 2021 11:43:03 GMT",
"version": "v1"
}
] |
2021-11-18
|
[
[
"Berger",
"Julien",
""
],
[
"Legros",
"Clemence",
""
],
[
"Abdykarim",
"Madina",
""
]
] |
Within the environmental context, several tools based on simulations have been proposed to analyze the physical phenomena of heat and mass transfer in porous materials. However, it is still an open challenge to propose tools that do not require performing computations to capture the dominant processes. Thus, this article explores the advantages of a dimensionless analysis obtained by scaling the governing equations of heat and mass transfer. The proposed methodology introduces dimensionless numbers and their nonlinear distortions. This investigation highlights the preponderant phenomena, making it possible to \emph{(i)} compare different categories of materials, \emph{(ii)} evaluate the competition between heat and mass transfer for each material, or \emph{(iii)} describe the transfer in multi-layered wall configurations. It also permits defining hygrothermal kinetic, geometric, and dynamic similarities among different physical materials. Equivalent systems can be characterized in the framework of experimental or wall designs. Three cases are presented for similarity studies in terms of \emph{(i)} equivalent material length, \emph{(ii)} time of heat and mass transfer and \emph{(iii)} experimental configurations. All these advantages are illustrated in the given article considering $49$ building materials separated in $7$ categories.
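As a concrete instance of the similarity reasoning above, the Fourier number Fo = a t / L^2 lets two walls with different thicknesses and diffusivities be declared kinetically similar when their Fo match; the numeric values below are illustrative, not taken from the paper:

def fourier_number(diffusivity, time, length):
    """Fo = a t / L^2: dimensionless time of diffusive transfer through a
    layer of thickness L with diffusivity a."""
    return diffusivity * time / length ** 2

# Two walls with (nearly) equal Fo are kinetically similar: a thin, slow
# material can mimic a thick, fast one over the same clock time.
fo_a = fourier_number(7e-7, 3600.0, 0.20)    # ~0.063
fo_b = fourier_number(3e-7, 3600.0, 0.131)   # ~0.063 -> similar behaviour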
|
2303.07655
|
Kourosh Darvish
|
Kourosh Darvish, Serena Ivaldi, Daniele Pucci
|
Simultaneous Action Recognition and Human Whole-Body Motion and Dynamics
Prediction from Wearable Sensors
| null | null |
10.1109/Humanoids53995.2022.10000122
| null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
This paper presents a novel approach to simultaneously solve the problems of
human activity recognition and whole-body motion and dynamics prediction for
real-time applications. Starting from the dynamics of human motion and motor
system theory, the notion of mixture of experts from deep learning has been
extended to address this problem. In the proposed approach, experts are
modelled as a sequence-to-sequence recurrent neural network (RNN)
architecture. Experiments show the results of 66-DoF real-world human motion
prediction and action recognition during different tasks like walking and
rotating. The code associated with this paper is available at:
\url{github.com/ami-iit/paper_darvish_2022_humanoids_action-kindyn-predicition}
|
[
{
"created": "Tue, 14 Mar 2023 06:52:41 GMT",
"version": "v1"
}
] |
2023-03-15
|
[
[
"Darvish",
"Kourosh",
""
],
[
"Ivaldi",
"Serena",
""
],
[
"Pucci",
"Daniele",
""
]
] |
This paper presents a novel approach to simultaneously solve the problems of human activity recognition and whole-body motion and dynamics prediction for real-time applications. Starting from the dynamics of human motion and motor system theory, the notion of mixture of experts from deep learning has been extended to address this problem. In the proposed approach, experts are modelled as a sequence-to-sequence recurrent neural network (RNN) architecture. Experiments show the results of 66-DoF real-world human motion prediction and action recognition during different tasks like walking and rotating. The code associated with this paper is available at: \url{github.com/ami-iit/paper_darvish_2022_humanoids_action-kindyn-predicition}
|
1508.02050
|
Tahani Almanie
|
Tahani Almanie, Rsha Mirza and Elizabeth Lor
|
Crime Prediction Based On Crime Types And Using Spatial And Temporal
Criminal Hotspots
|
19 pages, 18 figures, 7 tables
|
International Journal of Data Mining & Knowledge Management
Process (IJDKP) Vol.5, No.4, July 2015
|
10.5121/ijdkp.2015.5401
| null |
cs.AI cs.CY cs.DB
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper focuses on finding spatial and temporal criminal hotspots. It
analyses two different real-world crime datasets for Denver, CO and Los
Angeles, CA and provides a comparison between the two datasets through a
statistical analysis supported by several graphs. Then, it clarifies how we
applied the Apriori algorithm to produce interesting frequent patterns for
criminal hotspots. In addition, the paper shows how we used a Decision Tree
classifier and a Naive Bayesian classifier in order to predict potential crime
types. To further analyse the crime datasets, the paper introduces an analysis
that combines our findings on the Denver crime dataset with its demographic
information in order to capture the factors that might affect the safety of
neighborhoods. The results of this solution could be used to raise awareness
regarding dangerous locations and to help agencies predict future crimes
in a specific location within a particular time.
|
[
{
"created": "Sun, 9 Aug 2015 17:15:56 GMT",
"version": "v1"
}
] |
2015-08-11
|
[
[
"Almanie",
"Tahani",
""
],
[
"Mirza",
"Rsha",
""
],
[
"Lor",
"Elizabeth",
""
]
] |
This paper focuses on finding spatial and temporal criminal hotspots. It analyses two different real-world crime datasets for Denver, CO and Los Angeles, CA and provides a comparison between the two datasets through a statistical analysis supported by several graphs. Then, it clarifies how we applied the Apriori algorithm to produce interesting frequent patterns for criminal hotspots. In addition, the paper shows how we used a Decision Tree classifier and a Naive Bayesian classifier in order to predict potential crime types. To further analyse the crime datasets, the paper introduces an analysis that combines our findings on the Denver crime dataset with its demographic information in order to capture the factors that might affect the safety of neighborhoods. The results of this solution could be used to raise awareness regarding dangerous locations and to help agencies predict future crimes in a specific location within a particular time.
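A minimal sketch of the classification step described above using scikit-learn; the file and column names are hypothetical:

# Predicting crime type from spatial and temporal features with Naive Bayes.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import CategoricalNB
from sklearn.preprocessing import OrdinalEncoder

df = pd.read_csv("denver_crimes.csv")          # assumed local file
X = OrdinalEncoder(dtype=int).fit_transform(df[["neighborhood", "weekday", "hour"]])
y = df["crime_type"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = CategoricalNB().fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))

Swapping CategoricalNB for sklearn's DecisionTreeClassifier reproduces the paper's other classifier with the same two lines changed.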
|
1502.00979
|
Yuming Jiang
|
Fengyou Sun and Yuming Jiang
|
Further Properties of Wireless Channel Capacity
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Future wireless communication calls for exploration of more efficient use of
wireless channel capacity to meet the increasing demand for higher data rates and
lower latency. However, while the ergodic capacity and instantaneous capacity of
a wireless channel have been extensively studied, they are in many cases not
sufficient for use in assessing if data transmission over the channel meets the
quality of service (QoS) requirements. To address this limitation, we advocate
a set of wireless channel capacity concepts, namely "cumulative capacity",
"maximum cumulative capacity", "minimum cumulative capacity", and "range of
cumulative capacity", and for each, study its properties by taking into
consideration the impact of the underlying dependence structure of the
corresponding stochastic process. Specifically, their cumulative distribution
functions (CDFs) are investigated extensively, where copula is adopted to
express the dependence structures. Results considering both generic and
specific dependence structures are derived. In particular, in addition to
i.i.d., a specially investigated dependence structure is comonotonicity, i.e.,
the time series of wireless channel capacity are increasing functions of a
common random variable. Appealingly, copula can serve as a unifying technique
for obtaining results under various dependence assumptions, e.g. i.i.d. and
Markov dependence, which are widely seen in stochastic network calculus.
Moreover, some other characterizations of cumulative capacity are also studied,
including moment generating function, Mellin transform, and stochastic service
curve. With these properties, we believe QoS assessment of data transmission
over the channel can be further performed, e.g. by applying analytical
techniques and results of the stochastic network calculus theory.
|
[
{
"created": "Sat, 31 Jan 2015 13:10:57 GMT",
"version": "v1"
},
{
"created": "Mon, 4 Apr 2016 20:09:46 GMT",
"version": "v2"
}
] |
2016-04-06
|
[
[
"Sun",
"Fengyou",
""
],
[
"Jiang",
"Yuming",
""
]
] |
Future wireless communication calls for exploration of more efficient use of wireless channel capacity to meet the increasing demand for higher data rates and lower latency. However, while the ergodic capacity and instantaneous capacity of a wireless channel have been extensively studied, they are in many cases not sufficient for use in assessing if data transmission over the channel meets the quality of service (QoS) requirements. To address this limitation, we advocate a set of wireless channel capacity concepts, namely "cumulative capacity", "maximum cumulative capacity", "minimum cumulative capacity", and "range of cumulative capacity", and for each, study its properties by taking into consideration the impact of the underlying dependence structure of the corresponding stochastic process. Specifically, their cumulative distribution functions (CDFs) are investigated extensively, where copula is adopted to express the dependence structures. Results considering both generic and specific dependence structures are derived. In particular, in addition to i.i.d., a specially investigated dependence structure is comonotonicity, i.e., the time series of wireless channel capacity are increasing functions of a common random variable. Appealingly, copula can serve as a unifying technique for obtaining results under various dependence assumptions, e.g. i.i.d. and Markov dependence, which are widely seen in stochastic network calculus. Moreover, some other characterizations of cumulative capacity are also studied, including moment generating function, Mellin transform, and stochastic service curve. With these properties, we believe QoS assessment of data transmission over the channel can be further performed, e.g. by applying analytical techniques and results of the stochastic network calculus theory.
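The cumulative capacity S(t) = sum over i <= t of log2(1 + SNR * |h_i|^2) and its CDF are straightforward to explore by Monte Carlo under the i.i.d. assumption, one of the dependence structures the paper treats (copulas generalise beyond it); a minimal sketch:

import numpy as np

def cumulative_capacity_samples(t, snr_db=10.0, n_runs=10000, seed=0):
    """Monte Carlo samples of S(t) under i.i.d. Rayleigh fading, where the
    channel gain |h|^2 is Exp(1)-distributed (illustrative assumption)."""
    rng = np.random.default_rng(seed)
    snr = 10 ** (snr_db / 10.0)
    gains = rng.exponential(scale=1.0, size=(n_runs, t))  # |h|^2 ~ Exp(1)
    return np.log2(1.0 + snr * gains).sum(axis=1)

S = cumulative_capacity_samples(t=20)
print("P(S(20) <= 60 bits):", (S <= 60).mean())   # one point of the empirical CDF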
|