| id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | orig_abstract | versions | update_date | authors_parsed | abstract |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2007.02325
|
Shuiqiao Yang
|
Jianlong Zhou, Hamad Zogan, Shuiqiao Yang, Shoaib Jameel, Guandong Xu,
Fang Chen
|
Detecting Community Depression Dynamics Due to COVID-19 Pandemic in
Australia
| null | null | null | null |
cs.SI cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The recent COVID-19 pandemic has caused unprecedented impact across the
globe. We have also witnessed millions of people with increased mental health
issues, such as depression, stress, worry, fear, disgust, sadness, and anxiety,
which have become major public health concerns during this severe health
crisis. Depression, for instance, is one of the most common mental health
issues according to the World Health Organisation (WHO).
Depression can cause serious emotional, behavioural and physical health
problems, with significant personal and social costs. This paper studies
community depression dynamics due to the COVID-19 pandemic through
user-generated content on Twitter. A new approach based on
multi-modal features from tweets and Term Frequency-Inverse Document Frequency
(TF-IDF) is proposed to build depression classification models. Multi-modal
features capture depression cues from emotion, topic and domain-specific
perspectives. We study the problem using recently scraped tweets from Twitter
users from the state of New South Wales in Australia. Our novel
classification model is capable of extracting depression polarities which may
be affected by COVID-19 and related events during the pandemic period. The
results show that people became more depressed after the outbreak of COVID-19.
The measures implemented by the government, such as the state lockdown, also
increased depression levels. Further analysis at the Local Government Area
(LGA) level found that community depression levels differed across LGAs. Such
granular analysis of depression dynamics not only helps authorities such as
government departments take corresponding actions more objectively in specific
regions where necessary, but also lets users perceive the dynamics of
depression over time.
|
[
{
"created": "Sun, 5 Jul 2020 12:55:34 GMT",
"version": "v1"
}
] |
2020-07-07
|
[
[
"Zhou",
"Jianlong",
""
],
[
"Zogan",
"Hamad",
""
],
[
"Yang",
"Shuiqiao",
""
],
[
"Jameel",
"Shoaib",
""
],
[
"Xu",
"Guandong",
""
],
[
"Chen",
"Fang",
""
]
] |
|
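The TF-IDF feature extraction named in the first abstract can be sketched as follows. This is a minimal illustration, not the authors' exact pipeline: the example tweets are made up, and the use of scikit-learn's `TfidfVectorizer` is an assumption.

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# Hypothetical example tweets; the paper's NSW Twitter corpus is not reproduced here.
tweets = [
    "feeling anxious about the lockdown again",
    "lovely walk in the park today",
    "so tired and worried about everything",
]

# TF-IDF turns each tweet into a sparse, weighted term vector;
# such vectors can then feed a depression classifier.
vectorizer = TfidfVectorizer(lowercase=True, stop_words="english")
X = vectorizer.fit_transform(tweets)

print(X.shape)  # (number of tweets, vocabulary size)
```

In the paper these lexical features are combined with multi-modal emotion, topic, and domain-specific cues before classification.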
2407.03511
|
Oleksandr Kuznetsov
|
Oleksandr Kuznetsov, Anton Yezhov, Vladyslav Yusiuk, and Kateryna
Kuznetsova
|
Scalable Zero-Knowledge Proofs for Verifying Cryptographic Hashing in
Blockchain Applications
| null | null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Zero-knowledge proofs (ZKPs) have emerged as a promising solution to address
the scalability challenges in modern blockchain systems. This study proposes a
methodology for generating and verifying ZKPs to ensure the computational
integrity of cryptographic hashing, specifically focusing on the SHA-256
algorithm. By leveraging the Plonky2 framework, which implements the PLONK
protocol with the FRI commitment scheme, we demonstrate the efficiency and
scalability of our approach for both random data and real data blocks from the
NEAR blockchain. The experimental results show consistent performance across
different data sizes and types, with the time required for proof generation and
verification remaining within acceptable limits. The generated circuits and
proofs maintain manageable sizes, even for real-world data blocks with a large
number of transactions. The proposed methodology contributes to the development
of secure and trustworthy blockchain systems, where the integrity of
computations can be verified without revealing the underlying data. Further
research is needed to assess the applicability of the approach to other
cryptographic primitives and to evaluate its performance in more complex
real-world scenarios.
|
[
{
"created": "Wed, 3 Jul 2024 21:19:01 GMT",
"version": "v1"
}
] |
2024-07-08
|
[
[
"Kuznetsov",
"Oleksandr",
""
],
[
"Yezhov",
"Anton",
""
],
[
"Yusiuk",
"Vladyslav",
""
],
[
"Kuznetsova",
"Kateryna",
""
]
] |
|
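The computation whose integrity the ZKP attests is ordinary SHA-256 hashing, shown below with Python's standard library. The proof generation itself (via Plonky2) is out of scope here; the data block is an arbitrary stand-in.

```python
import hashlib

# A stand-in data block; the paper's real inputs are NEAR blockchain blocks.
block = b"example transaction data"

# The prover's claim is: "I know `block` such that SHA-256(block) = digest",
# established without revealing `block` itself.
digest = hashlib.sha256(block).hexdigest()

print(len(digest))  # 64 hex characters = 256 bits
```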
2305.04050
|
Bar Karov
|
Bar Karov and Moni Naor
|
New Algorithms and Applications for Risk-Limiting Audits
|
A shorter version of this paper appears in the Proceeding of the 4th
Annual Symposium on Foundations of Responsible Computing, FORC 2023
| null | null | null |
cs.CY cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Risk-limiting audits (RLAs) are a significant tool in increasing confidence
in the accuracy of elections. They consist of randomized algorithms which check
that an election's vote tally, as reported by a vote tabulation system,
corresponds to the correct candidates winning. If an initial vote count leads
to the wrong election winner, an RLA guarantees to identify the error with high
probability over its own randomness. These audits operate by sequentially
sampling and examining ballots until they can either confirm the reported
winner or identify the true winner.
The first part of this work suggests a new generic method, called
"Batchcomp", for converting classical (ballot-level) RLAs into ones that
operate on batches. As a concrete application of the suggested method, we
develop the first ballot-level RLA for the Israeli Knesset elections, and
convert it to one which operates on batches. We ran the suggested "Batchcomp"
procedure on the results of the 22nd, 23rd, and 24th Knesset elections, both
with and without errors.
The second part of this work suggests a new use-case for RLAs: verifying that
a population census leads to the correct allocation of political power to a
nation's districts or federal-states. We present an adaptation of ALPHA, an
existing RLA method, to a method which applies to censuses. Our census-RLA is
applicable in nations where parliament seats are allocated to geographical
regions in proportion to their population according to a certain class of
functions (highest averages). It relies on data from both the census and an
additional procedure, already conducted in many countries today, called a
post-enumeration survey.
|
[
{
"created": "Sat, 6 May 2023 13:34:39 GMT",
"version": "v1"
}
] |
2023-05-09
|
[
[
"Karov",
"Bar",
""
],
[
"Naor",
"Moni",
""
]
] |
|
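The core loop of an RLA, sequentially sampling ballots and checking them against the reported outcome, can be caricatured as below. This toy version merely estimates the reported winner's share from a growing sample; it is not a risk-limiting test such as ALPHA or Batchcomp, and the ballot counts are invented.

```python
import random

random.seed(1)

# Hypothetical cast ballots: 1 = vote for the reported winner, 0 = other.
ballots = [1] * 560 + [0] * 440
random.shuffle(ballots)

# Sequentially sample ballots and track the running share of the reported
# winner; a real RLA replaces this with a formal risk calculation that
# decides when to stop or escalate to a full hand count.
seen, winner_votes = 0, 0
for ballot in ballots[:200]:
    seen += 1
    winner_votes += ballot

share = winner_votes / seen
print(round(share, 2))
```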
2308.00538
|
Lala Shakti Swarup Ray
|
Lala Shakti Swarup Ray, Vitor Fortes Rey, Bo Zhou, Sungho Suh, Paul
Lukowicz
|
PressureTransferNet: Human Attribute Guided Dynamic Ground Pressure
Profile Transfer using 3D simulated Pressure Maps
|
Activity and Behavior Computing 2023
| null | null | null |
cs.CV cs.AI cs.GR eess.IV
|
http://creativecommons.org/licenses/by/4.0/
|
We propose PressureTransferNet, a novel method for Human Activity Recognition
(HAR) using ground pressure information. Our approach generates body-specific
dynamic ground pressure profiles for specific activities by leveraging existing
pressure data from different individuals. PressureTransferNet is an
encoder-decoder model taking a source pressure map and a target human attribute
vector as inputs, producing a new pressure map reflecting the target attribute.
To train the model, we use a sensor simulation to create a diverse dataset with
various human attributes and pressure profiles. Evaluation on a real-world
dataset shows its effectiveness in accurately transferring human attributes to
ground pressure profiles across different scenarios. We visually confirm the
fidelity of the synthesized pressure shapes using a physics-based deep learning
model and achieve a binary R-square value of 0.79 on areas with ground contact.
Validation through classification with F1 score (0.911$\pm$0.015) on physical
pressure mat data demonstrates the correctness of the synthesized pressure
maps, making our method valuable for data augmentation, denoising, sensor
simulation, and anomaly detection. Applications span sports science,
rehabilitation, and biomechanics, contributing to the development of HAR
systems.
|
[
{
"created": "Tue, 1 Aug 2023 13:31:25 GMT",
"version": "v1"
}
] |
2023-08-02
|
[
[
"Ray",
"Lala Shakti Swarup",
""
],
[
"Rey",
"Vitor Fortes",
""
],
[
"Zhou",
"Bo",
""
],
[
"Suh",
"Sungho",
""
],
[
"Lukowicz",
"Paul",
""
]
] |
|
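The "binary R-square on areas with ground contact" reported above can be computed as an R² restricted to a contact mask. The sketch below uses toy arrays in place of real pressure maps; the threshold and data are illustrative assumptions.

```python
import numpy as np

# Toy reference and synthesized pressure maps (the paper uses real mat data).
rng = np.random.default_rng(0)
reference = rng.random((8, 8))
synthesized = reference + 0.1 * rng.standard_normal((8, 8))

# Restrict the comparison to pixels with ground contact (hypothetical threshold).
contact = reference > 0.2
y, y_hat = reference[contact], synthesized[contact]

# Standard coefficient of determination over the masked pixels.
ss_res = np.sum((y - y_hat) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
print(round(float(r2), 3))
```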
2401.03428
|
Yuheng Cheng
|
Yuheng Cheng, Ceyao Zhang, Zhengwen Zhang, Xiangrui Meng, Sirui Hong,
Wenhao Li, Zihao Wang, Zekai Wang, Feng Yin, Junhua Zhao, Xiuqiang He
|
Exploring Large Language Model based Intelligent Agents: Definitions,
Methods, and Prospects
| null | null | null | null |
cs.AI cs.MA
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Intelligent agents stand out as a potential path toward artificial general
intelligence (AGI), and researchers have therefore dedicated significant effort
to diverse implementations of them. Benefiting from recent progress in large
language models (LLMs), LLM-based agents that use universal natural language as
an interface exhibit robust generalization capabilities across various
applications, from serving as autonomous general-purpose task assistants to
coding, social, and economic domains, and thus offer extensive exploration
opportunities. This paper surveys current research to
provide an in-depth overview of LLM-based intelligent agents within
single-agent and multi-agent systems. It covers their definitions, research
frameworks, and foundational components such as their composition, cognitive
and planning methods, tool utilization, and responses to environmental
feedback. We also delve into the mechanisms of deploying LLM-based agents in
multi-agent systems, including multi-role collaboration, message passing, and
strategies to alleviate communication issues between agents. The discussions
also shed light on popular datasets and application scenarios. We conclude by
envisioning prospects for LLM-based agents, considering the evolving landscape
of AI and natural language processing.
|
[
{
"created": "Sun, 7 Jan 2024 09:08:24 GMT",
"version": "v1"
}
] |
2024-01-09
|
[
[
"Cheng",
"Yuheng",
""
],
[
"Zhang",
"Ceyao",
""
],
[
"Zhang",
"Zhengwen",
""
],
[
"Meng",
"Xiangrui",
""
],
[
"Hong",
"Sirui",
""
],
[
"Li",
"Wenhao",
""
],
[
"Wang",
"Zihao",
""
],
[
"Wang",
"Zekai",
""
],
[
"Yin",
"Feng",
""
],
[
"Zhao",
"Junhua",
""
],
[
"He",
"Xiuqiang",
""
]
] |
|
2401.04692
|
Nils Rodrigues
|
Nils Rodrigues, Frederik L. Dennig, Vincent Brandt, Daniel A. Keim,
Daniel Weiskopf
|
Comparative Evaluation of Animated Scatter Plot Transitions
| null | null | null | null |
cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Scatter plots are popular for displaying 2D data, but in practice, many data
sets have more than two dimensions. For the analysis of such multivariate data,
it is often necessary to switch between scatter plots of different dimension
pairs, e.g., in a scatter plot matrix (SPLOM). Alternative approaches include a
"grand tour" for an overview of the entire data set or creating artificial axes
from dimensionality reduction (DR). A cross-cutting concern in all techniques
is the ability of viewers to find correspondence between data points in
different views. Previous work proposed animations to preserve the mental map
between view changes and to trace points as well as clusters between scatter
plots of the same underlying data set. In this paper, we evaluate a variety of
spline- and rotation-based view transitions in a crowdsourced user study
focusing on ecological validity. Using the study results, we assess each
animation's suitability for tracing points and clusters across view changes. We
evaluate whether the order of horizontal and vertical rotation is relevant for
task accuracy. The results show that rotations with an orthographic camera or
staged expansion of a depth axis significantly outperform all other animation
techniques for the traceability of individual points. Further, we provide a
ranking of the animated transition techniques for traceability of individual
points. However, we could not find any significant differences for the
traceability of clusters. Furthermore, we identified differences by animation
direction that could guide further studies to determine potential confounds for
these differences. We publish the study data for reuse and provide the
animation framework as a D3.js plug-in.
|
[
{
"created": "Tue, 9 Jan 2024 17:39:45 GMT",
"version": "v1"
}
] |
2024-01-10
|
[
[
"Rodrigues",
"Nils",
""
],
[
"Dennig",
"Frederik L.",
""
],
[
"Brandt",
"Vincent",
""
],
[
"Keim",
"Daniel A.",
""
],
[
"Weiskopf",
"Daniel",
""
]
] |
|
cs/0309038
|
Valmir Barbosa
|
V. C. Barbosa, L. C. D. Campos
|
A novel evolutionary formulation of the maximum independent set problem
| null |
Journal of Combinatorial Optimization 8 (2004), 419-437
|
10.1007/s10878-004-4835-9
|
ES-615/03
|
cs.NE
| null |
We introduce a novel evolutionary formulation of the problem of finding a
maximum independent set of a graph. The new formulation is based on the
relationship that exists between a graph's independence number and its acyclic
orientations. It views such orientations as individuals and evolves them with
the aid of evolutionary operators that rely heavily on the structure
of the graph and its acyclic orientations. The resulting heuristic has been
tested on some of the Second DIMACS Implementation Challenge benchmark graphs,
and has been found to be competitive when compared to several of the other
heuristics that have also been tested on those graphs.
|
[
{
"created": "Mon, 22 Sep 2003 13:05:51 GMT",
"version": "v1"
}
] |
2007-05-23
|
[
[
"Barbosa",
"V. C.",
""
],
[
"Campos",
"L. C. D.",
""
]
] |
|
2309.11202
|
Uduak Uboh
|
Uduak Uboh
|
Using Artificial Intelligence for the Automation of Knitting Patterns
| null | null | null | null |
cs.AI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Knitting patterns are a crucial component in the creation and design of
knitted materials. Traditionally, these patterns were taught informally, but
thanks to advancements in technology, anyone interested in knitting can use the
patterns as a guide to start knitting. Perhaps because knitting is mostly a
hobby, with the exception of industrial manufacturing utilising specialised
knitting machines, the use of AI in knitting is less widespread than its
application in other fields. However, it is important to determine whether
automated knitted-pattern classification is viable. To recognise and classify
knitting patterns, this study proposes a deep learning model using data
augmentation and a transfer learning technique. The
Inception ResNet-V2 is the main feature extraction and classification algorithm
used in the model. Metrics like accuracy, logarithmic loss, F1-score,
precision, and recall score were used to evaluate the model. The model
evaluation's findings demonstrate high model accuracy, precision, recall, and
F1 score. In addition, the AUC score for the majority of the classes was in
the range 0.7-0.9. A comparative analysis against other pretrained models and
a ResNet-50 model with transfer learning showed that the proposed model
surpassed all others. The major limitation of this project was time: with more
time, training over a larger number of epochs might have yielded better
accuracy.
|
[
{
"created": "Wed, 20 Sep 2023 10:38:08 GMT",
"version": "v1"
}
] |
2023-09-21
|
[
[
"Uboh",
"Uduak",
""
]
] |
|
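The evaluation metrics the knitting abstract names can be computed with scikit-learn as below; the labels and predictions are made-up stand-ins for the pattern classes, and macro averaging is an illustrative choice.

```python
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score

# Hypothetical true vs. predicted pattern classes (e.g. 0=cable, 1=rib, 2=lace).
y_true = [0, 1, 2, 2, 1, 0, 2, 1]
y_pred = [0, 1, 2, 1, 1, 0, 2, 2]

print(accuracy_score(y_true, y_pred))             # fraction of correct predictions
print(f1_score(y_true, y_pred, average="macro"))  # unweighted mean of per-class F1
print(precision_score(y_true, y_pred, average="macro"))
print(recall_score(y_true, y_pred, average="macro"))
```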
2303.01070
|
Xiaoyang Yu
|
Xiaoyang Yu, Youfang Lin, Xiangsen Wang, Sheng Han, Kai Lv
|
GHQ: Grouped Hybrid Q Learning for Heterogeneous Cooperative Multi-agent
Reinforcement Learning
| null | null |
10.1007/s40747-024-01415-1
| null |
cs.MA cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Previous deep multi-agent reinforcement learning (MARL) algorithms have
achieved impressive results, typically in homogeneous scenarios. However,
heterogeneous scenarios are also very common and usually harder to solve. In
this paper, we mainly discuss cooperative heterogeneous MARL problems in
the StarCraft Multi-Agent Challenge (SMAC) environment. We first define and
describe the heterogeneous problems in SMAC. To comprehensively reveal and
study the problem, we create new maps in addition to the original SMAC maps.
find that baseline algorithms fail to perform well in those heterogeneous maps.
To address this issue, we propose the Grouped Individual-Global-Max Consistency
(GIGM) and a novel MARL algorithm, Grouped Hybrid Q Learning (GHQ). GHQ
separates agents into several groups and keeps individual parameters for each
group, along with a novel hybrid structure for factorization. To enhance
coordination between groups, we maximize the Inter-group Mutual Information
(IGMI) between groups' trajectories. Experiments on the original and new
heterogeneous maps show the superior performance of GHQ compared to other
state-of-the-art algorithms.
|
[
{
"created": "Thu, 2 Mar 2023 08:45:49 GMT",
"version": "v1"
},
{
"created": "Wed, 14 Aug 2024 09:05:09 GMT",
"version": "v2"
}
] |
2024-08-15
|
[
[
"Yu",
"Xiaoyang",
""
],
[
"Lin",
"Youfang",
""
],
[
"Wang",
"Xiangsen",
""
],
[
"Han",
"Sheng",
""
],
[
"Lv",
"Kai",
""
]
] |
|
1903.01888
|
Luana Ruiz
|
Luana Ruiz, Fernando Gama and Alejandro Ribeiro
|
Gated Graph Convolutional Recurrent Neural Networks
|
Accepted at EUSIPCO 2019
| null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Graph processes model a number of important problems such as identifying the
epicenter of an earthquake or predicting weather. In this paper, we propose a
Graph Convolutional Recurrent Neural Network (GCRNN) architecture specifically
tailored to deal with these problems. GCRNNs use convolutional filter banks to
keep the number of trainable parameters independent of the size of the graph
and of the time sequences considered. We also put forward Gated GCRNNs, a
time-gated variation of GCRNNs akin to LSTMs. When compared with GNNs and
another graph recurrent architecture in experiments using both synthetic and
real-world data, GCRNNs significantly improve performance while using
considerably fewer parameters.
|
[
{
"created": "Tue, 5 Mar 2019 15:13:02 GMT",
"version": "v1"
},
{
"created": "Tue, 18 Jun 2019 14:15:19 GMT",
"version": "v2"
},
{
"created": "Thu, 27 Jun 2019 14:55:04 GMT",
"version": "v3"
}
] |
2019-06-28
|
[
[
"Ruiz",
"Luana",
""
],
[
"Gama",
"Fernando",
""
],
[
"Ribeiro",
"Alejandro",
""
]
] |
|
2211.05590
|
Pierre-Alain Mo\"ellic
|
Raphael Joud, Pierre-Alain Moellic, Simon Pontie, Jean-Baptiste Rigaud
|
A Practical Introduction to Side-Channel Extraction of Deep Neural
Network Parameters
|
Accepted at Smart Card Research and Advanced Application Conference
(CARDIS 2022)
| null | null | null |
cs.CR cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Model extraction is a major threat for embedded deep neural network models
that leverages an extended attack surface. Indeed, by physically accessing a
device, an adversary may exploit side-channel leakages to extract critical
information of a model (i.e., its architecture or internal parameters).
Different adversarial objectives are possible including a fidelity-based
scenario where the architecture and parameters are precisely extracted (model
cloning). We focus this work on software implementation of deep neural networks
embedded in a high-end 32-bit microcontroller (Cortex-M7) and expose several
challenges related to fidelity-based parameters extraction through side-channel
analysis, from the basic multiplication operation to the feed-forward
connection through the layers. To precisely extract the value of parameters
represented in the single-precision floating point IEEE-754 standard, we
propose an iterative process that is evaluated with both simulations and traces
from a Cortex-M7 target. To our knowledge, this work is the first to target
such a high-end 32-bit platform. Importantly, we raise and discuss the
remaining challenges for the complete extraction of a deep neural network
model, more particularly the critical case of biases.
|
[
{
"created": "Thu, 10 Nov 2022 14:02:39 GMT",
"version": "v1"
}
] |
2022-11-11
|
[
[
"Joud",
"Raphael",
""
],
[
"Moellic",
"Pierre-Alain",
""
],
[
"Pontie",
"Simon",
""
],
[
"Rigaud",
"Jean-Baptiste",
""
]
] |
Model extraction is a major threat for embedded deep neural network models that leverages an extended attack surface. Indeed, by physically accessing a device, an adversary may exploit side-channel leakages to extract critical information of a model (i.e., its architecture or internal parameters). Different adversarial objectives are possible including a fidelity-based scenario where the architecture and parameters are precisely extracted (model cloning). We focus this work on software implementation of deep neural networks embedded in a high-end 32-bit microcontroller (Cortex-M7) and expose several challenges related to fidelity-based parameters extraction through side-channel analysis, from the basic multiplication operation to the feed-forward connection through the layers. To precisely extract the value of parameters represented in the single-precision floating point IEEE-754 standard, we propose an iterative process that is evaluated with both simulations and traces from a Cortex-M7 target. To our knowledge, this work is the first to target such a high-end 32-bit platform. Importantly, we raise and discuss the remaining challenges for the complete extraction of a deep neural network model, more particularly the critical case of biases.
|
1110.6384
|
Serge Gaspers
|
Serge Gaspers and Stefan Szeider
|
Backdoors to Acyclic SAT
| null | null | null | null |
cs.DS cs.AI cs.CC math.CO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Backdoor sets, a notion introduced by Williams et al. in 2003, are certain
sets of key variables of a CNF formula F that make it easy to solve the
formula; by assigning truth values to the variables in a backdoor set, the
formula gets reduced to one or several polynomial-time solvable formulas. More
specifically, a weak backdoor set of F is a set X of variables such that there
exists a truth assignment t to X that reduces F to a satisfiable formula F[t]
that belongs to a polynomial-time decidable base class C. A strong backdoor set
is a set X of variables such that for all assignments t to X, the reduced
formula F[t] belongs to C.
We study the problem of finding backdoor sets of size at most k with respect
to the base class of CNF formulas with acyclic incidence graphs, taking k as
the parameter. We show that
1. the detection of weak backdoor sets is W[2]-hard in general but
fixed-parameter tractable for r-CNF formulas, for any fixed r>=3, and
2. the detection of strong backdoor sets is fixed-parameter approximable.
Result 1 is the first positive one for a base class that does not have a
characterization with obstructions of bounded size. Result 2 is the first
positive one for a base class for which strong backdoor sets are more powerful
than deletion backdoor sets.
Not only SAT, but also #SAT can be solved in polynomial time for CNF formulas
with acyclic incidence graphs. Hence Result 2 establishes a new structural
parameter that makes #SAT fixed-parameter tractable and that is incomparable
with known parameters such as treewidth and clique-width.
We obtain the algorithms by a combination of an algorithmic version of the
Erd\"os-P\'osa Theorem, Courcelle's model checking for monadic second order
logic, and new combinatorial results on how disjoint cycles can interact with
the backdoor set.
|
[
{
"created": "Fri, 28 Oct 2011 16:10:32 GMT",
"version": "v1"
},
{
"created": "Mon, 31 Oct 2011 15:09:42 GMT",
"version": "v2"
},
{
"created": "Tue, 21 Feb 2012 17:15:41 GMT",
"version": "v3"
}
] |
2012-02-22
|
[
[
"Gaspers",
"Serge",
""
],
[
"Szeider",
"Stefan",
""
]
] |
Backdoor sets, a notion introduced by Williams et al. in 2003, are certain sets of key variables of a CNF formula F that make it easy to solve the formula; by assigning truth values to the variables in a backdoor set, the formula gets reduced to one or several polynomial-time solvable formulas. More specifically, a weak backdoor set of F is a set X of variables such that there exists a truth assignment t to X that reduces F to a satisfiable formula F[t] that belongs to a polynomial-time decidable base class C. A strong backdoor set is a set X of variables such that for all assignments t to X, the reduced formula F[t] belongs to C. We study the problem of finding backdoor sets of size at most k with respect to the base class of CNF formulas with acyclic incidence graphs, taking k as the parameter. We show that 1. the detection of weak backdoor sets is W[2]-hard in general but fixed-parameter tractable for r-CNF formulas, for any fixed r>=3, and 2. the detection of strong backdoor sets is fixed-parameter approximable. Result 1 is the first positive one for a base class that does not have a characterization with obstructions of bounded size. Result 2 is the first positive one for a base class for which strong backdoor sets are more powerful than deletion backdoor sets. Not only SAT, but also #SAT can be solved in polynomial time for CNF formulas with acyclic incidence graphs. Hence Result 2 establishes a new structural parameter that makes #SAT fixed-parameter tractable and that is incomparable with known parameters such as treewidth and clique-width. We obtain the algorithms by a combination of an algorithmic version of the Erd\"os-P\'osa Theorem, Courcelle's model checking for monadic second order logic, and new combinatorial results on how disjoint cycles can interact with the backdoor set.
|
2406.09815
|
Zhenrui Yue
|
Zhenrui Yue, Huimin Zeng, Lanyu Shang, Yifan Liu, Yang Zhang, Dong
Wang
|
Retrieval Augmented Fact Verification by Synthesizing Contrastive
Arguments
|
Accepted to ACL 2024
| null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The rapid propagation of misinformation poses substantial risks to public
interest. To combat misinformation, large language models (LLMs) are adapted to
automatically verify claim credibility. Nevertheless, existing methods heavily
rely on the knowledge embedded within LLMs and/or black-box APIs for evidence
collection, leading to subpar performance with smaller LLMs or with unreliable
context. In this paper, we propose retrieval augmented fact verification
through the synthesis of contrastive arguments (RAFTS). Given input claims,
RAFTS starts with evidence retrieval, where we design a retrieval pipeline to
collect and re-rank relevant documents from verifiable sources. Then, RAFTS
forms contrastive arguments (i.e., supporting or refuting) conditioned on the
retrieved evidence. In addition, RAFTS leverages an embedding model to identify
informative demonstrations, followed by in-context prompting to generate the
prediction and explanation. Our method effectively retrieves relevant documents
as evidence and evaluates arguments from varying perspectives, incorporating
nuanced information for fine-grained decision-making. Combined with informative
in-context examples as prior, RAFTS achieves significant improvements over
supervised and LLM baselines without complex prompts. We demonstrate the
effectiveness of our method through extensive experiments, where RAFTS can
outperform GPT-based methods with a significantly smaller 7B LLM.
|
[
{
"created": "Fri, 14 Jun 2024 08:13:34 GMT",
"version": "v1"
}
] |
2024-06-17
|
[
[
"Yue",
"Zhenrui",
""
],
[
"Zeng",
"Huimin",
""
],
[
"Shang",
"Lanyu",
""
],
[
"Liu",
"Yifan",
""
],
[
"Zhang",
"Yang",
""
],
[
"Wang",
"Dong",
""
]
] |
The rapid propagation of misinformation poses substantial risks to public interest. To combat misinformation, large language models (LLMs) are adapted to automatically verify claim credibility. Nevertheless, existing methods heavily rely on the knowledge embedded within LLMs and/or black-box APIs for evidence collection, leading to subpar performance with smaller LLMs or with unreliable context. In this paper, we propose retrieval augmented fact verification through the synthesis of contrastive arguments (RAFTS). Given input claims, RAFTS starts with evidence retrieval, where we design a retrieval pipeline to collect and re-rank relevant documents from verifiable sources. Then, RAFTS forms contrastive arguments (i.e., supporting or refuting) conditioned on the retrieved evidence. In addition, RAFTS leverages an embedding model to identify informative demonstrations, followed by in-context prompting to generate the prediction and explanation. Our method effectively retrieves relevant documents as evidence and evaluates arguments from varying perspectives, incorporating nuanced information for fine-grained decision-making. Combined with informative in-context examples as prior, RAFTS achieves significant improvements over supervised and LLM baselines without complex prompts. We demonstrate the effectiveness of our method through extensive experiments, where RAFTS can outperform GPT-based methods with a significantly smaller 7B LLM.
|
1911.11494
|
Christopher Thraves Caro
|
Rosa Becerra and Christopher Thraves Caro
|
The Sitting Closer to Friends than Enemies Problem in Trees
|
10 pages, 5 figures
| null | null | null |
cs.DM cs.CG math.CO
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
A metric space $\mathcal{T}$ is a \emph{real tree} if for any pair of points
$x, y \in \mathcal{T}$ all topological embeddings $\sigma$ of the segment
$[0,1]$ into $\mathcal{T}$, such that $\sigma (0)=x$ and $\sigma (1)=y$, have
the same image (which is then a geodesic segment from $x$ to $y$). A
\emph{signed graph} is a graph where each edge has a positive or negative sign.
The \emph{Sitting Closer to Friends than Enemies} problem in trees has a signed
graph $S$ as an input. The purpose is to determine if there exists an injective
mapping (called \emph{valid distance drawing}) from $V(S)$ to the points of a
real tree such that, for every $u \in V(S)$, for every positive neighbor $v$ of
$u$, and negative neighbor $w$ of $u$, the distance between $v$ and $u$ is
smaller than the distance between $w$ and $u$.
In this work, we show that a complete signed graph has a valid distance
drawing in a real tree if and only if its subgraph composed of all (and only)
its positive edges has an intersection representation by unit balls in a real
tree. Besides, as an instrumental result, we show that a graph has an
intersection representation by unit balls in a real tree if and only if it has
an intersection representation by proper balls, and if and only if it has an
intersection representation by arbitrary balls in a real tree.
|
[
{
"created": "Tue, 26 Nov 2019 12:31:50 GMT",
"version": "v1"
},
{
"created": "Tue, 16 Feb 2021 14:38:17 GMT",
"version": "v2"
},
{
"created": "Mon, 5 Jul 2021 15:01:24 GMT",
"version": "v3"
}
] |
2021-07-06
|
[
[
"Becerra",
"Rosa",
""
],
[
"Caro",
"Christopher Thraves",
""
]
] |
A metric space $\mathcal{T}$ is a \emph{real tree} if for any pair of points $x, y \in \mathcal{T}$ all topological embeddings $\sigma$ of the segment $[0,1]$ into $\mathcal{T}$, such that $\sigma (0)=x$ and $\sigma (1)=y$, have the same image (which is then a geodesic segment from $x$ to $y$). A \emph{signed graph} is a graph where each edge has a positive or negative sign. The \emph{Sitting Closer to Friends than Enemies} problem in trees has a signed graph $S$ as an input. The purpose is to determine if there exists an injective mapping (called \emph{valid distance drawing}) from $V(S)$ to the points of a real tree such that, for every $u \in V(S)$, for every positive neighbor $v$ of $u$, and negative neighbor $w$ of $u$, the distance between $v$ and $u$ is smaller than the distance between $w$ and $u$. In this work, we show that a complete signed graph has a valid distance drawing in a real tree if and only if its subgraph composed of all (and only) its positive edges has an intersection representation by unit balls in a real tree. Besides, as an instrumental result, we show that a graph has an intersection representation by unit balls in a real tree if and only if it has an intersection representation by proper balls, and if and only if it has an intersection representation by arbitrary balls in a real tree.
|
1404.7335
|
Florent Jacquemard
|
Florent Jacquemard (Inria Paris-Rocquencourt, STMS), Cl\'ement
Poncelet Sanchez (Inria Paris-Rocquencourt, STMS)
|
Antescofo Intermediate Representation
|
RR-8520 (2014)
| null | null | null |
cs.MM cs.PL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We describe an intermediate language designed as a medium-level internal
representation of programs of the interactive music system Antescofo. This
representation is independent both of the Antescofo source language and of the
architecture of the execution platform. It is used in tasks such as
verification of timings, model-based conformance testing, static control-flow
analysis or simulation. This language is essentially a flat representation of
Antescofo's code, as a finite state machine extended with local and global
variables, with delays and with concurrent thread creation. It features a
small number of simple instructions which are either blocking (wait for
external event, signal or duration) or not (variable assignment, message
emission and control).
|
[
{
"created": "Tue, 29 Apr 2014 12:30:36 GMT",
"version": "v1"
}
] |
2014-04-30
|
[
[
"Jacquemard",
"Florent",
"",
"Inria Paris-Rocquencourt, STMS"
],
[
"Sanchez",
"Clément Poncelet",
"",
"Inria Paris-Rocquencourt, STMS"
]
] |
We describe an intermediate language designed as a medium-level internal representation of programs of the interactive music system Antescofo. This representation is independent both of the Antescofo source language and of the architecture of the execution platform. It is used in tasks such as verification of timings, model-based conformance testing, static control-flow analysis or simulation. This language is essentially a flat representation of Antescofo's code, as a finite state machine extended with local and global variables, with delays and with concurrent thread creation. It features a small number of simple instructions which are either blocking (wait for external event, signal or duration) or not (variable assignment, message emission and control).
|
2010.13903
|
Denghui Zhang
|
Denghui Zhang, Yanchi Liu, Wei Cheng, Bo Zong, Jingchao Ni, Zhengzhang
Chen, Haifeng Chen, Hui Xiong
|
T$^2$-Net: A Semi-supervised Deep Model for Turbulence Forecasting
|
Accepted by ICDM 2020
| null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Accurate air turbulence forecasting can help airlines avoid hazardous
turbulence, guide the routes that keep passengers safe, maximize efficiency,
and reduce costs. Traditional turbulence forecasting approaches heavily rely on
painstakingly customized turbulence indexes, which are less effective in
dynamic and complex weather conditions. The recent availability of
high-resolution weather data and turbulence records allows more accurate
forecasting of turbulence in a data-driven way. However, developing a machine
learning based turbulence forecasting system is a non-trivial task due to two
challenges: (1) complex spatio-temporal correlations: turbulence is caused by
air movement with complex spatio-temporal patterns; (2) label scarcity: very
limited turbulence labels can be obtained. To this end, in
this paper, we develop a unified semi-supervised framework, T$^2$-Net, to
address the above challenges. Specifically, we first build an encoder-decoder
paradigm based on the convolutional LSTM to model the spatio-temporal
correlations. Then, to tackle the label scarcity problem, we propose a novel
Dual Label Guessing method to take advantage of massive unlabeled turbulence
data. It integrates complementary signals from the main Turbulence Forecasting
task and the auxiliary Turbulence Detection task to generate pseudo-labels,
which are dynamically utilized as additional training data. Finally, extensive
experimental results on a real-world turbulence dataset validate the
superiority of our method on turbulence forecasting.
|
[
{
"created": "Mon, 26 Oct 2020 21:14:15 GMT",
"version": "v1"
}
] |
2020-10-28
|
[
[
"Zhang",
"Denghui",
""
],
[
"Liu",
"Yanchi",
""
],
[
"Cheng",
"Wei",
""
],
[
"Zong",
"Bo",
""
],
[
"Ni",
"Jingchao",
""
],
[
"Chen",
"Zhengzhang",
""
],
[
"Chen",
"Haifeng",
""
],
[
"Xiong",
"Hui",
""
]
] |
Accurate air turbulence forecasting can help airlines avoid hazardous turbulence, guide the routes that keep passengers safe, maximize efficiency, and reduce costs. Traditional turbulence forecasting approaches heavily rely on painstakingly customized turbulence indexes, which are less effective in dynamic and complex weather conditions. The recent availability of high-resolution weather data and turbulence records allows more accurate forecasting of turbulence in a data-driven way. However, developing a machine learning based turbulence forecasting system is a non-trivial task due to two challenges: (1) complex spatio-temporal correlations: turbulence is caused by air movement with complex spatio-temporal patterns; (2) label scarcity: very limited turbulence labels can be obtained. To this end, in this paper, we develop a unified semi-supervised framework, T$^2$-Net, to address the above challenges. Specifically, we first build an encoder-decoder paradigm based on the convolutional LSTM to model the spatio-temporal correlations. Then, to tackle the label scarcity problem, we propose a novel Dual Label Guessing method to take advantage of massive unlabeled turbulence data. It integrates complementary signals from the main Turbulence Forecasting task and the auxiliary Turbulence Detection task to generate pseudo-labels, which are dynamically utilized as additional training data. Finally, extensive experimental results on a real-world turbulence dataset validate the superiority of our method on turbulence forecasting.
|
1810.07273
|
R.Stuart Geiger
|
R. Stuart Geiger, Aaron Halfaker
|
Operationalizing Conflict and Cooperation between Automated Software
Agents in Wikipedia: A Replication and Expansion of 'Even Good Bots Fight'
|
33 pages. In ACM CSCW 2018
|
Proc ACM on Human Computer Interaction. 1(2), Article 49. CSCW
2018
|
10.1145/3134684
| null |
cs.CY cs.HC cs.SI
|
http://creativecommons.org/licenses/by/4.0/
|
This paper replicates, extends, and refutes conclusions made in a study
published in PLoS ONE ("Even Good Bots Fight"), which claimed to identify
substantial levels of conflict between automated software agents (or bots) in
Wikipedia using purely quantitative methods. By applying an integrative
mixed-methods approach drawing on trace ethnography, we place these alleged
cases of bot-bot conflict into context and arrive at a better understanding of
these interactions. We found that overwhelmingly, the interactions previously
characterized as problematic instances of conflict are typically better
characterized as routine, productive, even collaborative work. These results
challenge past work and show the importance of qualitative/quantitative
collaboration. In our paper, we present quantitative metrics and qualitative
heuristics for operationalizing bot-bot conflict. We give thick descriptions of
kinds of events that present as bot-bot reverts, helping distinguish conflict
from non-conflict. We computationally classify these kinds of events through
patterns in edit summaries. By interpreting found/trace data in the
socio-technical contexts in which people give that data meaning, we gain more
from quantitative measurements, drawing deeper understandings about the
governance of algorithmic systems in Wikipedia. We have also released our data
collection, processing, and analysis pipeline, to facilitate computational
reproducibility of our findings and to help other researchers interested in
conducting similar mixed-method scholarship in other platforms and contexts.
|
[
{
"created": "Tue, 16 Oct 2018 20:59:19 GMT",
"version": "v1"
}
] |
2018-10-18
|
[
[
"Geiger",
"R. Stuart",
""
],
[
"Halfaker",
"Aaron",
""
]
] |
This paper replicates, extends, and refutes conclusions made in a study published in PLoS ONE ("Even Good Bots Fight"), which claimed to identify substantial levels of conflict between automated software agents (or bots) in Wikipedia using purely quantitative methods. By applying an integrative mixed-methods approach drawing on trace ethnography, we place these alleged cases of bot-bot conflict into context and arrive at a better understanding of these interactions. We found that overwhelmingly, the interactions previously characterized as problematic instances of conflict are typically better characterized as routine, productive, even collaborative work. These results challenge past work and show the importance of qualitative/quantitative collaboration. In our paper, we present quantitative metrics and qualitative heuristics for operationalizing bot-bot conflict. We give thick descriptions of kinds of events that present as bot-bot reverts, helping distinguish conflict from non-conflict. We computationally classify these kinds of events through patterns in edit summaries. By interpreting found/trace data in the socio-technical contexts in which people give that data meaning, we gain more from quantitative measurements, drawing deeper understandings about the governance of algorithmic systems in Wikipedia. We have also released our data collection, processing, and analysis pipeline, to facilitate computational reproducibility of our findings and to help other researchers interested in conducting similar mixed-method scholarship in other platforms and contexts.
|
2203.10470
|
Yuanming Ren
|
Yuanming Ren, Shihao Shen, Yanli Ju, Xiaofei Wang, Wenyu Wang, Victor
C.M. Leung
|
EdgeMatrix: A Resources Redefined Edge-Cloud System for Prioritized
Services
| null | null | null | null |
cs.NI cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The edge-cloud system has the potential to combine the advantages of
heterogeneous devices and truly realize ubiquitous computing. However, for
service providers to guarantee the Service-Level-Agreement (SLA) priorities,
the complex networked environment brings inherent challenges such as
multi-resource heterogeneity, resource competition, and networked system
dynamics. In this paper, we design a framework for the edge-cloud system,
namely EdgeMatrix, to maximize the throughput while guaranteeing various SLA
priorities. First, EdgeMatrix introduces Networked Multi-agent Actor-Critic
(NMAC) algorithm to redefine physical resources as logically isolated resource
combinations, i.e., resource cells. Then, we use a clustering algorithm to
group the cells with similar characteristics into various sets, i.e., resource
channels, so that different channels can offer different SLA guarantees. Besides,
we design a multi-task mechanism to solve the problem of joint service
orchestration and request dispatch (JSORD) among edge-cloud clusters,
significantly reducing the runtime compared with traditional methods. To ensure
stability, EdgeMatrix adopts a two-time-scale framework, i.e., coordinating
resources and services at the large time scale and dispatching requests at the
small time scale. The real trace-based experimental results verify that
EdgeMatrix can improve system throughput in complex networked environments,
reduce SLA violations, and significantly reduce the runtime compared with
traditional methods.
|
[
{
"created": "Sun, 20 Mar 2022 06:47:34 GMT",
"version": "v1"
}
] |
2022-03-22
|
[
[
"Ren",
"Yuanming",
""
],
[
"Shen",
"Shihao",
""
],
[
"Ju",
"Yanli",
""
],
[
"Wang",
"Xiaofei",
""
],
[
"Wang",
"Wenyu",
""
],
[
"Leung",
"Victor C. M.",
""
]
] |
The edge-cloud system has the potential to combine the advantages of heterogeneous devices and truly realize ubiquitous computing. However, for service providers to guarantee the Service-Level-Agreement (SLA) priorities, the complex networked environment brings inherent challenges such as multi-resource heterogeneity, resource competition, and networked system dynamics. In this paper, we design a framework for the edge-cloud system, namely EdgeMatrix, to maximize the throughput while guaranteeing various SLA priorities. First, EdgeMatrix introduces Networked Multi-agent Actor-Critic (NMAC) algorithm to redefine physical resources as logically isolated resource combinations, i.e., resource cells. Then, we use a clustering algorithm to group the cells with similar characteristics into various sets, i.e., resource channels, so that different channels can offer different SLA guarantees. Besides, we design a multi-task mechanism to solve the problem of joint service orchestration and request dispatch (JSORD) among edge-cloud clusters, significantly reducing the runtime compared with traditional methods. To ensure stability, EdgeMatrix adopts a two-time-scale framework, i.e., coordinating resources and services at the large time scale and dispatching requests at the small time scale. The real trace-based experimental results verify that EdgeMatrix can improve system throughput in complex networked environments, reduce SLA violations, and significantly reduce the runtime compared with traditional methods.
|
2210.02576
|
Yongbin Liu
|
Liu Yongbin, Liu Qingjie, Chen Jiaxin, Wang Yunhong
|
Reading Chinese in Natural Scenes with a Bag-of-Radicals Prior
|
Accepted by BMVC 2022
| null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Scene text recognition (STR) on Latin datasets has been extensively studied
in recent years, and state-of-the-art (SOTA) models often reach high accuracy.
However, the performance on non-Latin transcripts, such as Chinese, is not
satisfactory. In this paper, we collect six open-source Chinese STR datasets
and evaluate a series of classic methods performing well on Latin datasets,
finding a significant performance drop. To improve the performance on Chinese
datasets, we propose a novel radical-embedding (RE) representation to utilize
the ideographic descriptions of Chinese characters. The ideographic
descriptions of Chinese characters are firstly converted to bags of radicals
and then fused with learnable character embeddings by a
character-vector-fusion-module (CVFM). In addition, we utilize a bag of
radicals as supervision signals for multi-task training to improve the
ideographic structure perception of our model. Experiments show that the
performance of the model with RE + CVFM + multi-task training is superior to
the baseline on six Chinese STR datasets.
|
[
{
"created": "Wed, 5 Oct 2022 21:56:09 GMT",
"version": "v1"
}
] |
2022-10-07
|
[
[
"Yongbin",
"Liu",
""
],
[
"Qingjie",
"Liu",
""
],
[
"Jiaxin",
"Chen",
""
],
[
"Yunhong",
"Wang",
""
]
] |
Scene text recognition (STR) on Latin datasets has been extensively studied in recent years, and state-of-the-art (SOTA) models often reach high accuracy. However, the performance on non-Latin transcripts, such as Chinese, is not satisfactory. In this paper, we collect six open-source Chinese STR datasets and evaluate a series of classic methods performing well on Latin datasets, finding a significant performance drop. To improve the performance on Chinese datasets, we propose a novel radical-embedding (RE) representation to utilize the ideographic descriptions of Chinese characters. The ideographic descriptions of Chinese characters are firstly converted to bags of radicals and then fused with learnable character embeddings by a character-vector-fusion-module (CVFM). In addition, we utilize a bag of radicals as supervision signals for multi-task training to improve the ideographic structure perception of our model. Experiments show that the performance of the model with RE + CVFM + multi-task training is superior to the baseline on six Chinese STR datasets.
|
2302.02048
|
Junyuan Gao
|
Junyuan Gao, Yongpeng Wu, Tianya Li, and Wenjun Zhang
|
Energy Efficiency of MIMO Massive Unsourced Random Access with Finite
Blocklength
|
Accepted by IEEE Wireless Communications Letters
| null | null | null |
cs.IT math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
This paper investigates the energy efficiency of massive unsourced random
access~(URA) in multiple-input multiple-output quasi-static Rayleigh fading
channels. Specifically, we derive achievability and converse bounds on the
minimum required energy-per-bit under the per-user probability of error
constraint, where the converse bounds contain two parts: one is general and the
other is a weaker ensemble bound. Numerical evaluation shows that the gap
between our achievability and converse bounds is less than $5$~dB in the
considered regime. Some practical schemes are energy-inefficient compared with
our bounds especially when there are many users. Moreover, we observe that in
contrast to the sourced random access paradigm, the URA paradigm achieves
higher spectral efficiency.
|
[
{
"created": "Sat, 4 Feb 2023 01:11:18 GMT",
"version": "v1"
}
] |
2023-02-07
|
[
[
"Gao",
"Junyuan",
""
],
[
"Wu",
"Yongpeng",
""
],
[
"Li",
"Tianya",
""
],
[
"Zhang",
"Wenjun",
""
]
] |
This paper investigates the energy efficiency of massive unsourced random access~(URA) in multiple-input multiple-output quasi-static Rayleigh fading channels. Specifically, we derive achievability and converse bounds on the minimum required energy-per-bit under the per-user probability of error constraint, where the converse bounds contain two parts: one is general and the other is a weaker ensemble bound. Numerical evaluation shows that the gap between our achievability and converse bounds is less than $5$~dB in the considered regime. Some practical schemes are energy-inefficient compared with our bounds especially when there are many users. Moreover, we observe that in contrast to the sourced random access paradigm, the URA paradigm achieves higher spectral efficiency.
|
2007.07155
|
Mohammad Shojaeshafiei
|
Mohammad Shojaeshafiei, Letha Etzkorn, and Michael Anderson
|
Multiple Layers of Fuzzy Logic to Quantify Vulnerabilities in IoT
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Quantifying vulnerabilities of network systems has been a highly
controversial issue in the fields of network security and IoT. Much research
has been conducted for this purpose; however, it suffers from many ambiguities
and
uncertainties. In this paper, we investigate the quantification of
vulnerability in the Department of Transportation (DOT) as our proof of
concept. We initiate the analysis of security requirements, using Security
Quality Requirements Engineering (SQUARE) for security requirements
elicitation. Then we apply published security standards such as NIST SP-800 and
ISO 27001 to map our security factors and sub-factors. Finally, we propose our
Multi-layered Fuzzy Logic (MFL) approach based on Goal Question Metrics (GQM)
to quantify network security and IoT (Mobile Devices) vulnerability in DOT.
|
[
{
"created": "Tue, 14 Jul 2020 16:14:51 GMT",
"version": "v1"
}
] |
2020-07-15
|
[
[
"Shojaeshafiei",
"Mohammad",
""
],
[
"Etzkorn",
"Letha",
""
],
[
"Anderson",
"Michael",
""
]
] |
Quantifying vulnerabilities of network systems has been a highly controversial issue in the fields of network security and IoT. Much research has been conducted for this purpose; however, it suffers from many ambiguities and uncertainties. In this paper, we investigate the quantification of vulnerability in the Department of Transportation (DOT) as our proof of concept. We initiate the analysis of security requirements, using Security Quality Requirements Engineering (SQUARE) for security requirements elicitation. Then we apply published security standards such as NIST SP-800 and ISO 27001 to map our security factors and sub-factors. Finally, we propose our Multi-layered Fuzzy Logic (MFL) approach based on Goal Question Metrics (GQM) to quantify network security and IoT (Mobile Devices) vulnerability in DOT.
|
2307.13661
|
Henry DeYoung
|
Henry DeYoung and Andreia Mordido and Frank Pfenning and Ankush Das
|
Parametric Subtyping for Structural Parametric Polymorphism
|
36 pages
| null | null | null |
cs.PL cs.LO
|
http://creativecommons.org/licenses/by/4.0/
|
We study the interaction of structural subtyping with parametric polymorphism
and recursively defined type constructors. Although structural subtyping is
undecidable in this setting, we describe a notion of parametricity for type
constructors and then exploit it to define parametric subtyping, a conceptually
simple, decidable, and expressive fragment of structural subtyping that
strictly generalizes rigid subtyping. We present and prove correct an effective
saturation-based decision procedure for parametric subtyping, demonstrating its
applicability using a variety of examples. We also provide an implementation of
this decision procedure online.
|
[
{
"created": "Tue, 25 Jul 2023 17:14:49 GMT",
"version": "v1"
},
{
"created": "Fri, 27 Oct 2023 15:55:47 GMT",
"version": "v2"
}
] |
2023-10-30
|
[
[
"DeYoung",
"Henry",
""
],
[
"Mordido",
"Andreia",
""
],
[
"Pfenning",
"Frank",
""
],
[
"Das",
"Ankush",
""
]
] |
We study the interaction of structural subtyping with parametric polymorphism and recursively defined type constructors. Although structural subtyping is undecidable in this setting, we describe a notion of parametricity for type constructors and then exploit it to define parametric subtyping, a conceptually simple, decidable, and expressive fragment of structural subtyping that strictly generalizes rigid subtyping. We present and prove correct an effective saturation-based decision procedure for parametric subtyping, demonstrating its applicability using a variety of examples. We also provide an implementation of this decision procedure online.
|
1310.1362
|
J. M. Landsberg
|
Fulvio Gesmundo, Jonathan Hauenstein, Christian Ikenmeyer, and JM
Landsberg
|
Complexity of linear circuits and geometry
|
29 pages, final version to appear in FOCM
| null | null | null |
cs.CC math.AG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We use algebraic geometry to study matrix rigidity, and more generally, the
complexity of computing a matrix-vector product, continuing a study initiated
by Kumar et al. We (i) exhibit many non-obvious equations testing for
(border) rigidity, (ii) compute degrees of varieties associated to rigidity,
(iii) describe algebraic varieties associated to families of matrices that are
expected to have super-linear rigidity, and (iv) prove results about the ideals
and degrees of cones that are of interest in their own right.
|
[
{
"created": "Fri, 4 Oct 2013 18:34:45 GMT",
"version": "v1"
},
{
"created": "Tue, 10 Mar 2015 19:36:21 GMT",
"version": "v2"
}
] |
2015-03-11
|
[
[
"Gesmundo",
"Fulvio",
""
],
[
"Hauenstein",
"Jonathan",
""
],
[
"Ikenmeyer",
"Christian",
""
],
[
"Landsberg",
"JM",
""
]
] |
We use algebraic geometry to study matrix rigidity, and more generally, the complexity of computing a matrix-vector product, continuing a study initiated by Kumar et al. We (i) exhibit many non-obvious equations testing for (border) rigidity, (ii) compute degrees of varieties associated to rigidity, (iii) describe algebraic varieties associated to families of matrices that are expected to have super-linear rigidity, and (iv) prove results about the ideals and degrees of cones that are of interest in their own right.
|
2305.17179
|
Tomasz Limisiewicz
|
Tomasz Limisiewicz and Ji\v{r}\'i Balhar and David Mare\v{c}ek
|
Tokenization Impacts Multilingual Language Modeling: Assessing
Vocabulary Allocation and Overlap Across Languages
|
in ACL Findings 2023
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Multilingual language models have recently gained attention as a promising
solution for representing multiple languages in a single model. In this paper,
we propose new criteria to evaluate the quality of lexical representation and
vocabulary overlap observed in sub-word tokenizers. Our findings show that the
overlap of vocabulary across languages can actually be detrimental to certain
downstream tasks (POS, dependency tree labeling). In contrast, NER and
sentence-level tasks (cross-lingual retrieval, NLI) benefit from sharing
vocabulary. We also observe that the coverage of the language-specific tokens
in the multilingual vocabulary significantly impacts the word-level tasks. Our
study offers a deeper understanding of the role of tokenizers in multilingual
language models and guidelines for future model developers to choose the most
suitable tokenizer for their specific application before undertaking costly
model pre-training.
|
[
{
"created": "Fri, 26 May 2023 18:06:49 GMT",
"version": "v1"
}
] |
2023-05-30
|
[
[
"Limisiewicz",
"Tomasz",
""
],
[
"Balhar",
"Jiří",
""
],
[
"Mareček",
"David",
""
]
] |
Multilingual language models have recently gained attention as a promising solution for representing multiple languages in a single model. In this paper, we propose new criteria to evaluate the quality of lexical representation and vocabulary overlap observed in sub-word tokenizers. Our findings show that the overlap of vocabulary across languages can actually be detrimental to certain downstream tasks (POS, dependency tree labeling). In contrast, NER and sentence-level tasks (cross-lingual retrieval, NLI) benefit from sharing vocabulary. We also observe that the coverage of the language-specific tokens in the multilingual vocabulary significantly impacts the word-level tasks. Our study offers a deeper understanding of the role of tokenizers in multilingual language models and guidelines for future model developers to choose the most suitable tokenizer for their specific application before undertaking costly model pre-training.
|
2108.00382
|
Matthew Andres Moreno
|
Matthew Andres Moreno, Santiago Rodriguez Papa, Alexander Lalejini,
Charles Ofria
|
SignalGP-Lite: Event Driven Genetic Programming Library for Large-Scale
Artificial Life Applications
| null | null | null | null |
cs.NE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Event-driven genetic programming representations have been shown to
outperform traditional imperative representations on interaction-intensive
problems. The event-driven approach organizes genome content into modules that
are triggered in response to environmental signals, simplifying simulation
design and implementation. Existing work developing event-driven genetic
programming methodology has largely used the SignalGP library, which caters to
traditional program synthesis applications. The SignalGP-Lite library enables
larger-scale artificial life experiments with streamlined agents by reducing
control flow overhead and trading run-time flexibility for better performance
due to compile-time configuration. Here, we report benchmarking experiments
that show an 8x to 30x speedup. We also report solution quality equivalent to
SignalGP on two benchmark problems originally developed to test the ability of
evolved programs to respond to a large number of signals and to modulate signal
response based on context.
|
[
{
"created": "Sun, 1 Aug 2021 07:20:49 GMT",
"version": "v1"
}
] |
2021-08-03
|
[
[
"Moreno",
"Matthew Andres",
""
],
[
"Papa",
"Santiago Rodriguez",
""
],
[
"Lalejini",
"Alexander",
""
],
[
"Ofria",
"Charles",
""
]
] |
Event-driven genetic programming representations have been shown to outperform traditional imperative representations on interaction-intensive problems. The event-driven approach organizes genome content into modules that are triggered in response to environmental signals, simplifying simulation design and implementation. Existing work developing event-driven genetic programming methodology has largely used the SignalGP library, which caters to traditional program synthesis applications. The SignalGP-Lite library enables larger-scale artificial life experiments with streamlined agents by reducing control flow overhead and trading run-time flexibility for better performance due to compile-time configuration. Here, we report benchmarking experiments that show an 8x to 30x speedup. We also report solution quality equivalent to SignalGP on two benchmark problems originally developed to test the ability of evolved programs to respond to a large number of signals and to modulate signal response based on context.
|
1706.09667
|
Maxinder S. Kanwal
|
Maxinder S. Kanwal, Joshua A. Grochow, Nihat Ay
|
Comparing Information-Theoretic Measures of Complexity in Boltzmann
Machines
|
16 pages, 7 figures; Appears in Entropy, Special Issue "Information
Geometry II"
|
Entropy (2017), 19(7), 310
|
10.3390/e19070310
| null |
cs.IT cs.NE math.IT q-bio.NC
|
http://creativecommons.org/licenses/by/4.0/
|
In the past three decades, many theoretical measures of complexity have been
proposed to help understand complex systems. In this work, for the first time,
we place these measures on a level playing field, to explore the qualitative
similarities and differences between them, and their shortcomings.
Specifically, using the Boltzmann machine architecture (a fully connected
recurrent neural network) with uniformly distributed weights as our model of
study, we numerically measure how complexity changes as a function of network
dynamics and network parameters. We apply an extension of one such
information-theoretic measure of complexity to understand incremental Hebbian
learning in Hopfield networks, a fully recurrent architecture model of
autoassociative memory. In the course of Hebbian learning, the total
information flow reflects a natural upward trend in complexity as the network
attempts to learn more and more patterns.
|
[
{
"created": "Thu, 29 Jun 2017 10:39:15 GMT",
"version": "v1"
},
{
"created": "Sun, 30 Jul 2017 01:01:49 GMT",
"version": "v2"
}
] |
2017-08-01
|
[
[
"Kanwal",
"Maxinder S.",
""
],
[
"Grochow",
"Joshua A.",
""
],
[
"Ay",
"Nihat",
""
]
] |
In the past three decades, many theoretical measures of complexity have been proposed to help understand complex systems. In this work, for the first time, we place these measures on a level playing field, to explore the qualitative similarities and differences between them, and their shortcomings. Specifically, using the Boltzmann machine architecture (a fully connected recurrent neural network) with uniformly distributed weights as our model of study, we numerically measure how complexity changes as a function of network dynamics and network parameters. We apply an extension of one such information-theoretic measure of complexity to understand incremental Hebbian learning in Hopfield networks, a fully recurrent architecture model of autoassociative memory. In the course of Hebbian learning, the total information flow reflects a natural upward trend in complexity as the network attempts to learn more and more patterns.
|
1508.04228
|
Hyeji Kim
|
Hyeji Kim and Benjamin Nachman and Abbas El Gamal
|
Superposition Coding is Almost Always Optimal for the Poisson Broadcast
Channel
|
17 pages, 11 figures, submitted to IEEE Transactions on Information
Theory
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper shows that the capacity region of the continuous-time Poisson
broadcast channel is achieved via superposition coding for most channel
parameter values. Interestingly, the channel in some subset of these parameter
values does not belong to any of the existing classes of broadcast channels for
which superposition coding is optimal (e.g., degraded, less noisy, more
capable). In particular, we introduce the notion of effectively less noisy
broadcast channel and show that it implies less noisy but is not in general
implied by more capable. For the rest of the channel parameter values, we show
that there is a gap between Marton's inner bound and the UV outer bound.
|
[
{
"created": "Tue, 18 Aug 2015 06:49:33 GMT",
"version": "v1"
},
{
"created": "Wed, 26 Aug 2015 23:11:04 GMT",
"version": "v2"
}
] |
2015-08-28
|
[
[
"Kim",
"Hyeji",
""
],
[
"Nachman",
"Benjamin",
""
],
[
"Gamal",
"Abbas El",
""
]
] |
This paper shows that the capacity region of the continuous-time Poisson broadcast channel is achieved via superposition coding for most channel parameter values. Interestingly, the channel in some subset of these parameter values does not belong to any of the existing classes of broadcast channels for which superposition coding is optimal (e.g., degraded, less noisy, more capable). In particular, we introduce the notion of effectively less noisy broadcast channel and show that it implies less noisy but is not in general implied by more capable. For the rest of the channel parameter values, we show that there is a gap between Marton's inner bound and the UV outer bound.
|
1807.11241
|
Mahendran Subramanian
|
Billy Woods, Mahendran Subramanian, Ali Shafti and A. Aldo Faisal
|
Mechanomyography based closed-loop Functional Electrical Stimulation
cycling system
|
Functional Electrical Stimulation Closed loop system, The 7th IEEE
RAS/EMBS International Conference on Biomedical Robotics and Biomechatronics
2018
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Functional Electrical Stimulation (FES) systems are successful in restoring
motor function and supporting paralyzed users. Commercially available FES
products are open loop, meaning that the system is unable to adapt to changing
conditions with the user and their muscles, which results in muscle fatigue and
poor stimulation protocols. This is because it is difficult to close the loop
between stimulation and monitoring of muscle contraction using adaptive
stimulation. FES causes electrical artefacts which make it challenging to
monitor muscle contractions with traditional methods such as electromyography
(EMG). We look to overcome this limitation by combining FES with novel
mechanomyographic (MMG) sensors to be able to monitor muscle activity during
stimulation in real time. To provide a meaningful task we built an FES cycling
rig with a software interface that enabled us to perform adaptive recording and
stimulation, and then combine this with sensors to record forces applied to the
pedals using force sensitive resistors (FSRs), crank angle position using a
magnetic incremental encoder and inputs from the user using switches and a
potentiometer. We illustrated this with a closed-loop stimulation algorithm
that used the inputs from the sensors to control the output of a programmable
RehaStim 1 FES stimulator (Hasomed) in real-time. This recumbent bicycle rig
was used as a testing platform for FES cycling. The algorithm was designed to
respond to a change in requested speed (RPM) from the user and change the
stimulation power (% of maximum current mA) until this speed was achieved and
then maintain it.
|
[
{
"created": "Mon, 30 Jul 2018 08:59:14 GMT",
"version": "v1"
}
] |
2018-07-31
|
[
[
"Woods",
"Billy",
""
],
[
"Subramanian",
"Mahendran",
""
],
[
"Shafti",
"Ali",
""
],
[
"Faisal",
"A. Aldo",
""
]
] |
Functional Electrical Stimulation (FES) systems are successful in restoring motor function and supporting paralyzed users. Commercially available FES products are open loop, meaning that the system is unable to adapt to changing conditions with the user and their muscles which results in muscle fatigue and poor stimulation protocols. This is because it is difficult to close the loop between stimulation and monitoring of muscle contraction using adaptive stimulation. FES causes electrical artefacts which make it challenging to monitor muscle contractions with traditional methods such as electromyography (EMG). We look to overcome this limitation by combining FES with novel mechanomyographic (MMG) sensors to be able to monitor muscle activity during stimulation in real time. To provide a meaningful task we built an FES cycling rig with a software interface that enabled us to perform adaptive recording and stimulation, and then combine this with sensors to record forces applied to the pedals using force sensitive resistors (FSRs), crank angle position using a magnetic incremental encoder and inputs from the user using switches and a potentiometer. We illustrated this with a closed-loop stimulation algorithm that used the inputs from the sensors to control the output of a programmable RehaStim 1 FES stimulator (Hasomed) in real-time. This recumbent bicycle rig was used as a testing platform for FES cycling. The algorithm was designed to respond to a change in requested speed (RPM) from the user and change the stimulation power (% of maximum current mA) until this speed was achieved and then maintain it.
|
1711.05611
|
David Bau iii
|
Bolei Zhou, David Bau, Aude Oliva, and Antonio Torralba
|
Interpreting Deep Visual Representations via Network Dissection
|
*B. Zhou and D. Bau contributed equally to this work. 15 pages, 27
figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The success of recent deep convolutional neural networks (CNNs) depends on
learning hidden representations that can summarize the important factors of
variation behind the data. However, CNNs are often criticized as being black
boxes
that lack interpretability, since they have millions of unexplained model
parameters. In this work, we describe Network Dissection, a method that
interprets networks by providing labels for the units of their deep visual
representations. The proposed method quantifies the interpretability of CNN
representations by evaluating the alignment between individual hidden units and
a set of visual semantic concepts. By identifying the best alignments, units
are given human interpretable labels across a range of objects, parts, scenes,
textures, materials, and colors. The method reveals that deep representations
are more transparent and interpretable than expected: we find that
representations are significantly more interpretable than they would be under a
random equivalently powerful basis. We apply the method to interpret and
compare the latent representations of various network architectures trained to
solve different supervised and self-supervised training tasks. We then examine
factors affecting the network interpretability such as the number of the
training iterations, regularizations, different initializations, and the
network depth and width. Finally we show that the interpreted units can be used
to provide explicit explanations of a prediction given by a CNN for an image.
Our results highlight that interpretability is an important property of deep
neural networks that provides new insights into their hierarchical structure.
|
[
{
"created": "Wed, 15 Nov 2017 15:05:25 GMT",
"version": "v1"
},
{
"created": "Tue, 26 Jun 2018 15:38:31 GMT",
"version": "v2"
}
] |
2018-06-27
|
[
[
"Zhou",
"Bolei",
""
],
[
"Bau",
"David",
""
],
[
"Oliva",
"Aude",
""
],
[
"Torralba",
"Antonio",
""
]
] |
The success of recent deep convolutional neural networks (CNNs) depends on learning hidden representations that can summarize the important factors of variation behind the data. However, CNNs are often criticized as being black boxes that lack interpretability, since they have millions of unexplained model parameters. In this work, we describe Network Dissection, a method that interprets networks by providing labels for the units of their deep visual representations. The proposed method quantifies the interpretability of CNN representations by evaluating the alignment between individual hidden units and a set of visual semantic concepts. By identifying the best alignments, units are given human interpretable labels across a range of objects, parts, scenes, textures, materials, and colors. The method reveals that deep representations are more transparent and interpretable than expected: we find that representations are significantly more interpretable than they would be under a random equivalently powerful basis. We apply the method to interpret and compare the latent representations of various network architectures trained to solve different supervised and self-supervised training tasks. We then examine factors affecting the network interpretability such as the number of the training iterations, regularizations, different initializations, and the network depth and width. Finally we show that the interpreted units can be used to provide explicit explanations of a prediction given by a CNN for an image. Our results highlight that interpretability is an important property of deep neural networks that provides new insights into their hierarchical structure.
|
1812.01167
|
Erik Demaine
|
Erik D. Demaine and Martin L. Demaine and David A. Huffman and Duks
Koschitz and Tomohiro Tachi
|
Conic Crease Patterns with Reflecting Rule Lines
|
17 pages, 12 figures. In Origami^7: Proceedings of the 7th
International Meeting on Origami in Science, Mathematics and Education
| null | null | null |
cs.CG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We characterize when two conic curved creases are compatible with each other,
when the rule lines must converge to conic foci and reflect at the crease.
Namely, two conics are compatible (can be connected by rule segments in a
foldable curved crease pattern) if and only if they have equal or reciprocal
eccentricity. Thus, circles (eccentricity 0) and parabolas (eccentricity 1) are
compatible with only themselves (when scaled from a focus), and ellipses
(eccentricity strictly between 0 and 1) and hyperbolas (eccentricity above 1)
are compatible with themselves and each other (but only in specific pairings).
The foundation of this result is a general condition relating any two curved
creases connected by rule segments. We also use our characterization to analyze
several curved crease designs.
|
[
{
"created": "Tue, 4 Dec 2018 02:18:12 GMT",
"version": "v1"
}
] |
2018-12-05
|
[
[
"Demaine",
"Erik D.",
""
],
[
"Demaine",
"Martin L.",
""
],
[
"Huffman",
"David A.",
""
],
[
"Koschitz",
"Duks",
""
],
[
"Tachi",
"Tomohiro",
""
]
] |
We characterize when two conic curved creases are compatible with each other, when the rule lines must converge to conic foci and reflect at the crease. Namely, two conics are compatible (can be connected by rule segments in a foldable curved crease pattern) if and only if they have equal or reciprocal eccentricity. Thus, circles (eccentricity 0) and parabolas (eccentricity 1) are compatible with only themselves (when scaled from a focus), and ellipses (eccentricity strictly between 0 and 1) and hyperbolas (eccentricity above 1) are compatible with themselves and each other (but only in specific pairings). The foundation of this result is a general condition relating any two curved creases connected by rule segments. We also use our characterization to analyze several curved crease designs.
|
1507.08569
|
Mohsen Yaghoubi Suraki
|
Mohsen Yaghoubi Suraki, Morteza Yaghoubi Suraki, Leila SourakiAzad
|
HMIoT: A New Healthcare Model Based on Internet of Things
|
8 pages, 9 figures, Journal
|
IJCSI International Journal of Computer Science Issues, Volume 12,
Issue 1, No 1, January 2015 ISSN (Print): 1694-0814 | ISSN (Online):
1694-0784 www.IJCSI.org
| null | null |
cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In recent decades, with the development of new equipment, use of the Internet
and of things connected to it has been growing. Raising awareness of this
technology as the scope of its applications expands is therefore necessary and
important. These days, the use of intelligent and autonomous devices in our
daily lives has become commonplace, and the Internet is the most important
link between these tools, even at close distances. Things connected to the
Internet are already in use and can span all the sciences, as a step toward
developing and coordinating them. In this paper we investigate applications
and uses of the Internet of Things from the perspective of various sciences.
We show how this phenomenon can influence people's future health.
|
[
{
"created": "Tue, 28 Jul 2015 20:18:18 GMT",
"version": "v1"
}
] |
2015-07-31
|
[
[
"Suraki",
"Mohsen Yaghoubi",
""
],
[
"Suraki",
"Morteza Yaghoubi",
""
],
[
"SourakiAzad",
"Leila",
""
]
] |
In recent decades, with the development of new equipment, use of the Internet and of things connected to it has been growing. Raising awareness of this technology as the scope of its applications expands is therefore necessary and important. These days, the use of intelligent and autonomous devices in our daily lives has become commonplace, and the Internet is the most important link between these tools, even at close distances. Things connected to the Internet are already in use and can span all the sciences, as a step toward developing and coordinating them. In this paper we investigate applications and uses of the Internet of Things from the perspective of various sciences. We show how this phenomenon can influence people's future health.
|
2304.12931
|
Victor Jung
|
Victor J.B. Jung, Arne Symons, Linyan Mei, Marian Verhelst, Luca
Benini
|
SALSA: Simulated Annealing based Loop-Ordering Scheduler for DNN
Accelerators
|
5 pages, 6 figures, open-source at
https://github.com/ZigZag-Project/zigzag
| null | null | null |
cs.AR cs.AI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
To meet the growing need for computational power for DNNs, multiple
specialized hardware architectures have been proposed. Each DNN layer should be
mapped onto the hardware with the most efficient schedule; however,
state-of-the-art (SotA) schedulers struggle to consistently provide optimal
schedules in a reasonable time across all DNN-HW combinations.
This paper proposes SALSA, a fast dual-engine scheduler to generate optimal
execution schedules for both even and uneven mapping. We introduce a new
strategy, combining exhaustive search with simulated annealing to address the
dynamic nature of the loop ordering design space size across layers. SALSA is
extensively benchmarked against two SotA schedulers, LOMA and Timeloop, on 5
different DNNs. On average, SALSA finds schedules with 11.9% and 7.6% lower
energy while speeding up the search by 1.7x and 24x compared to LOMA and
Timeloop, respectively.
|
[
{
"created": "Thu, 20 Apr 2023 12:00:08 GMT",
"version": "v1"
},
{
"created": "Fri, 14 Jun 2024 07:49:38 GMT",
"version": "v2"
}
] |
2024-06-17
|
[
[
"Jung",
"Victor J. B.",
""
],
[
"Symons",
"Arne",
""
],
[
"Mei",
"Linyan",
""
],
[
"Verhelst",
"Marian",
""
],
[
"Benini",
"Luca",
""
]
] |
To meet the growing need for computational power for DNNs, multiple specialized hardware architectures have been proposed. Each DNN layer should be mapped onto the hardware with the most efficient schedule; however, state-of-the-art (SotA) schedulers struggle to consistently provide optimal schedules in a reasonable time across all DNN-HW combinations. This paper proposes SALSA, a fast dual-engine scheduler to generate optimal execution schedules for both even and uneven mapping. We introduce a new strategy, combining exhaustive search with simulated annealing to address the dynamic nature of the loop ordering design space size across layers. SALSA is extensively benchmarked against two SotA schedulers, LOMA and Timeloop, on 5 different DNNs. On average, SALSA finds schedules with 11.9% and 7.6% lower energy while speeding up the search by 1.7x and 24x compared to LOMA and Timeloop, respectively.
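As a hedged illustration of the simulated-annealing half of such a dual-engine search, here is a generic sketch under our own assumptions (parameter names and the toy objective are ours; this is not SALSA's actual loop-ordering engine):

```python
import math
import random

def simulated_annealing(init, energy, neighbor, t0=1.0, cooling=0.95,
                        steps=2000, seed=1):
    """Generic simulated-annealing loop: always accept downhill moves,
    accept uphill moves with a Boltzmann probability that shrinks as we cool."""
    rng = random.Random(seed)
    state, e = init, energy(init)
    best, best_e = state, e
    t = t0
    for _ in range(steps):
        cand = neighbor(state, rng)
        ce = energy(cand)
        if ce <= e or rng.random() < math.exp((e - ce) / max(t, 1e-12)):
            state, e = cand, ce
            if e < best_e:
                best, best_e = state, e
        t *= cooling  # geometric cooling schedule
    return best, best_e

# Toy usage: minimise (x - 3)^2 over the integers, stepping +/-1 at a time.
best, best_e = simulated_annealing(
    init=0,
    energy=lambda x: (x - 3) ** 2,
    neighbor=lambda x, rng: x + rng.choice((-1, 1)),
)
```

In a scheduler setting, `state` would be a loop ordering and `energy` the modelled energy or latency of that mapping.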
|
1405.0641
|
Xiaojun Wan
|
Xiaojun Wan
|
x-index: a fantastic new indicator for quantifying a scientist's
scientific impact
| null | null | null | null |
cs.DL physics.soc-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
h-index has become the most popular indicator for quantifying a scientist's
scientific impact in various scientific fields. h-index is defined as the
largest number of papers with citation number larger than or equal to h and it
treats each citation equally. However, different citations usually come from
different papers with different influence and quality, and a citation from a
highly influential paper is a greater recognition of the target paper than a
citation from an ordinary paper. Based on this assumption, we propose a new
indicator named x-index to quantify a scientist's scientific impact by
considering only the citations coming from influential papers. x-index is
defined as the largest number of papers with influential citation number larger
than or equal to x, where each influential citation comes from a paper for
which the average ACNPP (Average Citation Number Per Paper) of its authors is
larger than or equal to x. Through analysis of the APS dataset, we find that
the proposed x-index has much better ability to discriminate between Physics
Prize Winners and ordinary physicists.
|
[
{
"created": "Sun, 4 May 2014 02:26:52 GMT",
"version": "v1"
}
] |
2014-05-06
|
[
[
"Wan",
"Xiaojun",
""
]
] |
h-index has become the most popular indicator for quantifying a scientist's scientific impact in various scientific fields. h-index is defined as the largest number of papers with citation number larger than or equal to h and it treats each citation equally. However, different citations usually come from different papers with different influence and quality, and a citation from a highly influential paper is a greater recognition of the target paper than a citation from an ordinary paper. Based on this assumption, we propose a new indicator named x-index to quantify a scientist's scientific impact by considering only the citations coming from influential papers. x-index is defined as the largest number of papers with influential citation number larger than or equal to x, where each influential citation comes from a paper for which the average ACNPP (Average Citation Number Per Paper) of its authors is larger than or equal to x. Through analysis of the APS dataset, we find that the proposed x-index has much better ability to discriminate between Physics Prize Winners and ordinary physicists.
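The definition above can be made concrete with a small Python sketch (the data layout here, per-paper lists of citing papers, each represented by its authors' names, is our own illustrative assumption, not the paper's code):

```python
def x_index(papers, author_acnpp):
    """papers[i] is the list of papers citing paper i, each citing paper
    represented by the list of its authors' names. author_acnpp maps an
    author to their Average Citation Number Per Paper (ACNPP)."""
    best = 0
    for x in range(1, len(papers) + 1):
        qualifying = 0
        for citing in papers:
            # A citation is influential (at level x) if the citing paper's
            # authors have an average ACNPP of at least x.
            influential = sum(
                1 for authors in citing
                if sum(author_acnpp[a] for a in authors) / len(authors) >= x
            )
            if influential >= x:
                qualifying += 1
        if qualifying >= x:
            best = x  # x papers each have at least x influential citations
    return best
```

For example, a scientist whose two papers are each cited twice by papers authored only by high-ACNPP authors has x-index 2, while citations from low-ACNPP authors contribute nothing.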
|
2105.12990
|
Tianyi Zhang
|
Tianyi Zhang, Jie Lin, Peng Hu, Bin Zhao, Mohamed M. Sabry Aly
|
PSRR-MaxpoolNMS: Pyramid Shifted MaxpoolNMS with Relationship Recovery
|
Accepted by CVPR2021
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Non-maximum Suppression (NMS) is an essential postprocessing step in modern
convolutional neural networks for object detection. Unlike convolutions which
are inherently parallel, the de facto standard for NMS, namely GreedyNMS,
cannot be easily parallelized and thus could be the performance bottleneck in
convolutional object detection pipelines. MaxpoolNMS is introduced as a
parallelizable alternative to GreedyNMS, which in turn enables faster speed
than GreedyNMS at comparable accuracy. However, MaxpoolNMS is only capable of
replacing GreedyNMS at the first stage of two-stage detectors like
Faster-RCNN. There is a significant drop in accuracy when applying MaxpoolNMS
at the final detection stage, due to the fact that MaxpoolNMS fails to
approximate GreedyNMS precisely in terms of bounding box selection. In this
paper, we propose a general, parallelizable and configurable approach
PSRR-MaxpoolNMS, to completely replace GreedyNMS at all stages in all
detectors. By introducing a simple Relationship Recovery module and a Pyramid
Shifted MaxpoolNMS module, our PSRR-MaxpoolNMS is able to approximate GreedyNMS
more precisely than MaxpoolNMS. Comprehensive experiments show that our
approach outperforms MaxpoolNMS by a large margin, and it is proven faster than
GreedyNMS with comparable accuracy. For the first time, PSRR-MaxpoolNMS
provides a fully parallelizable solution for customized hardware design, which
can be reused for accelerating NMS everywhere.
|
[
{
"created": "Thu, 27 May 2021 08:24:21 GMT",
"version": "v1"
}
] |
2021-05-28
|
[
[
"Zhang",
"Tianyi",
""
],
[
"Lin",
"Jie",
""
],
[
"Hu",
"Peng",
""
],
[
"Zhao",
"Bin",
""
],
[
"Aly",
"Mohamed M. Sabry",
""
]
] |
Non-maximum Suppression (NMS) is an essential postprocessing step in modern convolutional neural networks for object detection. Unlike convolutions which are inherently parallel, the de facto standard for NMS, namely GreedyNMS, cannot be easily parallelized and thus could be the performance bottleneck in convolutional object detection pipelines. MaxpoolNMS is introduced as a parallelizable alternative to GreedyNMS, which in turn enables faster speed than GreedyNMS at comparable accuracy. However, MaxpoolNMS is only capable of replacing GreedyNMS at the first stage of two-stage detectors like Faster-RCNN. There is a significant drop in accuracy when applying MaxpoolNMS at the final detection stage, due to the fact that MaxpoolNMS fails to approximate GreedyNMS precisely in terms of bounding box selection. In this paper, we propose a general, parallelizable and configurable approach PSRR-MaxpoolNMS, to completely replace GreedyNMS at all stages in all detectors. By introducing a simple Relationship Recovery module and a Pyramid Shifted MaxpoolNMS module, our PSRR-MaxpoolNMS is able to approximate GreedyNMS more precisely than MaxpoolNMS. Comprehensive experiments show that our approach outperforms MaxpoolNMS by a large margin, and it is proven faster than GreedyNMS with comparable accuracy. For the first time, PSRR-MaxpoolNMS provides a fully parallelizable solution for customized hardware design, which can be reused for accelerating NMS everywhere.
|
1003.1940
|
Vamsi Kundeti
|
Vamsi Kundeti, Sanguthevar Rajasekaran, Hieu Dinh
|
Efficient Parallel and Out of Core Algorithms for Constructing Large
Bi-directed de Bruijn Graphs
| null | null | null | null |
cs.DS cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Assembling genomic sequences from a set of overlapping reads is one of the
most fundamental problems in computational biology. Algorithms addressing the
assembly problem fall into two broad categories, based on the data structures
which they employ: the first class uses an overlap/string graph and the second
type uses a de Bruijn graph. However, with the recent advances in short-read
sequencing technology, de Bruijn graph based algorithms seem to play a vital
role in practice.
Efficient algorithms for building these massive de Bruijn graphs are
essential in large sequencing projects based on short reads. In Jackson et al.
(ICPP-2008), an $O(n/p)$ time parallel algorithm has been given for this problem.
Here $n$ is the size of the input and $p$ is the number of processors. This
algorithm enumerates all possible bi-directed edges which can overlap with a
node and ends up generating $\Theta(n\Sigma)$ messages.
In this paper we present a $\Theta(n/p)$ time parallel algorithm whose
communication complexity equals that of parallel sorting and which is not
sensitive to $\Sigma$. The generality of our algorithm makes it very easy to extend it
even to the out-of-core model and in this case it has an optimal I/O complexity
of $\Theta(\frac{n\log(n/B)}{B\log(M/B)})$. We demonstrate the scalability of
our parallel algorithm on an SGI/Altix computer. A comparison of our algorithm
with that of Jackson et al. (ICPP-2008) reveals that our algorithm is faster. We
also provide efficient algorithms for the bi-directed chain compaction problem.
|
[
{
"created": "Tue, 9 Mar 2010 17:54:01 GMT",
"version": "v1"
}
] |
2010-03-10
|
[
[
"Kundeti",
"Vamsi",
""
],
[
"Rajasekaran",
"Sanguthevar",
""
],
[
"Dinh",
"Hieu",
""
]
] |
Assembling genomic sequences from a set of overlapping reads is one of the most fundamental problems in computational biology. Algorithms addressing the assembly problem fall into two broad categories, based on the data structures which they employ: the first class uses an overlap/string graph and the second type uses a de Bruijn graph. However, with the recent advances in short-read sequencing technology, de Bruijn graph based algorithms seem to play a vital role in practice. Efficient algorithms for building these massive de Bruijn graphs are essential in large sequencing projects based on short reads. In Jackson et al. (ICPP-2008), an $O(n/p)$ time parallel algorithm has been given for this problem. Here $n$ is the size of the input and $p$ is the number of processors. This algorithm enumerates all possible bi-directed edges which can overlap with a node and ends up generating $\Theta(n\Sigma)$ messages. In this paper we present a $\Theta(n/p)$ time parallel algorithm whose communication complexity equals that of parallel sorting and which is not sensitive to $\Sigma$. The generality of our algorithm makes it very easy to extend it even to the out-of-core model and in this case it has an optimal I/O complexity of $\Theta(\frac{n\log(n/B)}{B\log(M/B)})$. We demonstrate the scalability of our parallel algorithm on an SGI/Altix computer. A comparison of our algorithm with that of Jackson et al. (ICPP-2008) reveals that our algorithm is faster. We also provide efficient algorithms for the bi-directed chain compaction problem.
|
2401.00031
|
Xiaoqian Liu
|
Xiaoqian Liu, Jianbin Jiao, Junge Zhang
|
Self-supervised Pretraining for Decision Foundation Model: Formulation,
Pipeline and Challenges
| null | null | null | null |
cs.LG cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Decision-making is a dynamic process requiring perception, memory, and
reasoning to make choices and find optimal policies. Traditional approaches to
decision-making suffer from poor sample efficiency and generalization, while
large-scale self-supervised pretraining has enabled fast adaptation with
fine-tuning or few-shot learning in language and vision. We thus argue for
integrating knowledge acquired from generic large-scale self-supervised
pretraining into downstream decision-making problems. We propose a
Pretrain-Then-Adapt pipeline and survey recent work on data collection,
pretraining objectives and adaptation strategies for decision-making
pretraining and downstream inference. Finally, we identify critical challenges
and future directions for developing a decision foundation model with the help
of generic and flexible self-supervised pretraining.
|
[
{
"created": "Fri, 29 Dec 2023 08:18:52 GMT",
"version": "v1"
},
{
"created": "Fri, 5 Jan 2024 07:21:06 GMT",
"version": "v2"
}
] |
2024-01-08
|
[
[
"Liu",
"Xiaoqian",
""
],
[
"Jiao",
"Jianbin",
""
],
[
"Zhang",
"Junge",
""
]
] |
Decision-making is a dynamic process requiring perception, memory, and reasoning to make choices and find optimal policies. Traditional approaches to decision-making suffer from poor sample efficiency and generalization, while large-scale self-supervised pretraining has enabled fast adaptation with fine-tuning or few-shot learning in language and vision. We thus argue for integrating knowledge acquired from generic large-scale self-supervised pretraining into downstream decision-making problems. We propose a Pretrain-Then-Adapt pipeline and survey recent work on data collection, pretraining objectives and adaptation strategies for decision-making pretraining and downstream inference. Finally, we identify critical challenges and future directions for developing a decision foundation model with the help of generic and flexible self-supervised pretraining.
|
1406.5153
|
Ioannis Giotis
|
Josep D\'iaz, Ioannis Giotis, Lefteris Kirousis, Yiannis Mourtos,
Maria J. Serna
|
Optimizing the Social Cost of Congestion Games by Imposing Variable
Delays
| null | null | null | null |
cs.GT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We describe a new coordination mechanism for non-atomic congestion games that
leads to a (selfish) social cost which is arbitrarily close to the non-selfish
optimal. This mechanism does not incur any additional cost, such as tolls,
which are usually differentiated from the social cost as expressed in terms of
delays only.
|
[
{
"created": "Thu, 19 Jun 2014 19:05:40 GMT",
"version": "v1"
}
] |
2014-06-20
|
[
[
"Díaz",
"Josep",
""
],
[
"Giotis",
"Ioannis",
""
],
[
"Kirousis",
"Lefteris",
""
],
[
"Mourtos",
"Yiannis",
""
],
[
"Serna",
"Maria J.",
""
]
] |
We describe a new coordination mechanism for non-atomic congestion games that leads to a (selfish) social cost which is arbitrarily close to the non-selfish optimal. This mechanism does not incur any additional cost, such as tolls, which are usually differentiated from the social cost as expressed in terms of delays only.
|
0901.0753
|
Sung-eok Jeon
|
Sung-eok Jeon and Chuanyi Ji
|
Distributed Preemption Decisions: Probabilistic Graphical Model,
Algorithm and Near-Optimality
| null | null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Cooperative decision making is a vision of future network management and
control. Distributed connection preemption is an important example where nodes
can make intelligent decisions on allocating resources and controlling traffic
flows for multi-class service networks. A challenge is that nodal decisions are
spatially dependent as traffic flows trespass multiple nodes in a network.
Hence the performance-complexity trade-off becomes important, i.e., how
accurate decisions are versus how much information is exchanged among nodes.
Connection preemption is known to be NP-complete. Centralized preemption is
optimal but computationally intractable. Decentralized preemption is
computationally efficient but may result in poor performance. This work
investigates distributed preemption where nodes decide whether and which flows
to preempt using only local information exchange with neighbors. We develop,
based on probabilistic graphical models, a near-optimal distributed
algorithm. The algorithm is used by each node to make collectively near-optimal
preemption decisions. We study trade-offs between near-optimal performance and
complexity that correspond to the amount of information exchange in the
distributed algorithm. The algorithm is validated by both analysis and
simulation.
|
[
{
"created": "Wed, 7 Jan 2009 04:36:58 GMT",
"version": "v1"
}
] |
2009-01-08
|
[
[
"Jeon",
"Sung-eok",
""
],
[
"Ji",
"Chuanyi",
""
]
] |
Cooperative decision making is a vision of future network management and control. Distributed connection preemption is an important example where nodes can make intelligent decisions on allocating resources and controlling traffic flows for multi-class service networks. A challenge is that nodal decisions are spatially dependent as traffic flows trespass multiple nodes in a network. Hence the performance-complexity trade-off becomes important, i.e., how accurate decisions are versus how much information is exchanged among nodes. Connection preemption is known to be NP-complete. Centralized preemption is optimal but computationally intractable. Decentralized preemption is computationally efficient but may result in poor performance. This work investigates distributed preemption where nodes decide whether and which flows to preempt using only local information exchange with neighbors. We develop, based on probabilistic graphical models, a near-optimal distributed algorithm. The algorithm is used by each node to make collectively near-optimal preemption decisions. We study trade-offs between near-optimal performance and complexity that correspond to the amount of information exchange in the distributed algorithm. The algorithm is validated by both analysis and simulation.
|
2305.13067
|
Joe Stacey
|
Joe Stacey and Marek Rei
|
Distilling Robustness into Natural Language Inference Models with
Domain-Targeted Augmentation
|
Accepted at ACL Findings 2024
| null | null | null |
cs.CL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Knowledge distillation optimises a smaller student model to behave similarly
to a larger teacher model, retaining some of the performance benefits. While
this method can improve results on in-distribution examples, it does not
necessarily generalise to out-of-distribution (OOD) settings. We investigate
two complementary methods for improving the robustness of the resulting student
models on OOD domains. The first approach augments the distillation with
generated unlabelled examples that match the target distribution. The second
method upsamples data points among the training set that are similar to the
target distribution. When applied on the task of natural language inference
(NLI), our experiments on MNLI show that distillation with these modifications
outperforms previous robustness solutions. We also find that these methods
improve performance on OOD domains even beyond the target domain.
|
[
{
"created": "Mon, 22 May 2023 14:37:05 GMT",
"version": "v1"
},
{
"created": "Thu, 30 May 2024 10:00:14 GMT",
"version": "v2"
},
{
"created": "Wed, 24 Jul 2024 18:54:53 GMT",
"version": "v3"
}
] |
2024-07-26
|
[
[
"Stacey",
"Joe",
""
],
[
"Rei",
"Marek",
""
]
] |
Knowledge distillation optimises a smaller student model to behave similarly to a larger teacher model, retaining some of the performance benefits. While this method can improve results on in-distribution examples, it does not necessarily generalise to out-of-distribution (OOD) settings. We investigate two complementary methods for improving the robustness of the resulting student models on OOD domains. The first approach augments the distillation with generated unlabelled examples that match the target distribution. The second method upsamples data points among the training set that are similar to the target distribution. When applied on the task of natural language inference (NLI), our experiments on MNLI show that distillation with these modifications outperforms previous robustness solutions. We also find that these methods improve performance on OOD domains even beyond the target domain.
|
1403.2237
|
Li Li
|
Li Li, Jun Pang, Yang Liu, Jun Sun, Jin Song Dong
|
Stateful Security Protocol Verification
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A long-standing research problem in security protocol design is how to
efficiently verify security protocols with tamper-resistant global states. In
this paper, we address this problem by first proposing a protocol specification
framework, which explicitly represents protocol execution states and state
transformations. Secondly, we develop an algorithm for verifying security
properties by utilizing the key ingredients of first-order reasoning for
reachability analysis, while tracking state transformation and checking the
validity of newly generated states. Our verification algorithm is proven to be
(partially) correct, if it terminates. We have implemented the proposed
framework and verification algorithms in a tool named SSPA, and evaluate it
using a number of stateful security protocols. The experimental results show
that our approach is not only feasible but also practically efficient. In
particular, we have found a security flaw in the digital envelope protocol,
which could not be detected by existing security protocol verifiers.
|
[
{
"created": "Mon, 10 Mar 2014 13:40:00 GMT",
"version": "v1"
}
] |
2014-03-11
|
[
[
"Li",
"Li",
""
],
[
"Pang",
"Jun",
""
],
[
"Liu",
"Yang",
""
],
[
"Sun",
"Jun",
""
],
[
"Dong",
"Jin Song",
""
]
] |
A long-standing research problem in security protocol design is how to efficiently verify security protocols with tamper-resistant global states. In this paper, we address this problem by first proposing a protocol specification framework, which explicitly represents protocol execution states and state transformations. Secondly, we develop an algorithm for verifying security properties by utilizing the key ingredients of first-order reasoning for reachability analysis, while tracking state transformation and checking the validity of newly generated states. Our verification algorithm is proven to be (partially) correct, if it terminates. We have implemented the proposed framework and verification algorithms in a tool named SSPA, and evaluate it using a number of stateful security protocols. The experimental results show that our approach is not only feasible but also practically efficient. In particular, we have found a security flaw in the digital envelope protocol, which could not be detected by existing security protocol verifiers.
|
2312.01487
|
Lung-Pan Cheng
|
Eden Cong-He Xu, Lung-Pan Cheng
|
BetterMinton Service: Analyzing the Badminton Service using Open Kinetic
Chain
|
14 pages, The 9th Annual Conference of Taiwanese Association of
Computer-Human Interaction
| null | null | null |
cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
We present a badminton training system that focuses on the backhand short
service. Unlike prior motor skill training systems, which focus on the
trainee's posture, our system analyzes the process of moving joints with the
open kinetic chain (OKC), which helps align movement and minimize muscle use
for better joint control. We process the users' mocap data to visually show
their last service process compared to 4 ideal OKC characteristics that we
collected from a formative study with 6 sub-elite players, as well as the
recommended contact posture. We validate our system through a 12-user study
that measures serving accuracy, qualitative feedback, and skeletal data with
users at various skill levels, and open-source our skeletal analysis model for
future use. While the
participants' overall service accuracy was not significantly improved, our
results show that our system helps participants in the short term to fine-tune
their service motion closer to our ideal 4 OKC characteristics.
|
[
{
"created": "Sun, 3 Dec 2023 19:02:52 GMT",
"version": "v1"
}
] |
2023-12-05
|
[
[
"Xu",
"Eden Cong-He",
""
],
[
"Cheng",
"Lung-Pan",
""
]
] |
We present a badminton training system that focuses on the backhand short service. Unlike prior motor skill training systems, which focus on the trainee's posture, our system analyzes the process of moving joints with the open kinetic chain (OKC), which helps align movement and minimize muscle use for better joint control. We process the users' mocap data to visually show their last service process compared to 4 ideal OKC characteristics that we collected from a formative study with 6 sub-elite players, as well as the recommended contact posture. We validate our system through a 12-user study that measures serving accuracy, qualitative feedback, and skeletal data with users at various skill levels, and open-source our skeletal analysis model for future use. While the participants' overall service accuracy was not significantly improved, our results show that our system helps participants in the short term to fine-tune their service motion closer to our ideal 4 OKC characteristics.
|
2103.08306
|
Bushra Sabir
|
Bushra Sabir, M. Ali Babar, Raj Gaire
|
ReinforceBug: A Framework to Generate Adversarial Textual Examples
|
Accepted in NAACL-HLT 2021
| null | null | null |
cs.LG cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Adversarial Examples (AEs) generated by perturbing original training examples
are useful in improving the robustness of Deep Learning (DL) based models. Most
prior works generate AEs that are either unconscionable due to lexical errors
or semantically or functionally deviant from original examples. In this paper,
we present ReinforceBug, a reinforcement learning framework that learns a
policy that is transferable on unseen datasets and generates utility-preserving
and transferable (on other models) AEs. Our results show that our method is on
average 10% more successful as compared to the state-of-the-art attack
TextFooler. Moreover, the target models have on average 73.64% confidence in
the wrong prediction, the generated AEs preserve the functional equivalence and
semantic similarity (83.38%) to their original counterparts, and are
transferable on other models with an average success rate of 46%.
|
[
{
"created": "Thu, 11 Mar 2021 05:35:51 GMT",
"version": "v1"
}
] |
2021-03-16
|
[
[
"Sabir",
"Bushra",
""
],
[
"Babar",
"M. Ali",
""
],
[
"Gaire",
"Raj",
""
]
] |
Adversarial Examples (AEs) generated by perturbing original training examples are useful in improving the robustness of Deep Learning (DL) based models. Most prior works generate AEs that are either unconscionable due to lexical errors or semantically or functionally deviant from original examples. In this paper, we present ReinforceBug, a reinforcement learning framework that learns a policy that is transferable on unseen datasets and generates utility-preserving and transferable (on other models) AEs. Our results show that our method is on average 10% more successful as compared to the state-of-the-art attack TextFooler. Moreover, the target models have on average 73.64% confidence in the wrong prediction, the generated AEs preserve the functional equivalence and semantic similarity (83.38%) to their original counterparts, and are transferable on other models with an average success rate of 46%.
|
1703.05060
|
Dave Zachariah
|
Dave Zachariah and Petre Stoica and Thomas B. Sch\"on
|
Online Learning for Distribution-Free Prediction
| null | null | null | null |
cs.LG stat.CO stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We develop an online learning method for prediction, which is important in
problems with large and/or streaming data sets. We formulate the learning
approach using a covariance-fitting methodology, and show that the resulting
predictor has desirable computational and distribution-free properties: It is
implemented online with a runtime that scales linearly in the number of
samples; has a constant memory requirement; avoids local minima problems; and
prunes away redundant feature dimensions without relying on restrictive
assumptions on the data distribution. In conjunction with the split conformal
approach, it also produces distribution-free prediction confidence intervals in
a computationally efficient manner. The method is demonstrated on both real and
synthetic datasets.
|
[
{
"created": "Wed, 15 Mar 2017 10:20:32 GMT",
"version": "v1"
}
] |
2017-03-16
|
[
[
"Zachariah",
"Dave",
""
],
[
"Stoica",
"Petre",
""
],
[
"Schön",
"Thomas B.",
""
]
] |
We develop an online learning method for prediction, which is important in problems with large and/or streaming data sets. We formulate the learning approach using a covariance-fitting methodology, and show that the resulting predictor has desirable computational and distribution-free properties: It is implemented online with a runtime that scales linearly in the number of samples; has a constant memory requirement; avoids local minima problems; and prunes away redundant feature dimensions without relying on restrictive assumptions on the data distribution. In conjunction with the split conformal approach, it also produces distribution-free prediction confidence intervals in a computationally efficient manner. The method is demonstrated on both real and synthetic datasets.
|
1903.01003
|
Mohamed Akrout
|
Ismail Akrout, Amal Feriani, Mohamed Akrout
|
Hacking Google reCAPTCHA v3 using Reinforcement Learning
|
Accepted for the Conference on Reinforcement Learning and Decision
Making (RLDM) 2019
| null | null | null |
cs.LG cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a Reinforcement Learning (RL) methodology to bypass Google
reCAPTCHA v3. We formulate the problem as a grid world where the agent learns
how to move the mouse and click on the reCAPTCHA button to receive a high
score. We study the performance of the agent when we vary the cell size of the
grid world and show that the performance drops when the agent takes big steps
toward the goal. Finally, we use a divide-and-conquer strategy to defeat the
reCAPTCHA system for any grid resolution. Our proposed method achieves a
success rate of 97.4% on a 100x100 grid and 96.7% on a 1000x1000 screen
resolution.
|
[
{
"created": "Sun, 3 Mar 2019 22:10:47 GMT",
"version": "v1"
},
{
"created": "Tue, 12 Mar 2019 05:22:08 GMT",
"version": "v2"
},
{
"created": "Thu, 18 Apr 2019 16:22:33 GMT",
"version": "v3"
}
] |
2019-04-19
|
[
[
"Akrout",
"Ismail",
""
],
[
"Feriani",
"Amal",
""
],
[
"Akrout",
"Mohamed",
""
]
] |
We present a Reinforcement Learning (RL) methodology to bypass Google reCAPTCHA v3. We formulate the problem as a grid world where the agent learns how to move the mouse and click on the reCAPTCHA button to receive a high score. We study the performance of the agent when we vary the cell size of the grid world and show that the performance drops when the agent takes big steps toward the goal. Finally, we use a divide-and-conquer strategy to defeat the reCAPTCHA system for any grid resolution. Our proposed method achieves a success rate of 97.4% on a 100x100 grid and 96.7% on a 1000x1000 screen resolution.
|
1711.01262
|
He Sun
|
He Sun and Luca Zanetti
|
Distributed Graph Clustering and Sparsification
| null | null | null | null |
cs.DS cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Graph clustering is a fundamental computational problem with a number of
applications in algorithm design, machine learning, data mining, and analysis
of social networks. Over the past decades, researchers have proposed a number
of algorithmic design methods for graph clustering. Most of these methods,
however, are based on complicated spectral techniques or convex optimisation,
and cannot be directly applied for clustering many networks that occur in
practice, whose information is often collected on different sites. Designing a
simple and distributed clustering algorithm is of great interest, and has wide
applications for processing big datasets.
In this paper we present a simple and distributed algorithm for graph
clustering: for a wide class of graphs that are characterised by a strong
cluster-structure, our algorithm finishes in a poly-logarithmic number of
rounds, and recovers a partition of the graph close to optimal. One of the main
components behind our algorithm is a sampling scheme that, given a dense graph
as input, produces a sparse subgraph that provably preserves the
cluster-structure of the input. Compared with previous sparsification
algorithms that require Laplacian solvers or involve combinatorial
constructions, this component is easy to implement in a distributed way and
runs fast in practice.
|
[
{
"created": "Fri, 3 Nov 2017 17:52:28 GMT",
"version": "v1"
}
] |
2017-11-06
|
[
[
"Sun",
"He",
""
],
[
"Zanetti",
"Luca",
""
]
] |
Graph clustering is a fundamental computational problem with a number of applications in algorithm design, machine learning, data mining, and analysis of social networks. Over the past decades, researchers have proposed a number of algorithmic design methods for graph clustering. Most of these methods, however, are based on complicated spectral techniques or convex optimisation, and cannot be directly applied for clustering many networks that occur in practice, whose information is often collected on different sites. Designing a simple and distributed clustering algorithm is of great interest, and has wide applications for processing big datasets. In this paper we present a simple and distributed algorithm for graph clustering: for a wide class of graphs that are characterised by a strong cluster-structure, our algorithm finishes in a poly-logarithmic number of rounds, and recovers a partition of the graph close to optimal. One of the main components behind our algorithm is a sampling scheme that, given a dense graph as input, produces a sparse subgraph that provably preserves the cluster-structure of the input. Compared with previous sparsification algorithms that require Laplacian solvers or involve combinatorial constructions, this component is easy to implement in a distributed way and runs fast in practice.
|
2004.00140
|
Ligong Han
|
Ligong Han, Robert F. Murphy, and Deva Ramanan
|
Learning Generative Models of Tissue Organization with Supervised GANs
|
Accepted at WACV-18
| null | null | null |
cs.CV
|
http://creativecommons.org/publicdomain/zero/1.0/
|
A key step in understanding the spatial organization of cells and tissues is
the ability to construct generative models that accurately reflect that
organization. In this paper, we focus on building generative models of electron
microscope (EM) images in which the positions of cell membranes and
mitochondria have been densely annotated, and propose a two-stage procedure
that produces realistic images using Generative Adversarial Networks (or GANs)
in a supervised way. In the first stage, we synthesize a label "image" given a
noise "image" as input, which then provides supervision for EM image synthesis
in the second stage. The full model naturally generates label-image pairs. We
show that accurate synthetic EM images are produced using assessment via (1)
shape features and global statistics, (2) segmentation accuracies, and (3) user
studies. We also demonstrate further improvements by enforcing a reconstruction
loss on intermediate synthetic labels and thus unifying the two stages into one
single end-to-end framework.
|
[
{
"created": "Tue, 31 Mar 2020 22:22:58 GMT",
"version": "v1"
}
] |
2020-04-02
|
[
[
"Han",
"Ligong",
""
],
[
"Murphy",
"Robert F.",
""
],
[
"Ramanan",
"Deva",
""
]
] |
A key step in understanding the spatial organization of cells and tissues is the ability to construct generative models that accurately reflect that organization. In this paper, we focus on building generative models of electron microscope (EM) images in which the positions of cell membranes and mitochondria have been densely annotated, and propose a two-stage procedure that produces realistic images using Generative Adversarial Networks (or GANs) in a supervised way. In the first stage, we synthesize a label "image" given a noise "image" as input, which then provides supervision for EM image synthesis in the second stage. The full model naturally generates label-image pairs. We show that accurate synthetic EM images are produced using assessment via (1) shape features and global statistics, (2) segmentation accuracies, and (3) user studies. We also demonstrate further improvements by enforcing a reconstruction loss on intermediate synthetic labels and thus unifying the two stages into one single end-to-end framework.
|
1510.03891
|
Juan-Pablo Ortega
|
Lyudmila Grigoryeva, Julie Henriques, Laurent Larger, and Juan-Pablo
Ortega
|
Nonlinear memory capacity of parallel time-delay reservoir computers in
the processing of multidimensional signals
|
24 pages, 6 figures
| null | null | null |
cs.NE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper addresses the reservoir design problem in the context of
delay-based reservoir computers for multidimensional input signals, parallel
architectures, and real-time multitasking. First, an approximating reservoir
model is presented in those frameworks that provides an explicit functional
link between the reservoir parameters and architecture and its performance in
the execution of a specific task. Second, the inference properties of the ridge
regression estimator in the multivariate context are used to assess the impact
of finite sample training on the decrease of the reservoir capacity. Finally,
an empirical study is conducted that shows the adequacy of the theoretical
results with the empirical performances exhibited by various reservoir
architectures in the execution of several nonlinear tasks with multidimensional
inputs. Our results confirm the robustness properties of the parallel reservoir
architecture with respect to task misspecification and parameter choice that
had already been documented in the literature.
|
[
{
"created": "Tue, 13 Oct 2015 20:56:49 GMT",
"version": "v1"
}
] |
2015-10-15
|
[
[
"Grigoryeva",
"Lyudmila",
""
],
[
"Henriques",
"Julie",
""
],
[
"Larger",
"Laurent",
""
],
[
"Ortega",
"Juan-Pablo",
""
]
] |
This paper addresses the reservoir design problem in the context of delay-based reservoir computers for multidimensional input signals, parallel architectures, and real-time multitasking. First, an approximating reservoir model is presented in those frameworks that provides an explicit functional link between the reservoir parameters and architecture and its performance in the execution of a specific task. Second, the inference properties of the ridge regression estimator in the multivariate context are used to assess the impact of finite sample training on the decrease of the reservoir capacity. Finally, an empirical study is conducted that shows the adequacy of the theoretical results with the empirical performances exhibited by various reservoir architectures in the execution of several nonlinear tasks with multidimensional inputs. Our results confirm the robustness properties of the parallel reservoir architecture with respect to task misspecification and parameter choice that had already been documented in the literature.
|
2205.13430
|
Ian Hunter
|
Ian Frederick Vigogne Goodbody Hunter
|
GNOLL: Efficient Software for Real-World Dice Notation and Extensions
|
11 pages, 12 figures, Under Review for JCDCG^3 '22
| null | null | null |
cs.PL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
GNOLL ("GNOLL's Not *OLL") is a software library for dice notation. Unlike
previous papers, GNOLL's dice notation syntax is focused on parsing a language
that tabletop role-players and board gamers are already used to for specifying
dice rolls in many popular software applications. Existing implementations of
such a syntax are either incomplete, fragile, or proprietary, meaning that
anyone hoping to use such syntax in their application likely needs to write
their own solution. GNOLL is an open-source project using the compilation tool
'YACC' and lexical tool 'LEX' which can be integrated into many applications
with relative ease. This paper explores GNOLL's extended dice notation syntax
and its competitive performance.
|
[
{
"created": "Thu, 26 May 2022 15:35:38 GMT",
"version": "v1"
},
{
"created": "Mon, 4 Jul 2022 13:55:55 GMT",
"version": "v2"
}
] |
2022-07-05
|
[
[
"Hunter",
"Ian Frederick Vigogne Goodbody",
""
]
] |
GNOLL ("GNOLL's Not *OLL") is a software library for dice notation. Unlike previous papers, GNOLL's dice notation syntax is focused on parsing a language that tabletop role-players and board gamers are already used to for specifying dice rolls in many popular software applications. Existing implementations of such a syntax are either incomplete, fragile, or proprietary, meaning that anyone hoping to use such syntax in their application likely needs to write their own solution. GNOLL is an open-source project using the compilation tool 'YACC' and lexical tool 'LEX' which can be integrated into many applications with relative ease. This paper explores GNOLL's extended dice notation syntax and its competitive performance.
|
1611.05660
|
Henrik Barthels
|
Henrik Barthels, Paolo Bientinesi
|
The Matrix Chain Algorithm to Compile Linear Algebra Expressions
|
DSLDI 2016
| null | null | null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The matrix chain problem consists in finding the parenthesization of a matrix
product $M := A_1 A_2 \cdots A_n$ that minimizes the number of scalar
operations. In practical applications, however, one frequently encounters more
complicated scenarios, where expressions involve transposition, inversion,
matrices with given properties, and sequences. The computation of such
expressions makes use of a set of computational kernels that offer
functionality well beyond the simple matrix product. The challenge then shifts
from finding an optimal parenthesization to finding an optimal mapping of the
input expression to the available kernels. Furthermore, it is often the case
that a solution based on the minimization of scalar operations does not result
in the optimal solution in terms of execution time, and/or might be numerically
unstable. In this paper, we introduce a number of generalizations of the matrix
chain problem--including kernels, properties, sequences, and cost
functions--and present corresponding algorithmic solutions.
The motivation for this work comes from the fact that--despite great advances
in the development of compilers--the task of mapping linear algebra problems to
optimized kernels is still to be done manually. In order to relieve the user
from this complex task, new techniques for the compilation of linear algebra
expressions have to be developed.
|
[
{
"created": "Thu, 17 Nov 2016 12:44:15 GMT",
"version": "v1"
}
] |
2016-11-18
|
[
[
"Barthels",
"Henrik",
""
],
[
"Bientinesi",
"Paolo",
""
]
] |
The matrix chain problem consists in finding the parenthesization of a matrix product $M := A_1 A_2 \cdots A_n$ that minimizes the number of scalar operations. In practical applications, however, one frequently encounters more complicated scenarios, where expressions involve transposition, inversion, matrices with given properties, and sequences. The computation of such expressions makes use of a set of computational kernels that offer functionality well beyond the simple matrix product. The challenge then shifts from finding an optimal parenthesization to finding an optimal mapping of the input expression to the available kernels. Furthermore, it is often the case that a solution based on the minimization of scalar operations does not result in the optimal solution in terms of execution time, and/or might be numerically unstable. In this paper, we introduce a number of generalizations of the matrix chain problem--including kernels, properties, sequences, and cost functions--and present corresponding algorithmic solutions. The motivation for this work comes from the fact that--despite great advances in the development of compilers--the task of mapping linear algebra problems to optimized kernels is still to be done manually. In order to relieve the user from this complex task, new techniques for the compilation of linear algebra expressions have to be developed.
|
2311.10793
|
Chuang Yang
|
Chuang Yang, Kai Zhuang, Mulin Chen, Haozhao Ma, Xu Han, Tao Han,
Changxing Guo, Han Han, Bingxuan Zhao, and Qi Wang
|
Traffic Sign Interpretation in Real Road Scene
| null | null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Most existing traffic sign-related works are dedicated to detecting and
recognizing part of traffic signs individually, which fails to analyze the
global semantic logic among signs and may convey inaccurate traffic
instruction. Following the above issues, we propose a traffic sign
interpretation (TSI) task, which aims to interpret global semantic interrelated
traffic signs (e.g., driving instruction-related texts, symbols, and guide
panels) into a natural language for providing accurate instruction support to
autonomous or assistant driving. Meanwhile, we design a multi-task learning
architecture for TSI, which is responsible for detecting and recognizing
various traffic signs and interpreting them into a natural language like a
human. Furthermore, the absence of a public TSI available dataset prompts us to
build a traffic sign interpretation dataset, namely TSI-CN. The dataset
consists of real road scene images, which are captured from the highway and the
urban way in China from a driver's perspective. It contains rich location
labels of texts, symbols, and guide panels, and the corresponding natural
language description labels. Experiments on TSI-CN demonstrate that the TSI
task is achievable and the TSI architecture can interpret traffic signs from
scenes successfully even if there is a complex semantic logic among signs. The
TSI-CN dataset and the source code of the TSI architecture will be publicly
available after the revision process.
|
[
{
"created": "Fri, 17 Nov 2023 02:30:36 GMT",
"version": "v1"
},
{
"created": "Tue, 28 Nov 2023 10:23:46 GMT",
"version": "v2"
}
] |
2023-11-30
|
[
[
"Yang",
"Chuang",
""
],
[
"Zhuang",
"Kai",
""
],
[
"Chen",
"Mulin",
""
],
[
"Ma",
"Haozhao",
""
],
[
"Han",
"Xu",
""
],
[
"Han",
"Tao",
""
],
[
"Guo",
"Changxing",
""
],
[
"Han",
"Han",
""
],
[
"Zhao",
"Bingxuan",
""
],
[
"Wang",
"Qi",
""
]
] |
Most existing traffic sign-related works are dedicated to detecting and recognizing part of traffic signs individually, which fails to analyze the global semantic logic among signs and may convey inaccurate traffic instruction. Following the above issues, we propose a traffic sign interpretation (TSI) task, which aims to interpret global semantic interrelated traffic signs (e.g., driving instruction-related texts, symbols, and guide panels) into a natural language for providing accurate instruction support to autonomous or assistant driving. Meanwhile, we design a multi-task learning architecture for TSI, which is responsible for detecting and recognizing various traffic signs and interpreting them into a natural language like a human. Furthermore, the absence of a public TSI available dataset prompts us to build a traffic sign interpretation dataset, namely TSI-CN. The dataset consists of real road scene images, which are captured from the highway and the urban way in China from a driver's perspective. It contains rich location labels of texts, symbols, and guide panels, and the corresponding natural language description labels. Experiments on TSI-CN demonstrate that the TSI task is achievable and the TSI architecture can interpret traffic signs from scenes successfully even if there is a complex semantic logic among signs. The TSI-CN dataset and the source code of the TSI architecture will be publicly available after the revision process.
|
2311.18046
|
Srravya Chandhiramowuli
|
Srravya Chandhiramowuli, Alex Taylor, Sara Heitlinger, Ding Wang
|
Making Data Work Count
|
Accepted for publication at CSCW 2024. Forthcoming in the Proceedings
of the ACM on Human-Computer Interaction
| null | null | null |
cs.HC cs.AI cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we examine the work of data annotation. Specifically, we focus
on the role of counting or quantification in organising annotation work. Based
on an ethnographic study of data annotation in two outsourcing centres in
India, we observe that counting practices and their associated logics are an
integral part of day-to-day annotation activities. In particular, we call
attention to the presumption of total countability observed in annotation - the
notion that everything, from tasks, datasets and deliverables, to workers, work
time, quality and performance, can be managed by applying the logics of
counting. To examine this, we draw on sociological and socio-technical
scholarship on quantification and develop the lens of a 'regime of counting'
that makes explicit the specific counts, practices, actors and structures that
underpin the pervasive counting in annotation. We find that within the AI
supply chain and data work, counting regimes aid the assertion of authority by
the AI clients (also called requesters) over annotation processes, constituting
them as reductive, standardised, and homogeneous. We illustrate how this has
implications for i) how annotation work and workers get valued, ii) the role
human discretion plays in annotation, and iii) broader efforts to introduce
accountable and more just practices in AI. Through these implications, we
illustrate the limits of operating within the logic of total countability.
Instead, we argue for a view of counting as partial - located in distinct
geographies, shaped by specific interests and accountable in only limited ways.
This, we propose, sets the stage for a fundamentally different orientation to
counting and what counts in data annotation.
|
[
{
"created": "Wed, 29 Nov 2023 19:45:14 GMT",
"version": "v1"
}
] |
2023-12-01
|
[
[
"Chandhiramowuli",
"Srravya",
""
],
[
"Taylor",
"Alex",
""
],
[
"Heitlinger",
"Sara",
""
],
[
"Wang",
"Ding",
""
]
] |
In this paper, we examine the work of data annotation. Specifically, we focus on the role of counting or quantification in organising annotation work. Based on an ethnographic study of data annotation in two outsourcing centres in India, we observe that counting practices and their associated logics are an integral part of day-to-day annotation activities. In particular, we call attention to the presumption of total countability observed in annotation - the notion that everything, from tasks, datasets and deliverables, to workers, work time, quality and performance, can be managed by applying the logics of counting. To examine this, we draw on sociological and socio-technical scholarship on quantification and develop the lens of a 'regime of counting' that makes explicit the specific counts, practices, actors and structures that underpin the pervasive counting in annotation. We find that within the AI supply chain and data work, counting regimes aid the assertion of authority by the AI clients (also called requesters) over annotation processes, constituting them as reductive, standardised, and homogeneous. We illustrate how this has implications for i) how annotation work and workers get valued, ii) the role human discretion plays in annotation, and iii) broader efforts to introduce accountable and more just practices in AI. Through these implications, we illustrate the limits of operating within the logic of total countability. Instead, we argue for a view of counting as partial - located in distinct geographies, shaped by specific interests and accountable in only limited ways. This, we propose, sets the stage for a fundamentally different orientation to counting and what counts in data annotation.
|
2208.13446
|
Sabine Cornelsen
|
Sabine Cornelsen and Gregor Diatzko
|
Planar Confluent Orthogonal Drawings of 4-Modal Digraphs
|
Appears in the Proceedings of the 30th International Symposium on
Graph Drawing and Network Visualization (GD 2022)
| null | null | null |
cs.CG cs.DS
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
In a planar confluent orthogonal drawing (PCOD) of a directed graph (digraph)
vertices are drawn as points in the plane and edges as orthogonal polylines
starting with a vertical segment and ending with a horizontal segment. Edges
may overlap in their first or last segment, but must not intersect otherwise.
PCODs can be seen as a directed variant of Kandinsky drawings or as planar
L-drawings of subdivisions of digraphs. The maximum number of subdivision
vertices in an edge is then the split complexity. A PCOD is upward if each edge
is drawn with monotonically increasing y-coordinates and quasi-upward if no
edge starts with decreasing y-coordinates. We study the split complexity of
PCODs and (quasi-)upward PCODs for various classes of graphs.
|
[
{
"created": "Mon, 29 Aug 2022 09:28:49 GMT",
"version": "v1"
},
{
"created": "Wed, 31 Aug 2022 09:26:55 GMT",
"version": "v2"
}
] |
2022-09-01
|
[
[
"Cornelsen",
"Sabine",
""
],
[
"Diatzko",
"Gregor",
""
]
] |
In a planar confluent orthogonal drawing (PCOD) of a directed graph (digraph) vertices are drawn as points in the plane and edges as orthogonal polylines starting with a vertical segment and ending with a horizontal segment. Edges may overlap in their first or last segment, but must not intersect otherwise. PCODs can be seen as a directed variant of Kandinsky drawings or as planar L-drawings of subdivisions of digraphs. The maximum number of subdivision vertices in an edge is then the split complexity. A PCOD is upward if each edge is drawn with monotonically increasing y-coordinates and quasi-upward if no edge starts with decreasing y-coordinates. We study the split complexity of PCODs and (quasi-)upward PCODs for various classes of graphs.
|
2302.13408
|
Lingjie Kong
|
Lingjie Kong, Pankaj Rajak, and Siamak Shakeri
|
Generative Models for 3D Point Clouds
| null | null | null | null |
cs.CV cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Point clouds are rich geometric data structures, where their three
dimensional structure offers an excellent domain for understanding the
representation learning and generative modeling in 3D space. In this work, we
aim to improve the performance of point cloud latent-space generative models by
experimenting with transformer encoders, latent-space flow models, and
autoregressive decoders. We analyze and compare both generation and
reconstruction performance of these models on various object types.
|
[
{
"created": "Sun, 26 Feb 2023 21:34:19 GMT",
"version": "v1"
}
] |
2023-02-28
|
[
[
"Kong",
"Lingjie",
""
],
[
"Rajak",
"Pankaj",
""
],
[
"Shakeri",
"Siamak",
""
]
] |
Point clouds are rich geometric data structures, where their three dimensional structure offers an excellent domain for understanding the representation learning and generative modeling in 3D space. In this work, we aim to improve the performance of point cloud latent-space generative models by experimenting with transformer encoders, latent-space flow models, and autoregressive decoders. We analyze and compare both generation and reconstruction performance of these models on various object types.
|
2203.12707
|
Yanwu Xu
|
Yanwu Xu, Shaoan Xie, Wenhao Wu, Kun Zhang, Mingming Gong and Kayhan
Batmanghelich
|
Maximum Spatial Perturbation Consistency for Unpaired Image-to-Image
Translation
|
CVPR 2022 accepted paper
| null | null | null |
cs.CV eess.IV
|
http://creativecommons.org/licenses/by/4.0/
|
Unpaired image-to-image translation (I2I) is an ill-posed problem, as an
infinite number of translation functions can map the source domain distribution
to the target distribution. Therefore, much effort has been put into designing
suitable constraints, e.g., cycle consistency (CycleGAN), geometry consistency
(GCGAN), and contrastive learning-based constraints (CUTGAN), that help better
pose the problem. However, these well-known constraints have limitations: (1)
they are either too restrictive or too weak for specific I2I tasks; (2) these
methods result in content distortion when there is a significant spatial
variation between the source and target domains. This paper proposes a
universal regularization technique called maximum spatial perturbation
consistency (MSPC), which enforces a spatial perturbation function (T) and the
translation operator (G) to be commutative (i.e., TG = GT). In addition, we
introduce two adversarial training components for learning the spatial
perturbation function. The first one lets T compete with G to achieve maximum
perturbation. The second one lets G and T compete with discriminators to align
the spatial variations caused by the change of object size, object distortion,
background interruptions, etc. Our method outperforms the state-of-the-art
methods on most I2I benchmarks. We also introduce a new benchmark, namely the
front face to profile face dataset, to emphasize the underlying challenges of
I2I for real-world applications. We finally perform ablation experiments to
study the sensitivity of our method to the severity of spatial perturbation and
its effectiveness for distribution alignment.
|
[
{
"created": "Wed, 23 Mar 2022 19:59:04 GMT",
"version": "v1"
},
{
"created": "Tue, 29 Mar 2022 15:16:29 GMT",
"version": "v2"
}
] |
2022-03-30
|
[
[
"Xu",
"Yanwu",
""
],
[
"Xie",
"Shaoan",
""
],
[
"Wu",
"Wenhao",
""
],
[
"Zhang",
"Kun",
""
],
[
"Gong",
"Mingming",
""
],
[
"Batmanghelich",
"Kayhan",
""
]
] |
Unpaired image-to-image translation (I2I) is an ill-posed problem, as an infinite number of translation functions can map the source domain distribution to the target distribution. Therefore, much effort has been put into designing suitable constraints, e.g., cycle consistency (CycleGAN), geometry consistency (GCGAN), and contrastive learning-based constraints (CUTGAN), that help better pose the problem. However, these well-known constraints have limitations: (1) they are either too restrictive or too weak for specific I2I tasks; (2) these methods result in content distortion when there is a significant spatial variation between the source and target domains. This paper proposes a universal regularization technique called maximum spatial perturbation consistency (MSPC), which enforces a spatial perturbation function (T) and the translation operator (G) to be commutative (i.e., TG = GT). In addition, we introduce two adversarial training components for learning the spatial perturbation function. The first one lets T compete with G to achieve maximum perturbation. The second one lets G and T compete with discriminators to align the spatial variations caused by the change of object size, object distortion, background interruptions, etc. Our method outperforms the state-of-the-art methods on most I2I benchmarks. We also introduce a new benchmark, namely the front face to profile face dataset, to emphasize the underlying challenges of I2I for real-world applications. We finally perform ablation experiments to study the sensitivity of our method to the severity of spatial perturbation and its effectiveness for distribution alignment.
|
2303.15429
|
Okko Makkonen
|
Okko Makkonen, Elif Sa\c{c}{\i}kara, Camilla Hollanti
|
Algebraic Geometry Codes for Secure Distributed Matrix Multiplication
|
16 pages, 1 figure
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we propose a novel construction for secure distributed matrix
multiplication (SDMM) based on algebraic geometry (AG) codes, which we call the
PoleGap SDMM scheme. The proposed construction is inspired by the GASP code,
where so-called gaps in a certain polynomial are utilized to achieve higher
communication rates. Our construction considers the gaps in a Weierstrass
semigroup of a rational place in an algebraic function field to achieve a
similar increase in the rate. This construction shows that there is potential
in utilizing AG codes and their subcodes in SDMM since we demonstrate a better
performance compared to state-of-the-art schemes in some parameter regimes.
|
[
{
"created": "Mon, 27 Mar 2023 17:53:25 GMT",
"version": "v1"
},
{
"created": "Fri, 9 Jun 2023 10:05:44 GMT",
"version": "v2"
}
] |
2023-06-12
|
[
[
"Makkonen",
"Okko",
""
],
[
"Saçıkara",
"Elif",
""
],
[
"Hollanti",
"Camilla",
""
]
] |
In this paper, we propose a novel construction for secure distributed matrix multiplication (SDMM) based on algebraic geometry (AG) codes, which we call the PoleGap SDMM scheme. The proposed construction is inspired by the GASP code, where so-called gaps in a certain polynomial are utilized to achieve higher communication rates. Our construction considers the gaps in a Weierstrass semigroup of a rational place in an algebraic function field to achieve a similar increase in the rate. This construction shows that there is potential in utilizing AG codes and their subcodes in SDMM since we demonstrate a better performance compared to state-of-the-art schemes in some parameter regimes.
|
1812.02969
|
Delcho Donev
|
Delcho Donev and Georg B\"ocherer
|
Polar-Coded Pulse Position Modulation for the Poisson Channel
|
7 pages, 9 figures
|
2018 9th Advanced Satellite Multimedia Systems Conference and the
15th Signal Processing for Space Communications Workshop (ASMS/SPSC)
|
10.1109/ASMS-SPSC.2018.8510721
| null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A polar-coded modulation scheme for deep-space optical communication is
proposed. The photon counting Poisson channel with pulse position modulation
(PPM) is considered. We use the fact that PPM is particularly well suited to be
used with multilevel codes to design a polar-coded modulation scheme for the
system in consideration. The construction of polar codes for the Poisson
channel based on Gaussian approximation is demonstrated to be accurate. The
proposed scheme uses a cyclic redundancy check outer code and a successive
cancellation decoder with list decoding and it is shown that it outperforms the
competing schemes.
|
[
{
"created": "Fri, 7 Dec 2018 10:34:56 GMT",
"version": "v1"
}
] |
2018-12-10
|
[
[
"Donev",
"Delcho",
""
],
[
"Böcherer",
"Georg",
""
]
] |
A polar-coded modulation scheme for deep-space optical communication is proposed. The photon counting Poisson channel with pulse position modulation (PPM) is considered. We use the fact that PPM is particularly well suited to be used with multilevel codes to design a polar-coded modulation scheme for the system in consideration. The construction of polar codes for the Poisson channel based on Gaussian approximation is demonstrated to be accurate. The proposed scheme uses a cyclic redundancy check outer code and a successive cancellation decoder with list decoding and it is shown that it outperforms the competing schemes.
|
1904.06962
|
Manohar Kuse
|
Manohar Kuse, Shaojie Shen
|
Learning Whole-Image Descriptors for Real-time Loop Detection and Kidnap
Recovery under Large Viewpoint Difference
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a real-time stereo visual-inertial SLAM system which is able to
recover from complicated kidnap scenarios and failures online in real time.
We propose to learn the whole-image descriptor in a weakly supervised manner
based on NetVLAD and decoupled convolutions. We analyse the training
difficulties in using standard loss formulations and propose an all-pair
loss and show its effect through extensive experiments. Compared to standard
NetVLAD, our network takes an order of magnitude fewer computations and
model parameters, and as a result runs about three times faster. We evaluate
the representation power of our descriptor on standard datasets with
precision-recall. Unlike previous loop detection methods which have been
evaluated only on fronto-parallel revisits, we evaluate the performance of
our method with competing methods on scenarios involving large viewpoint
difference. Finally, we present the fully functional system with relative
computation and handling of multiple world co-ordinate systems which is able
to reduce odometry drift, recover from complicated kidnap scenarios and
random odometry failures. We open source our fully functional system as an
add-on for the popular VINS-Fusion.
|
[
{
"created": "Mon, 15 Apr 2019 11:01:04 GMT",
"version": "v1"
}
] |
2019-04-16
|
[
[
"Kuse",
"Manohar",
""
],
[
"Shen",
"Shaojie",
""
]
] |
We present a real-time stereo visual-inertial SLAM system which is able to recover from complicated kidnap scenarios and failures online in real time. We propose to learn the whole-image descriptor in a weakly supervised manner based on NetVLAD and decoupled convolutions. We analyse the training difficulties in using standard loss formulations and propose an all-pair loss and show its effect through extensive experiments. Compared to standard NetVLAD, our network takes an order of magnitude fewer computations and model parameters, and as a result runs about three times faster. We evaluate the representation power of our descriptor on standard datasets with precision-recall. Unlike previous loop detection methods which have been evaluated only on fronto-parallel revisits, we evaluate the performance of our method with competing methods on scenarios involving large viewpoint difference. Finally, we present the fully functional system with relative computation and handling of multiple world co-ordinate systems which is able to reduce odometry drift, recover from complicated kidnap scenarios and random odometry failures. We open source our fully functional system as an add-on for the popular VINS-Fusion.
|
2101.00008
|
Farah Shamout
|
Munachiso Nwadike, Takumi Miyawaki, Esha Sarkar, Michail Maniatakos,
Farah Shamout
|
Explainability Matters: Backdoor Attacks on Medical Imaging
| null | null | null | null |
cs.CR cs.CV cs.LG eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Deep neural networks have been shown to be vulnerable to backdoor attacks,
which could be easily introduced to the training set prior to model training.
Recent work has focused on investigating backdoor attacks on natural images or
toy datasets. Consequently, the exact impact of backdoors is not yet fully
understood in complex real-world applications, such as in medical imaging where
misdiagnosis can be very costly. In this paper, we explore the impact of
backdoor attacks on a multi-label disease classification task using chest
radiography, with the assumption that the attacker can manipulate the training
dataset to execute the attack. Extensive evaluation of a state-of-the-art
architecture demonstrates that by introducing images with few-pixel
perturbations into the training set, an attacker can execute the backdoor
successfully without having to be involved with the training procedure. A
simple 3$\times$3 pixel trigger can achieve up to 1.00 Area Under the Receiver
Operating Characteristic (AUROC) curve on the set of infected images. In the
set of clean images, the backdoored neural network could still achieve up to
0.85 AUROC, highlighting the stealthiness of the attack. As the use of deep
learning based diagnostic systems proliferates in clinical practice, we also
show how explainability is indispensable in this context, as it can identify
spatially localized backdoors at inference time.
|
[
{
"created": "Wed, 30 Dec 2020 09:41:19 GMT",
"version": "v1"
}
] |
2021-01-05
|
[
[
"Nwadike",
"Munachiso",
""
],
[
"Miyawaki",
"Takumi",
""
],
[
"Sarkar",
"Esha",
""
],
[
"Maniatakos",
"Michail",
""
],
[
"Shamout",
"Farah",
""
]
] |
Deep neural networks have been shown to be vulnerable to backdoor attacks, which could be easily introduced to the training set prior to model training. Recent work has focused on investigating backdoor attacks on natural images or toy datasets. Consequently, the exact impact of backdoors is not yet fully understood in complex real-world applications, such as in medical imaging where misdiagnosis can be very costly. In this paper, we explore the impact of backdoor attacks on a multi-label disease classification task using chest radiography, with the assumption that the attacker can manipulate the training dataset to execute the attack. Extensive evaluation of a state-of-the-art architecture demonstrates that by introducing images with few-pixel perturbations into the training set, an attacker can execute the backdoor successfully without having to be involved with the training procedure. A simple 3$\times$3 pixel trigger can achieve up to 1.00 Area Under the Receiver Operating Characteristic (AUROC) curve on the set of infected images. In the set of clean images, the backdoored neural network could still achieve up to 0.85 AUROC, highlighting the stealthiness of the attack. As the use of deep learning based diagnostic systems proliferates in clinical practice, we also show how explainability is indispensable in this context, as it can identify spatially localized backdoors at inference time.
|
2308.12539
|
Vipul Gupta
|
Vipul Gupta, Pranav Narayanan Venkit, Hugo Lauren\c{c}on, Shomir
Wilson, Rebecca J. Passonneau
|
CALM : A Multi-task Benchmark for Comprehensive Assessment of Language
Model Bias
| null | null | null | null |
cs.CL cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
As language models (LMs) become increasingly powerful and widely used, it is
important to quantify them for sociodemographic bias with potential for harm.
Prior measures of bias are sensitive to perturbations in the templates designed
to compare performance across social groups, due to factors such as low
diversity or limited number of templates. Also, most previous work considers
only one NLP task. We introduce Comprehensive Assessment of Language Models
(CALM) for robust measurement of two types of universally relevant
sociodemographic bias, gender and race. CALM integrates sixteen datasets for
question-answering, sentiment analysis and natural language inference. Examples
from each dataset are filtered to produce 224 templates with high diversity
(e.g., length, vocabulary). We assemble 50 highly frequent person names for
each of seven distinct demographic groups to generate 78,400 prompts covering
the three NLP tasks. Our empirical evaluation shows that CALM bias scores are
more robust and far less sensitive than previous bias measurements to
perturbations in the templates, such as synonym substitution, or to random
subset selection of templates. We apply CALM to 20 large language models, and
find that for 2 language model series, larger parameter models tend to be more
biased than smaller ones. The T0 series is the least biased model family of
the 20 LLMs investigated here. The code is available at
https://github.com/vipulgupta1011/CALM.
|
[
{
"created": "Thu, 24 Aug 2023 03:53:55 GMT",
"version": "v1"
},
{
"created": "Wed, 24 Jan 2024 01:09:01 GMT",
"version": "v2"
},
{
"created": "Thu, 8 Aug 2024 03:20:17 GMT",
"version": "v3"
}
] |
2024-08-09
|
[
[
"Gupta",
"Vipul",
""
],
[
"Venkit",
"Pranav Narayanan",
""
],
[
"Laurençon",
"Hugo",
""
],
[
"Wilson",
"Shomir",
""
],
[
"Passonneau",
"Rebecca J.",
""
]
] |
As language models (LMs) become increasingly powerful and widely used, it is important to quantify them for sociodemographic bias with potential for harm. Prior measures of bias are sensitive to perturbations in the templates designed to compare performance across social groups, due to factors such as low diversity or limited number of templates. Also, most previous work considers only one NLP task. We introduce Comprehensive Assessment of Language Models (CALM) for robust measurement of two types of universally relevant sociodemographic bias, gender and race. CALM integrates sixteen datasets for question-answering, sentiment analysis and natural language inference. Examples from each dataset are filtered to produce 224 templates with high diversity (e.g., length, vocabulary). We assemble 50 highly frequent person names for each of seven distinct demographic groups to generate 78,400 prompts covering the three NLP tasks. Our empirical evaluation shows that CALM bias scores are more robust and far less sensitive than previous bias measurements to perturbations in the templates, such as synonym substitution, or to random subset selection of templates. We apply CALM to 20 large language models, and find that for 2 language model series, larger parameter models tend to be more biased than smaller ones. The T0 series is the least biased model family of the 20 LLMs investigated here. The code is available at https://github.com/vipulgupta1011/CALM.
|
2205.08441
|
Max Argus
|
Sergio Izquierdo, Max Argus, Thomas Brox
|
Conditional Visual Servoing for Multi-Step Tasks
| null | null | null | null |
cs.RO cs.CV cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Visual Servoing has been effectively used to move a robot into specific
target locations or to track a recorded demonstration. It does not require
manual programming, but it is typically limited to settings where one
demonstration maps to one environment state. We propose a modular approach to
extend visual servoing to scenarios with multiple demonstration sequences. We
call this conditional servoing, as we choose the next demonstration conditioned
on the observation of the robot. This method presents an appealing strategy to
tackle multi-step problems, as individual demonstrations can be combined
flexibly into a control policy. We propose different selection functions and
compare them on a shape-sorting task in simulation. With the reprojection error
yielding the best overall results, we implement this selection function on a
real robot and show the efficacy of the proposed conditional servoing. For
videos of our experiments, please check out our project page:
https://lmb.informatik.uni-freiburg.de/projects/conditional_servoing/
|
[
{
"created": "Tue, 17 May 2022 15:34:54 GMT",
"version": "v1"
}
] |
2022-05-18
|
[
[
"Izquierdo",
"Sergio",
""
],
[
"Argus",
"Max",
""
],
[
"Brox",
"Thomas",
""
]
] |
Visual Servoing has been effectively used to move a robot into specific target locations or to track a recorded demonstration. It does not require manual programming, but it is typically limited to settings where one demonstration maps to one environment state. We propose a modular approach to extend visual servoing to scenarios with multiple demonstration sequences. We call this conditional servoing, as we choose the next demonstration conditioned on the observation of the robot. This method presents an appealing strategy to tackle multi-step problems, as individual demonstrations can be combined flexibly into a control policy. We propose different selection functions and compare them on a shape-sorting task in simulation. With the reprojection error yielding the best overall results, we implement this selection function on a real robot and show the efficacy of the proposed conditional servoing. For videos of our experiments, please check out our project page: https://lmb.informatik.uni-freiburg.de/projects/conditional_servoing/
|
1602.07029
|
Siddharth Reddy
|
Siddharth Reddy, Igor Labutov, Thorsten Joachims
|
Latent Skill Embedding for Personalized Lesson Sequence Recommendation
|
Under review by the ACM SIGKDD Conference on Knowledge Discovery and
Data Mining
| null | null | null |
cs.LG cs.AI cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Students in online courses generate large amounts of data that can be used to
personalize the learning process and improve quality of education. In this
paper, we present the Latent Skill Embedding (LSE), a probabilistic model of
students and educational content that can be used to recommend personalized
sequences of lessons with the goal of helping students prepare for specific
assessments. Akin to collaborative filtering for recommender systems, the
algorithm does not require students or content to be described by features, but
it learns a representation using access traces. We formulate this problem as a
regularized maximum-likelihood embedding of students, lessons, and assessments
from historical student-content interactions. An empirical evaluation on
large-scale data from Knewton, an adaptive learning technology company, shows
that this approach predicts assessment results competitively with benchmark
models and is able to discriminate between lesson sequences that lead to
mastery and failure.
|
[
{
"created": "Tue, 23 Feb 2016 04:20:40 GMT",
"version": "v1"
}
] |
2016-02-24
|
[
[
"Reddy",
"Siddharth",
""
],
[
"Labutov",
"Igor",
""
],
[
"Joachims",
"Thorsten",
""
]
] |
Students in online courses generate large amounts of data that can be used to personalize the learning process and improve quality of education. In this paper, we present the Latent Skill Embedding (LSE), a probabilistic model of students and educational content that can be used to recommend personalized sequences of lessons with the goal of helping students prepare for specific assessments. Akin to collaborative filtering for recommender systems, the algorithm does not require students or content to be described by features, but it learns a representation using access traces. We formulate this problem as a regularized maximum-likelihood embedding of students, lessons, and assessments from historical student-content interactions. An empirical evaluation on large-scale data from Knewton, an adaptive learning technology company, shows that this approach predicts assessment results competitively with benchmark models and is able to discriminate between lesson sequences that lead to mastery and failure.
|
2402.17615
|
Artur Gaspar Da Silva
|
M\'ario S. Alvim, Artur Gaspar da Silva, Sophia Knight, and Frank
Valencia
|
A Multi-Agent Model for Opinion Evolution under Cognitive Biases
| null | null | null | null |
cs.MA cs.SI
|
http://creativecommons.org/licenses/by/4.0/
|
We generalize the DeGroot model for opinion dynamics to better capture
realistic social scenarios. We introduce a model where each agent has their own
individual cognitive biases. Society is represented as a directed graph whose
edges indicate how much agents influence one another. Biases are represented as
the functions in the square region $[-1,1]^2$ and categorized into four
sub-regions based on the potential reactions they may elicit in an agent during
instances of opinion disagreement. Under the assumption that each bias of every
agent is a continuous function within the region of receptive but resistant
reactions ($\mathbf{R}$), we show that the society converges to a consensus if
the graph is strongly connected. Under the same assumption, we also establish
that the entire society converges to a unanimous opinion if and only if the
source components of the graph-namely, strongly connected components with no
external influence-converge to that opinion. We illustrate that convergence is
not guaranteed for strongly connected graphs when biases are either
discontinuous functions in $\mathbf{R}$ or not included in $\mathbf{R}$. We
showcase our model through a series of examples and simulations, offering
insights into how opinions form in social networks under cognitive biases.
|
[
{
"created": "Tue, 27 Feb 2024 15:44:12 GMT",
"version": "v1"
}
] |
2024-02-28
|
[
[
"Alvim",
"Mário S.",
""
],
[
"da Silva",
"Artur Gaspar",
""
],
[
"Knight",
"Sophia",
""
],
[
"Valencia",
"Frank",
""
]
] |
We generalize the DeGroot model for opinion dynamics to better capture realistic social scenarios. We introduce a model where each agent has their own individual cognitive biases. Society is represented as a directed graph whose edges indicate how much agents influence one another. Biases are represented as the functions in the square region $[-1,1]^2$ and categorized into four sub-regions based on the potential reactions they may elicit in an agent during instances of opinion disagreement. Under the assumption that each bias of every agent is a continuous function within the region of receptive but resistant reactions ($\mathbf{R}$), we show that the society converges to a consensus if the graph is strongly connected. Under the same assumption, we also establish that the entire society converges to a unanimous opinion if and only if the source components of the graph-namely, strongly connected components with no external influence-converge to that opinion. We illustrate that convergence is not guaranteed for strongly connected graphs when biases are either discontinuous functions in $\mathbf{R}$ or not included in $\mathbf{R}$. We showcase our model through a series of examples and simulations, offering insights into how opinions form in social networks under cognitive biases.
|
2010.09254
|
Jingang Wang
|
Yang Yang, Junmei Hao, Canjia Li, Zili Wang, Jingang Wang, Fuzheng
Zhang, Rao Fu, Peixu Hou, Gong Zhang, Zhongyuan Wang
|
Query-aware Tip Generation for Vertical Search
|
Accepted By CIKM 2020 Applied Research Track
| null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
As a concise form of user reviews, tips have unique advantages to explain the
search results, assist users' decision making, and further improve user
experience in vertical search scenarios. Existing work on tip generation does
not take query into consideration, which limits the impact of tips in search
scenarios. To address this issue, this paper proposes a query-aware tip
generation framework, integrating query information into encoding and
subsequent decoding processes. Two specific adaptations of Transformer and
Recurrent Neural Network (RNN) are proposed. For Transformer, the query impact
is incorporated into the self-attention computation of both the encoder and the
decoder. As for RNN, the query-aware encoder adopts a selective network to
distill query-relevant information from the review, while the query-aware
decoder integrates the query information into the attention computation during
decoding. The framework consistently outperforms the competing methods on both
public and real-world industrial datasets. Last but not least, online
deployment experiments on Dianping demonstrate the advantage of the proposed
framework for tip generation as well as its online business values.
|
[
{
"created": "Mon, 19 Oct 2020 06:48:40 GMT",
"version": "v1"
}
] |
2020-10-20
|
[
[
"Yang",
"Yang",
""
],
[
"Hao",
"Junmei",
""
],
[
"Li",
"Canjia",
""
],
[
"Wang",
"Zili",
""
],
[
"Wang",
"Jingang",
""
],
[
"Zhang",
"Fuzheng",
""
],
[
"Fu",
"Rao",
""
],
[
"Hou",
"Peixu",
""
],
[
"Zhang",
"Gong",
""
],
[
"Wang",
"Zhongyuan",
""
]
] |
As a concise form of user reviews, tips have unique advantages to explain the search results, assist users' decision making, and further improve user experience in vertical search scenarios. Existing work on tip generation does not take query into consideration, which limits the impact of tips in search scenarios. To address this issue, this paper proposes a query-aware tip generation framework, integrating query information into encoding and subsequent decoding processes. Two specific adaptations of Transformer and Recurrent Neural Network (RNN) are proposed. For Transformer, the query impact is incorporated into the self-attention computation of both the encoder and the decoder. As for RNN, the query-aware encoder adopts a selective network to distill query-relevant information from the review, while the query-aware decoder integrates the query information into the attention computation during decoding. The framework consistently outperforms the competing methods on both public and real-world industrial datasets. Last but not least, online deployment experiments on Dianping demonstrate the advantage of the proposed framework for tip generation as well as its online business values.
|
0812.2990
|
Frederic Mazoit
|
Fr\'ed\'eric Mazoit (LaBRI)
|
Tree-width of hypergraphs and surface duality
| null | null | null | null |
cs.DM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In Graph Minor III, Robertson and Seymour conjecture that the tree-width of a
planar graph and that of its dual differ by at most one. We prove that given a
hypergraph H on a surface of Euler genus k, the tree-width of H^* is at most
the maximum of tw(H) + 1 + k and the maximum size of a hyperedge of H^*.
|
[
{
"created": "Tue, 16 Dec 2008 07:47:50 GMT",
"version": "v1"
}
] |
2008-12-17
|
[
[
"Mazoit",
"Frédéric",
"",
"LaBRI"
]
] |
In Graph Minor III, Robertson and Seymour conjecture that the tree-width of a planar graph and that of its dual differ by at most one. We prove that given a hypergraph H on a surface of Euler genus k, the tree-width of H^* is at most the maximum of tw(H) + 1 + k and the maximum size of a hyperedge of H^*.
|
2306.17000
|
Ce Zhang Dr.
|
Ce Zhang, Chengjie Zhang, Yiluan Guo, Lingji Chen, Michael Happold
|
MotionTrack: End-to-End Transformer-based Multi-Object Tracing with
LiDAR-Camera Fusion
|
This paper is accepted by CVPR WAD 2023
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Multiple Object Tracking (MOT) is crucial to autonomous vehicle perception.
End-to-end transformer-based algorithms, which detect and track objects
simultaneously, show great potential for the MOT task. However, most existing
methods focus on image-based tracking with a single object category. In this
paper, we propose an end-to-end transformer-based MOT algorithm (MotionTrack)
with multi-modality sensor inputs to track objects with multiple classes. Our
objective is to establish a transformer baseline for the MOT in an autonomous
driving environment. The proposed algorithm consists of a transformer-based
data association (DA) module and a transformer-based query enhancement module
to achieve MOT and Multiple Object Detection (MOD) simultaneously. The
MotionTrack and its variations achieve better results (AMOTA score at 0.55) on
the nuScenes dataset compared with other classical baseline models, such as the
AB3DMOT, the CenterTrack, and the probabilistic 3D Kalman filter. In addition,
we prove that a modified attention mechanism can be utilized for DA to
accomplish the MOT, and aggregate history features to enhance the MOD
performance.
|
[
{
"created": "Thu, 29 Jun 2023 15:00:12 GMT",
"version": "v1"
}
] |
2023-06-30
|
[
[
"Zhang",
"Ce",
""
],
[
"Zhang",
"Chengjie",
""
],
[
"Guo",
"Yiluan",
""
],
[
"Chen",
"Lingji",
""
],
[
"Happold",
"Michael",
""
]
] |
Multiple Object Tracking (MOT) is crucial to autonomous vehicle perception. End-to-end transformer-based algorithms, which detect and track objects simultaneously, show great potential for the MOT task. However, most existing methods focus on image-based tracking with a single object category. In this paper, we propose an end-to-end transformer-based MOT algorithm (MotionTrack) with multi-modality sensor inputs to track objects with multiple classes. Our objective is to establish a transformer baseline for the MOT in an autonomous driving environment. The proposed algorithm consists of a transformer-based data association (DA) module and a transformer-based query enhancement module to achieve MOT and Multiple Object Detection (MOD) simultaneously. The MotionTrack and its variations achieve better results (AMOTA score at 0.55) on the nuScenes dataset compared with other classical baseline models, such as the AB3DMOT, the CenterTrack, and the probabilistic 3D Kalman filter. In addition, we prove that a modified attention mechanism can be utilized for DA to accomplish the MOT, and aggregate history features to enhance the MOD performance.
|
2402.05657
|
Antoine Renard
|
Antoine Renard, Michel Rigo, Markus A. Whiteland
|
q-Parikh Matrices and q-deformed binomial coefficients of words
|
26 pages, submitted
| null | null | null |
cs.FL cs.DM math.CO
|
http://creativecommons.org/licenses/by/4.0/
|
We have introduced a q-deformation, i.e., a polynomial in q with natural
coefficients, of the binomial coefficient of two finite words u and v counting
the number of occurrences of v as a subword of u. In this paper, we examine the
q-deformation of Parikh matrices as introduced by E\u{g}ecio\u{g}lu in 2004.
Many classical results concerning Parikh matrices generalize to this new
framework: Our first important observation is that the elements of such a
matrix are in fact q-deformations of binomial coefficients of words. We also
study their inverses and as an application, we obtain new identities about
q-binomials.
For a finite word z and for the sequence $(p_n)_{n\ge 0}$ of prefixes of an
infinite word, we show that the polynomial sequence $\binom{p_n}{z}_q$
converges to a formal series. We present links with additive number theory and
k-regular sequences. In the case of a periodic word $u^\omega$, we generalize a
result of Salomaa: the sequence $\binom{u^n}{z}_q$ satisfies a linear
recurrence relation with polynomial coefficients. Related to the theory of
integer partition, we describe the growth and the zero set of the coefficients
of the series associated with $u^\omega$.
Finally, we show that the minors of a q-Parikh matrix are polynomials with
natural coefficients and consider a generalization of Cauchy's inequality. We
also compare q-Parikh matrices associated with an arbitrary word with those
associated with a canonical word $12\cdots k$ made of pairwise distinct
symbols.
|
[
{
"created": "Thu, 8 Feb 2024 13:21:26 GMT",
"version": "v1"
}
] |
2024-02-09
|
[
[
"Renard",
"Antoine",
""
],
[
"Rigo",
"Michel",
""
],
[
"Whiteland",
"Markus A.",
""
]
] |
We have introduced a q-deformation, i.e., a polynomial in q with natural coefficients, of the binomial coefficient of two finite words u and v counting the number of occurrences of v as a subword of u. In this paper, we examine the q-deformation of Parikh matrices as introduced by E\u{g}ecio\u{g}lu in 2004. Many classical results concerning Parikh matrices generalize to this new framework: Our first important observation is that the elements of such a matrix are in fact q-deformations of binomial coefficients of words. We also study their inverses and as an application, we obtain new identities about q-binomials. For a finite word z and for the sequence $(p_n)_{n\ge 0}$ of prefixes of an infinite word, we show that the polynomial sequence $\binom{p_n}{z}_q$ converges to a formal series. We present links with additive number theory and k-regular sequences. In the case of a periodic word $u^\omega$, we generalize a result of Salomaa: the sequence $\binom{u^n}{z}_q$ satisfies a linear recurrence relation with polynomial coefficients. Related to the theory of integer partition, we describe the growth and the zero set of the coefficients of the series associated with $u^\omega$. Finally, we show that the minors of a q-Parikh matrix are polynomials with natural coefficients and consider a generalization of Cauchy's inequality. We also compare q-Parikh matrices associated with an arbitrary word with those associated with a canonical word $12\cdots k$ made of pairwise distinct symbols.
|
2003.00888
|
Guus Engels
|
Guus Engels, Nerea Aranjuelo, Ignacio Arganda-Carreras, Marcos Nieto
and Oihana Otaegui
|
3D Object Detection From LiDAR Data Using Distance Dependent Feature
Extraction
|
10 pages, 8 figures, 6th International Conference on Vehicle
Technology and Intelligent Transport Systems (VEHITS 2020)
| null |
10.5220/0009330402890300
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents a new approach to 3D object detection that leverages the
properties of the data obtained by a LiDAR sensor. State-of-the-art detectors
use neural network architectures based on assumptions valid for camera images.
However, point clouds obtained from LiDAR are fundamentally different. Most
detectors use shared filter kernels to extract features which do not take into
account the range dependent nature of the point cloud features. To show this,
different detectors are trained on two splits of the KITTI dataset: close range
(objects up to 25 meters from LiDAR) and long-range. Top view images are
generated from point clouds as input for the networks. Combined results
outperform the baseline network trained on the full dataset with a single
backbone. Additional research compares the effect of using different input
features when converting the point cloud to image. The results indicate that
the network focuses on the shape and structure of the objects, rather than
exact values of the input. This work proposes an improvement for 3D object
detectors by taking into account the properties of LiDAR point clouds over
distance. Results show that training separate networks for close-range and
long-range objects boosts performance for all KITTI benchmark difficulties.
|
[
{
"created": "Mon, 2 Mar 2020 13:16:35 GMT",
"version": "v1"
},
{
"created": "Tue, 3 Mar 2020 07:47:20 GMT",
"version": "v2"
}
] |
2021-04-09
|
[
[
"Engels",
"Guus",
""
],
[
"Aranjuelo",
"Nerea",
""
],
[
"Arganda-Carreras",
"Ignacio",
""
],
[
"Nieto",
"Marcos",
""
],
[
"Otaegui",
"Oihana",
""
]
] |
This paper presents a new approach to 3D object detection that leverages the properties of the data obtained by a LiDAR sensor. State-of-the-art detectors use neural network architectures based on assumptions valid for camera images. However, point clouds obtained from LiDAR are fundamentally different. Most detectors use shared filter kernels to extract features which do not take into account the range dependent nature of the point cloud features. To show this, different detectors are trained on two splits of the KITTI dataset: close range (objects up to 25 meters from LiDAR) and long-range. Top view images are generated from point clouds as input for the networks. Combined results outperform the baseline network trained on the full dataset with a single backbone. Additional research compares the effect of using different input features when converting the point cloud to image. The results indicate that the network focuses on the shape and structure of the objects, rather than exact values of the input. This work proposes an improvement for 3D object detectors by taking into account the properties of LiDAR point clouds over distance. Results show that training separate networks for close-range and long-range objects boosts performance for all KITTI benchmark difficulties.
|
2407.19951
|
Muhammad Rashid
|
Muhammad Rashid, Elvio Amparore, Enrico Ferrari, Damiano Verda
|
Can I trust my anomaly detection system? A case study based on
explainable AI
|
World Conference on eXplainable Artificial Intelligence
| null |
10.1007/978-3-031-63803-9_13
| null |
cs.LG cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Generative models based on variational autoencoders are a popular technique
for detecting anomalies in images in a semi-supervised context. A common
approach employs the anomaly score to detect the presence of anomalies, and it
is known to reach a high level of accuracy on benchmark datasets. However, since
anomaly scores are computed from reconstruction disparities, they often obscure
the detection of various spurious features, raising concerns regarding their
actual efficacy. This case study explores the robustness of an anomaly
detection system based on variational autoencoder generative models through the
use of eXplainable AI methods. The goal is to get a different perspective on
the real performances of anomaly detectors that use reconstruction differences.
In our case study we discovered that, in many cases, samples are detected as
anomalous for the wrong or misleading factors.
|
[
{
"created": "Mon, 29 Jul 2024 12:39:07 GMT",
"version": "v1"
}
] |
2024-07-30
|
[
[
"Rashid",
"Muhammad",
""
],
[
"Amparore",
"Elvio",
""
],
[
"Ferrari",
"Enrico",
""
],
[
"Verda",
"Damiano",
""
]
] |
Generative models based on variational autoencoders are a popular technique for detecting anomalies in images in a semi-supervised context. A common approach employs the anomaly score to detect the presence of anomalies, and it is known to reach a high level of accuracy on benchmark datasets. However, since anomaly scores are computed from reconstruction disparities, they often obscure the detection of various spurious features, raising concerns regarding their actual efficacy. This case study explores the robustness of an anomaly detection system based on variational autoencoder generative models through the use of eXplainable AI methods. The goal is to get a different perspective on the real performances of anomaly detectors that use reconstruction differences. In our case study we discovered that, in many cases, samples are detected as anomalous for the wrong or misleading factors.
|
2202.09039
|
Manojkumar Parmar
|
Kanak Tekwani, Manojkumar Parmar
|
Critical Checkpoints for Evaluating Defence Models Against Adversarial
Attack and Robustness
|
16 pages, 8 figures
| null | null | null |
cs.CR cs.AI cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
For the past couple of years there has been a cycle of researchers proposing
a defence model for adversaries in machine learning that is arguably
defensible against most of the existing attacks under restricted conditions
(they evaluate on some bounded inputs or datasets). Shortly afterwards,
another set of researchers finds vulnerabilities in that defence model and
breaks it by proposing a stronger attack model. Some common flaws have been
noticed in past defence models that were broken in a very short time.
Defence models being broken so easily is a point of concern, as decisions in
many crucial activities are taken with the help of machine learning models.
So there is an utter need for defence checkpoints that any researcher should
keep in mind while evaluating the soundness of a technique and declaring it
to be a decent defence technique. In this paper, we suggest a few
checkpoints that should be taken into consideration while building and
evaluating the soundness of defence models. All these points are recommended
after observing why some past defence models failed and how some models
remained adamant and proved their soundness against some very strong attacks.
|
[
{
"created": "Fri, 18 Feb 2022 06:15:49 GMT",
"version": "v1"
}
] |
2022-02-21
|
[
[
"Tekwani",
"Kanak",
""
],
[
"Parmar",
"Manojkumar",
""
]
] |
For the past couple of years there has been a cycle of researchers proposing a defence model for adversaries in machine learning that is arguably defensible against most of the existing attacks under restricted conditions (they evaluate on some bounded inputs or datasets). Shortly afterwards, another set of researchers finds vulnerabilities in that defence model and breaks it by proposing a stronger attack model. Some common flaws have been noticed in past defence models that were broken in a very short time. Defence models being broken so easily is a point of concern, as decisions in many crucial activities are taken with the help of machine learning models. So there is an utter need for defence checkpoints that any researcher should keep in mind while evaluating the soundness of a technique and declaring it to be a decent defence technique. In this paper, we suggest a few checkpoints that should be taken into consideration while building and evaluating the soundness of defence models. All these points are recommended after observing why some past defence models failed and how some models remained adamant and proved their soundness against some very strong attacks.
|
1802.09119
|
Ruggiero Lovreglio
|
Ruggiero Lovreglio, Vicente Gonzalez, Zhenan Feng, Robert Amor,
Michael Spearpoint, Jared Thomas, Margaret Trotter, Rafael Sacks
|
Prototyping Virtual Reality Serious Games for Building Earthquake
Preparedness: The Auckland City Hospital Case Study
| null | null | null | null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Enhancing evacuee safety is a key factor in reducing the number of injuries
and deaths that result from earthquakes. One way this can be achieved is by
training occupants. Virtual Reality (VR) and Serious Games (SGs) represent
novel techniques that may overcome the limitations of traditional training
approaches. VR and SGs have been examined in the fire emergency context,
however, their application to earthquake preparedness has not yet been
extensively examined. We provide a theoretical discussion of the advantages and
limitations of using VR SGs to investigate how building occupants behave during
earthquake evacuations and to train building occupants to cope with such
emergencies. We explore key design components for developing a VR SG framework:
(a) what features constitute an earthquake event, (b) which building types can
be selected and represented within the VR environment, (c) how damage to the
building can be determined and represented, (d) how non-player characters (NPC)
can be designed, and (e) what level of interaction there can be between NPC and
the human participants. We illustrate the above by presenting the Auckland City
Hospital, New Zealand as a case study, and propose a possible VR SG training
tool to enhance earthquake preparedness in public buildings.
|
[
{
"created": "Mon, 26 Feb 2018 01:08:51 GMT",
"version": "v1"
}
] |
2018-02-27
|
[
[
"Lovreglio",
"Ruggiero",
""
],
[
"Gonzalez",
"Vicente",
""
],
[
"Feng",
"Zhenan",
""
],
[
"Amor",
"Robert",
""
],
[
"Spearpoint",
"Michael",
""
],
[
"Thomas",
"Jared",
""
],
[
"Trotter",
"Margaret",
""
],
[
"Sacks",
"Rafael",
""
]
] |
Enhancing evacuee safety is a key factor in reducing the number of injuries and deaths that result from earthquakes. One way this can be achieved is by training occupants. Virtual Reality (VR) and Serious Games (SGs) represent novel techniques that may overcome the limitations of traditional training approaches. VR and SGs have been examined in the fire emergency context; however, their application to earthquake preparedness has not yet been extensively examined. We provide a theoretical discussion of the advantages and limitations of using VR SGs to investigate how building occupants behave during earthquake evacuations and to train building occupants to cope with such emergencies. We explore key design components for developing a VR SG framework: (a) what features constitute an earthquake event, (b) which building types can be selected and represented within the VR environment, (c) how damage to the building can be determined and represented, (d) how non-player characters (NPC) can be designed, and (e) what level of interaction there can be between NPC and the human participants. We illustrate the above by presenting the Auckland City Hospital, New Zealand as a case study, and propose a possible VR SG training tool to enhance earthquake preparedness in public buildings.
|
1911.07988
|
Ashwin Kallingal Joshy
|
Ashwin Kallingal Joshy, Wei Le
|
Invariant Diffs
| null | null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Software development is inherently incremental. Nowadays, many software
companies adopt an agile process and a shorter release cycle, where software
needs to be delivered faster with quality assurances. On the other hand, the
majority of existing program analysis tools still target single versions of
programs and are slow and inflexible to handle changes. In the popular version
control systems such as git, the program changes are still presented using
source code diffs. It is hard to understand what program conditions are changed
and which source code lines cause them. In this paper, we propose to compute
"invariant diffs" to specify changes. Similar to source diffs that report
common code and code churns, we define version invariants to represent program
conditions that are common across versions, and invariant churns to show the
changes of program conditions between versions. We designed a static
demand-driven, path-sensitive analysis to compute and compare invariants for
multiple versions of programs using multiversion control flow graphs. We report
invariant diffs at the matched program points where comparing invariants is
meaningful. Importantly, our analysis correlates source diffs with invariant
diffs to explain what source code changes lead to the property changes. We
implemented our algorithms in a tool called $H_2$ and performed experiments on
104 versions of programs. Our results show that we are able to compute
invariant diffs correctly within a reasonable amount of time. The version
invariants can capture the common properties of program versions even
constructed by different persons, and the invariant churns can specify the
semantics of changes such as how a patch changed a buggy condition to a correct
condition.
|
[
{
"created": "Mon, 18 Nov 2019 22:39:38 GMT",
"version": "v1"
},
{
"created": "Tue, 30 Jun 2020 02:16:21 GMT",
"version": "v2"
}
] |
2020-07-01
|
[
[
"Joshy",
"Ashwin Kallingal",
""
],
[
"Le",
"Wei",
""
]
] |
Software development is inherently incremental. Nowadays, many software companies adopt an agile process and a shorter release cycle, where software needs to be delivered faster with quality assurances. On the other hand, the majority of existing program analysis tools still target single versions of programs and are slow and inflexible to handle changes. In the popular version control systems such as git, the program changes are still presented using source code diffs. It is hard to understand what program conditions are changed and which source code lines cause them. In this paper, we propose to compute "invariant diffs" to specify changes. Similar to source diffs that report common code and code churns, we define version invariants to represent program conditions that are common across versions, and invariant churns to show the changes of program conditions between versions. We designed a static demand-driven, path-sensitive analysis to compute and compare invariants for multiple versions of programs using multiversion control flow graphs. We report invariant diffs at the matched program points where comparing invariants is meaningful. Importantly, our analysis correlates source diffs with invariant diffs to explain what source code changes lead to the property changes. We implemented our algorithms in a tool called $H_2$ and performed experiments on 104 versions of programs. Our results show that we are able to compute invariant diffs correctly within a reasonable amount of time. The version invariants can capture the common properties of program versions even constructed by different persons, and the invariant churns can specify the semantics of changes such as how a patch changed a buggy condition to a correct condition.
|
1202.2820
|
Christina Boucher
|
Christina Boucher, Gad M. Landau, Avivit Levy, David Pritchard and
Oren Weimann
|
On Approximating String Selection Problems with Outliers
| null | null | null | null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Many problems in bioinformatics are about finding strings that approximately
represent a collection of given strings. We look at more general problems where
some input strings can be classified as outliers. The Close to Most Strings
problem is, given a set S of same-length strings, and a parameter d, find a
string x that maximizes the number of "non-outliers" within Hamming distance d
of x. We prove this problem has no PTAS unless ZPP=NP, correcting a decade-old
mistake. The Most Strings with Few Bad Columns problem is to find a
maximum-size subset of input strings so that the number of non-identical
positions is at most k; we show it has no PTAS unless P=NP. We also observe
Closest to k Strings has no EPTAS unless W[1]=FPT. In sum, outliers help model
problems associated with using biological data, but we show the problem of
finding an approximate solution is computationally difficult.
|
[
{
"created": "Mon, 13 Feb 2012 19:09:26 GMT",
"version": "v1"
}
] |
2012-02-14
|
[
[
"Boucher",
"Christina",
""
],
[
"Landau",
"Gad M.",
""
],
[
"Levy",
"Avivit",
""
],
[
"Pritchard",
"David",
""
],
[
"Weimann",
"Oren",
""
]
] |
Many problems in bioinformatics are about finding strings that approximately represent a collection of given strings. We look at more general problems where some input strings can be classified as outliers. The Close to Most Strings problem is, given a set S of same-length strings, and a parameter d, find a string x that maximizes the number of "non-outliers" within Hamming distance d of x. We prove this problem has no PTAS unless ZPP=NP, correcting a decade-old mistake. The Most Strings with Few Bad Columns problem is to find a maximum-size subset of input strings so that the number of non-identical positions is at most k; we show it has no PTAS unless P=NP. We also observe Closest to k Strings has no EPTAS unless W[1]=FPT. In sum, outliers help model problems associated with using biological data, but we show the problem of finding an approximate solution is computationally difficult.
|
2206.08800
|
Rasmus Haugaard
|
Rasmus Laurvig Haugaard, Anders Glent Buch, Thorbj{\o}rn Mosekj{\ae}r
Iversen
|
Self-supervised deep visual servoing for high precision peg-in-hole
insertion
|
Accepted at IEEE CASE 2022
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Many industrial assembly tasks involve peg-in-hole like insertions with
sub-millimeter tolerances which are challenging, even in highly calibrated
robot cells. Visual servoing can be employed to increase the robustness towards
uncertainties in the system; however, state-of-the-art methods either rely on
accurate 3D models for synthetic renderings or manual involvement in
acquisition of training data. We present a novel self-supervised visual
servoing method for high precision peg-in-hole insertion, which is fully
automated and does not rely on synthetic data. We demonstrate its applicability
for insertion of electronic components into a printed circuit board with tight
tolerances. We show that peg-in-hole insertion can be drastically sped up by
preceding a robust but slow force-based insertion strategy with our proposed
visual servoing method, the configuration of which is fully autonomous.
|
[
{
"created": "Fri, 17 Jun 2022 14:29:21 GMT",
"version": "v1"
}
] |
2022-06-20
|
[
[
"Haugaard",
"Rasmus Laurvig",
""
],
[
"Buch",
"Anders Glent",
""
],
[
"Iversen",
"Thorbjørn Mosekjær",
""
]
] |
Many industrial assembly tasks involve peg-in-hole like insertions with sub-millimeter tolerances which are challenging, even in highly calibrated robot cells. Visual servoing can be employed to increase the robustness towards uncertainties in the system; however, state-of-the-art methods either rely on accurate 3D models for synthetic renderings or manual involvement in acquisition of training data. We present a novel self-supervised visual servoing method for high precision peg-in-hole insertion, which is fully automated and does not rely on synthetic data. We demonstrate its applicability for insertion of electronic components into a printed circuit board with tight tolerances. We show that peg-in-hole insertion can be drastically sped up by preceding a robust but slow force-based insertion strategy with our proposed visual servoing method, the configuration of which is fully autonomous.
|
2305.06898
|
Yujie Zeng
|
Yujie Zeng, Yiming Huang, Xiao-Long Ren, Linyuan L\"u
|
Identifying vital nodes through augmented random walks on higher-order
networks
| null | null | null | null |
cs.SI physics.soc-ph stat.CO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Empirical networks possess considerable heterogeneity of node connections,
resulting in a small portion of nodes playing crucial roles in network
structure and function. Yet, how to characterize nodes' influence and identify
vital nodes remains unclear in the study of networks with higher-order
interactions. In this paper, we introduce a multi-order graph obtained by
incorporating the higher-order bipartite graph and the classical pairwise
graph, and propose a Higher-order Augmented Random Walk (HoRW) model through
random walking on it. This representation preserves as much information about
the higher-order network as possible. The results indicate that the
proposed method effectively addresses the localization problem of certain
classical centralities. In contrast to random walks along pairwise interactions
only, performing more walks along higher-order interactions assists in not only
identifying the most important nodes but also distinguishing nodes that ranked
in the middle and bottom. Our method outperforms classical centralities in
identifying vital nodes and can scale to various tasks in networks, including
information spread maximization and network dismantling problems. The proposed
higher-order representation and the random walk model provide novel insights
and potent tools for studying higher-order mechanisms and functionality.
|
[
{
"created": "Thu, 11 May 2023 15:39:47 GMT",
"version": "v1"
},
{
"created": "Sat, 25 Nov 2023 08:40:04 GMT",
"version": "v2"
},
{
"created": "Sun, 3 Dec 2023 13:49:00 GMT",
"version": "v3"
}
] |
2023-12-05
|
[
[
"Zeng",
"Yujie",
""
],
[
"Huang",
"Yiming",
""
],
[
"Ren",
"Xiao-Long",
""
],
[
"Lü",
"Linyuan",
""
]
] |
Empirical networks possess considerable heterogeneity of node connections, resulting in a small portion of nodes playing crucial roles in network structure and function. Yet, how to characterize nodes' influence and identify vital nodes remains unclear in the study of networks with higher-order interactions. In this paper, we introduce a multi-order graph obtained by incorporating the higher-order bipartite graph and the classical pairwise graph, and propose a Higher-order Augmented Random Walk (HoRW) model through random walking on it. This representation preserves as much information about the higher-order network as possible. The results indicate that the proposed method effectively addresses the localization problem of certain classical centralities. In contrast to random walks along pairwise interactions only, performing more walks along higher-order interactions assists in not only identifying the most important nodes but also distinguishing nodes that ranked in the middle and bottom. Our method outperforms classical centralities in identifying vital nodes and can scale to various tasks in networks, including information spread maximization and network dismantling problems. The proposed higher-order representation and the random walk model provide novel insights and potent tools for studying higher-order mechanisms and functionality.
|
2106.09242
|
Song Wang
|
Moshi Wei, Yuchao Huang, Jinqiu Yang, Junjie Wang, Song Wang
|
CoCoFuzzing: Testing Neural Code Models with Coverage-Guided Fuzzing
| null | null | null | null |
cs.SE
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Deep learning-based code processing models have shown good performance for
tasks such as predicting method names, summarizing programs, and comment
generation. However, despite the tremendous progress, deep learning models are
often prone to adversarial attacks, which can significantly threaten the
robustness and generalizability of these models by leading them to
misclassification with unexpected inputs. To address the above issue, many deep
learning testing approaches have been proposed; however, these approaches
mainly focus on testing deep learning applications in the domains of image,
audio, and text analysis, etc., which cannot be directly applied to neural
models for code due to the unique properties of programs. In this paper, we
propose a coverage-based fuzzing framework, CoCoFuzzing, for testing deep
learning-based code processing models. In particular, we first propose ten
mutation operators to automatically generate valid and semantically preserving
source code examples as tests; then we propose a neuron coverage-based approach
to guide the generation of tests. We investigate the performance of CoCoFuzzing
on three state-of-the-art neural code models, i.e., NeuralCodeSum, CODE2SEQ,
and CODE2VEC. Our experiment results demonstrate that CoCoFuzzing can generate
valid and semantically preserving source code examples for testing the
robustness and generalizability of these models and improve the neuron
coverage. Moreover, these tests can be used to improve the performance of the
target neural code models through adversarial retraining.
|
[
{
"created": "Thu, 17 Jun 2021 04:33:37 GMT",
"version": "v1"
}
] |
2021-06-18
|
[
[
"Wei",
"Moshi",
""
],
[
"Huang",
"Yuchao",
""
],
[
"Yang",
"Jinqiu",
""
],
[
"Wang",
"Junjie",
""
],
[
"Wang",
"Song",
""
]
] |
Deep learning-based code processing models have shown good performance for tasks such as predicting method names, summarizing programs, and comment generation. However, despite the tremendous progress, deep learning models are often prone to adversarial attacks, which can significantly threaten the robustness and generalizability of these models by leading them to misclassification with unexpected inputs. To address the above issue, many deep learning testing approaches have been proposed; however, these approaches mainly focus on testing deep learning applications in the domains of image, audio, and text analysis, etc., which cannot be directly applied to neural models for code due to the unique properties of programs. In this paper, we propose a coverage-based fuzzing framework, CoCoFuzzing, for testing deep learning-based code processing models. In particular, we first propose ten mutation operators to automatically generate valid and semantically preserving source code examples as tests; then we propose a neuron coverage-based approach to guide the generation of tests. We investigate the performance of CoCoFuzzing on three state-of-the-art neural code models, i.e., NeuralCodeSum, CODE2SEQ, and CODE2VEC. Our experiment results demonstrate that CoCoFuzzing can generate valid and semantically preserving source code examples for testing the robustness and generalizability of these models and improve the neuron coverage. Moreover, these tests can be used to improve the performance of the target neural code models through adversarial retraining.
|
2406.14015
|
Qingpeng Cai
|
Qingpeng Cai, Kaiping Zheng, H.V. Jagadish, Beng Chin Ooi and James
Yip
|
CohortNet: Empowering Cohort Discovery for Interpretable Healthcare
Analytics
|
10 pages, 12 figures
| null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Cohort studies are of significant importance in the field of healthcare
analysis. However, existing methods typically involve manual, labor-intensive,
and expert-driven pattern definitions or rely on simplistic clustering
techniques that lack medical relevance. Automating cohort studies with
interpretable patterns has great potential to facilitate healthcare analysis
but remains an unmet need in prior research efforts. In this paper, we propose
a cohort auto-discovery model, CohortNet, for interpretable healthcare
analysis, focusing on the effective identification, representation, and
exploitation of cohorts characterized by medically meaningful patterns.
CohortNet initially learns fine-grained patient representations by separately
processing each feature, considering both individual feature trends and feature
interactions at each time step. Subsequently, it classifies each feature into
distinct states and employs a heuristic cohort exploration strategy to
effectively discover substantial cohorts with concrete patterns. For each
identified cohort, it learns comprehensive cohort representations with credible
evidence through associated patient retrieval. Ultimately, given a new patient,
CohortNet can leverage relevant cohorts with distinguished importance, which
can provide a more holistic understanding of the patient's conditions.
Extensive experiments on three real-world datasets demonstrate that it
consistently outperforms state-of-the-art approaches and offers interpretable
insights from diverse perspectives in a top-down fashion.
|
[
{
"created": "Thu, 20 Jun 2024 06:12:23 GMT",
"version": "v1"
}
] |
2024-06-21
|
[
[
"Cai",
"Qingpeng",
""
],
[
"Zheng",
"Kaiping",
""
],
[
"Jagadish",
"H. V.",
""
],
[
"Ooi",
"Beng Chin",
""
],
[
"Yip",
"James",
""
]
] |
Cohort studies are of significant importance in the field of healthcare analysis. However, existing methods typically involve manual, labor-intensive, and expert-driven pattern definitions or rely on simplistic clustering techniques that lack medical relevance. Automating cohort studies with interpretable patterns has great potential to facilitate healthcare analysis but remains an unmet need in prior research efforts. In this paper, we propose a cohort auto-discovery model, CohortNet, for interpretable healthcare analysis, focusing on the effective identification, representation, and exploitation of cohorts characterized by medically meaningful patterns. CohortNet initially learns fine-grained patient representations by separately processing each feature, considering both individual feature trends and feature interactions at each time step. Subsequently, it classifies each feature into distinct states and employs a heuristic cohort exploration strategy to effectively discover substantial cohorts with concrete patterns. For each identified cohort, it learns comprehensive cohort representations with credible evidence through associated patient retrieval. Ultimately, given a new patient, CohortNet can leverage relevant cohorts with distinguished importance, which can provide a more holistic understanding of the patient's conditions. Extensive experiments on three real-world datasets demonstrate that it consistently outperforms state-of-the-art approaches and offers interpretable insights from diverse perspectives in a top-down fashion.
|
2406.02481
|
Jakub Ho\'sci{\l}owicz
|
Jakub Hoscilowicz, Pawel Popiolek, Jan Rudkowski, Jedrzej Bieniasz,
Artur Janicki
|
Large Language Models as Carriers of Hidden Messages
|
Work in progress. Code is available at
https://github.com/j-hoscilowic/zurek-stegano
| null | null | null |
cs.CL cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
With the help of simple fine-tuning, one can artificially embed hidden text
into large language models (LLMs). This text is revealed only when triggered by
a specific query to the LLM. Two primary applications are LLM fingerprinting
and steganography. In the context of LLM fingerprinting, a unique text
identifier (fingerprint) is embedded within the model to verify licensing
compliance. In the context of steganography, the LLM serves as a carrier for
hidden messages that can be disclosed through a chosen trigger question.
Our work demonstrates that embedding hidden text in the LLM via fine-tuning,
though seemingly secure due to the vast number of potential triggers (any
sequence of characters or tokens could serve as a trigger), is susceptible to
extraction through analysis of the LLM's output decoding process. We propose an
extraction attack called Unconditional Token Forcing (UTF). It is premised on
the hypothesis that iteratively feeding each token from the LLM's vocabulary
into the model should reveal output sequences with abnormally high token
probabilities, indicating potential hidden text candidates. We also present a
defense method to hide text in such a way that it is resistant to both UTF and
attacks based on sampling decoding methods, which we named Unconditional Token
Forcing Confusion (UTFC). To the best of our knowledge, there is no attack
method that can extract text hidden with UTFC. UTFC has both benign
applications (improving LLM fingerprinting) and malign applications (using LLMs
to create covert communication channels). Code is available at
github.com/j-hoscilowic/zurek-stegano
|
[
{
"created": "Tue, 4 Jun 2024 16:49:06 GMT",
"version": "v1"
},
{
"created": "Mon, 29 Jul 2024 16:30:17 GMT",
"version": "v2"
}
] |
2024-07-30
|
[
[
"Hoscilowicz",
"Jakub",
""
],
[
"Popiolek",
"Pawel",
""
],
[
"Rudkowski",
"Jan",
""
],
[
"Bieniasz",
"Jedrzej",
""
],
[
"Janicki",
"Artur",
""
]
] |
With the help of simple fine-tuning, one can artificially embed hidden text into large language models (LLMs). This text is revealed only when triggered by a specific query to the LLM. Two primary applications are LLM fingerprinting and steganography. In the context of LLM fingerprinting, a unique text identifier (fingerprint) is embedded within the model to verify licensing compliance. In the context of steganography, the LLM serves as a carrier for hidden messages that can be disclosed through a chosen trigger question. Our work demonstrates that embedding hidden text in the LLM via fine-tuning, though seemingly secure due to the vast number of potential triggers (any sequence of characters or tokens could serve as a trigger), is susceptible to extraction through analysis of the LLM's output decoding process. We propose an extraction attack called Unconditional Token Forcing (UTF). It is premised on the hypothesis that iteratively feeding each token from the LLM's vocabulary into the model should reveal output sequences with abnormally high token probabilities, indicating potential hidden text candidates. We also present a defense method to hide text in such a way that it is resistant to both UTF and attacks based on sampling decoding methods, which we named Unconditional Token Forcing Confusion (UTFC). To the best of our knowledge, there is no attack method that can extract text hidden with UTFC. UTFC has both benign applications (improving LLM fingerprinting) and malign applications (using LLMs to create covert communication channels). Code is available at github.com/j-hoscilowic/zurek-stegano
|
2402.17767
|
Arjun Gupta
|
Arjun Gupta, Michelle Zhang, Rishik Sathua, Saurabh Gupta
|
Opening Cabinets and Drawers in the Real World using a Commodity Mobile
Manipulator
|
Project webpage:
https://arjung128.github.io/opening-cabinets-and-drawers
| null | null | null |
cs.RO cs.AI cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Pulling open cabinets and drawers presents many difficult technical
challenges in perception (inferring articulation parameters for objects from
onboard sensors), planning (producing motion plans that conform to tight task
constraints), and control (making and maintaining contact while applying forces
on the environment). In this work, we build an end-to-end system that enables a
commodity mobile manipulator (Stretch RE2) to pull open cabinets and drawers in
diverse previously unseen real world environments. We conduct 4 days of real
world testing of this system spanning 31 different objects from across 13
different real world environments. Our system achieves a success rate of 61% on
opening novel cabinets and drawers in unseen environments zero-shot. An
analysis of the failure modes suggests that errors in perception are the most
significant challenge for our system. We will open source code and models for
others to replicate and build upon our system.
|
[
{
"created": "Tue, 27 Feb 2024 18:58:54 GMT",
"version": "v1"
}
] |
2024-02-28
|
[
[
"Gupta",
"Arjun",
""
],
[
"Zhang",
"Michelle",
""
],
[
"Sathua",
"Rishik",
""
],
[
"Gupta",
"Saurabh",
""
]
] |
Pulling open cabinets and drawers presents many difficult technical challenges in perception (inferring articulation parameters for objects from onboard sensors), planning (producing motion plans that conform to tight task constraints), and control (making and maintaining contact while applying forces on the environment). In this work, we build an end-to-end system that enables a commodity mobile manipulator (Stretch RE2) to pull open cabinets and drawers in diverse previously unseen real world environments. We conduct 4 days of real world testing of this system spanning 31 different objects from across 13 different real world environments. Our system achieves a success rate of 61% on opening novel cabinets and drawers in unseen environments zero-shot. An analysis of the failure modes suggests that errors in perception are the most significant challenge for our system. We will open source code and models for others to replicate and build upon our system.
|
1707.06841
|
Youmna Farag
|
Youmna Farag, Marek Rei, Ted Briscoe
|
An Error-Oriented Approach to Word Embedding Pre-Training
|
10 pages, 2 figures, 4 tables, BEA 2017
|
The 12th Workshop on Innovative Use of NLP for Building
Educational Applications (BEA 2017)
| null | null |
cs.CL cs.LG cs.NE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a novel word embedding pre-training approach that exploits writing
errors in learners' scripts. We compare our method to previous models that tune
the embeddings based on script scores and the discrimination between correct
and corrupt word contexts in addition to the generic commonly-used embeddings
pre-trained on large corpora. The comparison is achieved by using the
aforementioned models to bootstrap a neural network that learns to predict a
holistic score for scripts. Furthermore, we investigate augmenting our model
with error corrections and monitor the impact on performance. Our results show
that our error-oriented approach outperforms other comparable ones which is
further demonstrated when training on more data. Additionally, extending the
model with corrections provides further performance gains when data sparsity is
an issue.
|
[
{
"created": "Fri, 21 Jul 2017 11:06:12 GMT",
"version": "v1"
}
] |
2019-07-05
|
[
[
"Farag",
"Youmna",
""
],
[
"Rei",
"Marek",
""
],
[
"Briscoe",
"Ted",
""
]
] |
We propose a novel word embedding pre-training approach that exploits writing errors in learners' scripts. We compare our method to previous models that tune the embeddings based on script scores and the discrimination between correct and corrupt word contexts in addition to the generic commonly-used embeddings pre-trained on large corpora. The comparison is achieved by using the aforementioned models to bootstrap a neural network that learns to predict a holistic score for scripts. Furthermore, we investigate augmenting our model with error corrections and monitor the impact on performance. Our results show that our error-oriented approach outperforms other comparable ones, which is further demonstrated when training on more data. Additionally, extending the model with corrections provides further performance gains when data sparsity is an issue.
|
2312.16812
|
Zhang Chen
|
Zhan Li, Zhang Chen, Zhong Li, Yi Xu
|
Spacetime Gaussian Feature Splatting for Real-Time Dynamic View
Synthesis
|
Accepted to CVPR 2024. Project page:
https://oppo-us-research.github.io/SpacetimeGaussians-website/
| null | null | null |
cs.CV cs.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Novel view synthesis of dynamic scenes has been an intriguing yet challenging
problem. Despite recent advancements, simultaneously achieving high-resolution
photorealistic results, real-time rendering, and compact storage remains a
formidable task. To address these challenges, we propose Spacetime Gaussian
Feature Splatting as a novel dynamic scene representation, composed of three
pivotal components. First, we formulate expressive Spacetime Gaussians by
enhancing 3D Gaussians with temporal opacity and parametric motion/rotation.
This enables Spacetime Gaussians to capture static, dynamic, as well as
transient content within a scene. Second, we introduce splatted feature
rendering, which replaces spherical harmonics with neural features. These
features facilitate the modeling of view- and time-dependent appearance while
maintaining small size. Third, we leverage the guidance of training error and
coarse depth to sample new Gaussians in areas that are challenging to converge
with existing pipelines. Experiments on several established real-world datasets
demonstrate that our method achieves state-of-the-art rendering quality and
speed, while retaining compact storage. At 8K resolution, our lite-version
model can render at 60 FPS on an Nvidia RTX 4090 GPU. Our code is available at
https://github.com/oppo-us-research/SpacetimeGaussians.
|
[
{
"created": "Thu, 28 Dec 2023 04:14:55 GMT",
"version": "v1"
},
{
"created": "Thu, 4 Apr 2024 22:31:18 GMT",
"version": "v2"
}
] |
2024-04-08
|
[
[
"Li",
"Zhan",
""
],
[
"Chen",
"Zhang",
""
],
[
"Li",
"Zhong",
""
],
[
"Xu",
"Yi",
""
]
] |
Novel view synthesis of dynamic scenes has been an intriguing yet challenging problem. Despite recent advancements, simultaneously achieving high-resolution photorealistic results, real-time rendering, and compact storage remains a formidable task. To address these challenges, we propose Spacetime Gaussian Feature Splatting as a novel dynamic scene representation, composed of three pivotal components. First, we formulate expressive Spacetime Gaussians by enhancing 3D Gaussians with temporal opacity and parametric motion/rotation. This enables Spacetime Gaussians to capture static, dynamic, as well as transient content within a scene. Second, we introduce splatted feature rendering, which replaces spherical harmonics with neural features. These features facilitate the modeling of view- and time-dependent appearance while maintaining small size. Third, we leverage the guidance of training error and coarse depth to sample new Gaussians in areas that are challenging to converge with existing pipelines. Experiments on several established real-world datasets demonstrate that our method achieves state-of-the-art rendering quality and speed, while retaining compact storage. At 8K resolution, our lite-version model can render at 60 FPS on an Nvidia RTX 4090 GPU. Our code is available at https://github.com/oppo-us-research/SpacetimeGaussians.
|
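The abstract above describes enhancing 3D Gaussians with a "temporal opacity" so each primitive can fade in and out over time. The snippet below is a minimal illustrative sketch of that one idea only, not the authors' implementation; the function name, parameters, and the specific 1D-Gaussian form are assumptions chosen for clarity.

```python
import math

def temporal_opacity(base_opacity, t, t_center, decay):
    """Sketch of a Spacetime Gaussian's temporal opacity: a 1D Gaussian
    in time scales the spatial opacity, so a primitive is fully visible
    near its temporal center and fades away from it."""
    return base_opacity * math.exp(-decay * (t - t_center) ** 2)

# At the temporal center the primitive keeps its full opacity.
print(round(temporal_opacity(0.9, t=0.5, t_center=0.5, decay=10.0), 3))  # 0.9
# Away from the center the opacity decays, modeling transient content.
print(temporal_opacity(0.9, t=1.0, t_center=0.5, decay=10.0) < 0.1)      # True
```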
2407.09902
|
Ian Miller
|
Ian D. Miller, Fernando Cladera, Trey Smith, Camillo Jose Taylor,
Vijay Kumar
|
Air-Ground Collaboration with SPOMP: Semantic Panoramic Online Mapping
and Planning
|
Video: https://www.youtube.com/watch?v=ieNYH40buBo
|
IEEE Transactions on Field Robotics (2024)
|
10.1109/TFR.2024.3424748
| null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Mapping and navigation have gone hand-in-hand since long before robots
existed. Maps are a key form of communication, allowing someone who has never
been somewhere to nonetheless navigate that area successfully. In the context
of multi-robot systems, the maps and information that flow between robots are
necessary for effective collaboration, whether those robots are operating
concurrently, sequentially, or completely asynchronously. In this paper, we
argue that maps must go beyond encoding purely geometric or visual information
to enable increasingly complex autonomy, particularly between robots. We
propose a framework for multi-robot autonomy, focusing in particular on air and
ground robots operating in outdoor 2.5D environments. We show that semantic
maps can enable the specification, planning, and execution of complex
collaborative missions, including localization in GPS-denied settings. A
distinguishing characteristic of this work is that we strongly emphasize field
experiments and testing, and by doing so demonstrate that these ideas can work
at scale in the real world. We also perform extensive simulation experiments to
validate our ideas at even larger scales. We believe these experiments and the
experimental results constitute a significant step forward toward advancing the
state-of-the-art of large-scale, collaborative multi-robot systems operating
with real communication, navigation, and perception constraints.
|
[
{
"created": "Sat, 13 Jul 2024 14:37:44 GMT",
"version": "v1"
}
] |
2024-07-16
|
[
[
"Miller",
"Ian D.",
""
],
[
"Cladera",
"Fernando",
""
],
[
"Smith",
"Trey",
""
],
[
"Taylor",
"Camillo Jose",
""
],
[
"Kumar",
"Vijay",
""
]
] |
Mapping and navigation have gone hand-in-hand since long before robots existed. Maps are a key form of communication, allowing someone who has never been somewhere to nonetheless navigate that area successfully. In the context of multi-robot systems, the maps and information that flow between robots are necessary for effective collaboration, whether those robots are operating concurrently, sequentially, or completely asynchronously. In this paper, we argue that maps must go beyond encoding purely geometric or visual information to enable increasingly complex autonomy, particularly between robots. We propose a framework for multi-robot autonomy, focusing in particular on air and ground robots operating in outdoor 2.5D environments. We show that semantic maps can enable the specification, planning, and execution of complex collaborative missions, including localization in GPS-denied settings. A distinguishing characteristic of this work is that we strongly emphasize field experiments and testing, and by doing so demonstrate that these ideas can work at scale in the real world. We also perform extensive simulation experiments to validate our ideas at even larger scales. We believe these experiments and the experimental results constitute a significant step forward toward advancing the state-of-the-art of large-scale, collaborative multi-robot systems operating with real communication, navigation, and perception constraints.
|
1504.06746
|
Francisco Monteiro
|
Jo\~ao S. Lemos, Francisco Ros\'ario, Francisco A. Monteiro, Jo\~ao
Xavier, Ant\'onio Rodrigues
|
Massive MIMO Full-Duplex Relaying with Optimal Power Allocation for
Independent Multipairs
|
Accepted to the 16th IEEE International Workshop on Signal Processing
Advances in Wireless Communications - SPAWC, Stockholm, Sweden 2015
| null |
10.1109/SPAWC.2015.7227049
| null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With the help of an in-band full-duplex relay station, it is possible to
simultaneously transmit and receive signals from multiple users. The
performance of such a system can be greatly increased when the relay station is
equipped with a large number of antennas on both transmitter and receiver
sides. In this paper, we exploit the use of massive arrays to effectively
suppress the loopback interference (LI) of a decode-and-forward relay (DF) and
evaluate the performance of the end-to-end (e2e) transmission. This paper
assumes imperfect channel state information is available at the relay and
designs a minimum mean-square error (MMSE) filter to mitigate the interference.
Subsequently, we adopt zero-forcing (ZF) filters for both detection and
beamforming. The performance of such a system is evaluated in terms of bit error
rate (BER) at both relay and destinations, and an optimal choice for the
transmission power at the relay is shown. We then propose a
complexity-efficient optimal power allocation (OPA) algorithm that, using the
channel
statistics, computes the minimum power that satisfies the rate constraints of
each pair. The results obtained via simulation show that when both MMSE
filtering and the OPA method are used, higher energy efficiency is
attained.
|
[
{
"created": "Sat, 25 Apr 2015 18:39:57 GMT",
"version": "v1"
},
{
"created": "Tue, 12 May 2015 17:58:40 GMT",
"version": "v2"
}
] |
2016-11-17
|
[
[
"Lemos",
"João S.",
""
],
[
"Rosário",
"Francisco",
""
],
[
"Monteiro",
"Francisco A.",
""
],
[
"Xavier",
"João",
""
],
[
"Rodrigues",
"António",
""
]
] |
With the help of an in-band full-duplex relay station, it is possible to simultaneously transmit and receive signals from multiple users. The performance of such a system can be greatly increased when the relay station is equipped with a large number of antennas on both transmitter and receiver sides. In this paper, we exploit the use of massive arrays to effectively suppress the loopback interference (LI) of a decode-and-forward relay (DF) and evaluate the performance of the end-to-end (e2e) transmission. This paper assumes imperfect channel state information is available at the relay and designs a minimum mean-square error (MMSE) filter to mitigate the interference. Subsequently, we adopt zero-forcing (ZF) filters for both detection and beamforming. The performance of such a system is evaluated in terms of bit error rate (BER) at both relay and destinations, and an optimal choice for the transmission power at the relay is shown. We then propose a complexity-efficient optimal power allocation (OPA) algorithm that, using the channel statistics, computes the minimum power that satisfies the rate constraints of each pair. The results obtained via simulation show that when both MMSE filtering and the OPA method are used, higher energy efficiency is attained.
|
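The abstract above adopts zero-forcing (ZF) filters for detection: the receiver inverts the channel matrix so that, absent noise, the transmitted streams are recovered exactly. The snippet below is a generic ZF-detection sketch, not the paper's relay pipeline; the dimensions and symbols are illustrative assumptions.

```python
import numpy as np

def zf_detect(H, y):
    """Zero-forcing detection: apply the Moore-Penrose pseudo-inverse of
    the channel matrix H to the received vector y."""
    return np.linalg.pinv(H) @ y

# Noiseless sanity check: ZF recovers the transmitted symbols exactly.
rng = np.random.default_rng(0)
H = rng.standard_normal((8, 4))        # 8 receive antennas, 4 user streams
x = np.array([1.0, -1.0, 1.0, 1.0])    # BPSK symbols
y = H @ x                              # received signal (no noise)
x_hat = zf_detect(H, y)
assert np.allclose(x_hat, x)
```

With noise present, ZF amplifies it when H is ill-conditioned, which is one motivation for the MMSE filter the abstract uses against loopback interference.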
2404.14519
|
Della Hendrickson
|
MIT Hardness Group, Della Hendrickson, Andy Tockman
|
Complexity of Planar Graph Orientation Consistency, Promise-Inference,
and Uniqueness, with Applications to Minesweeper Variants
| null | null | null | null |
cs.CC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We study three problems related to the computational complexity of the
popular game Minesweeper. The first is consistency: given a set of clues, is
there any arrangement of mines that satisfies it? This problem has been known
to be NP-complete since 2000, but our framework proves it as a side effect. The
second is inference: given a set of clues, is there any cell that the player
can prove is safe? The coNP-completeness of this problem has been in the
literature since 2011, but we discovered a flaw that we believe is present in
all published results, and we provide a fixed proof. Finally, the third is
solvability: given the full state of a Minesweeper game, can the player win the
game by safely clicking all non-mine cells? This problem has not yet been
studied, and we prove that it is coNP-complete.
|
[
{
"created": "Mon, 22 Apr 2024 18:38:36 GMT",
"version": "v1"
}
] |
2024-04-24
|
[
[
"MIT Hardness Group",
"",
""
],
[
"Hendrickson",
"Della",
""
],
[
"Tockman",
"Andy",
""
]
] |
We study three problems related to the computational complexity of the popular game Minesweeper. The first is consistency: given a set of clues, is there any arrangement of mines that satisfies it? This problem has been known to be NP-complete since 2000, but our framework proves it as a side effect. The second is inference: given a set of clues, is there any cell that the player can prove is safe? The coNP-completeness of this problem has been in the literature since 2011, but we discovered a flaw that we believe is present in all published results, and we provide a fixed proof. Finally, the third is solvability: given the full state of a Minesweeper game, can the player win the game by safely clicking all non-mine cells? This problem has not yet been studied, and we prove that it is coNP-complete.
|
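The consistency problem from the abstract above (is there any arrangement of mines satisfying a set of clues?) can be stated concretely with a brute-force checker. This is an illustrative sketch, not the paper's reduction framework; the data representation is an assumption, and the exponential enumeration is exactly what NP-completeness suggests cannot be avoided in general.

```python
from itertools import product

def consistent(clues, unknowns, neighbors):
    """Brute-force Minesweeper consistency: try every mine assignment on
    the unknown cells and check whether all numeric clues are satisfied.

    clues     : dict clue-cell -> required number of adjacent mines
    unknowns  : list of cells whose mine status is undetermined
    neighbors : dict clue-cell -> list of its adjacent unknown cells
    """
    for mines in product([0, 1], repeat=len(unknowns)):
        assign = dict(zip(unknowns, mines))
        if all(sum(assign[n] for n in neighbors[c]) == k
               for c, k in clues.items()):
            return True
    return False

# A clue "2" touching two unknown cells: both mined -> consistent.
print(consistent({"c": 2}, ["a", "b"], {"c": ["a", "b"]}))   # True
# A clue "3" touching only two unknown cells is unsatisfiable.
print(consistent({"c": 3}, ["a", "b"], {"c": ["a", "b"]}))   # False
```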
1711.09594
|
Alan Lukezic
|
Alan Luke\v{z}i\v{c}, Luka \v{C}ehovin Zajc, Tom\'a\v{s} Voj\'i\v{r},
Ji\v{r}\'i Matas, Matej Kristan
|
FuCoLoT -- A Fully-Correlational Long-Term Tracker
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose FuCoLoT -- a Fully Correlational Long-term Tracker. It exploits
the novel DCF constrained filter learning method to design a detector that is
able to re-detect the target in the whole image efficiently. FuCoLoT maintains
several correlation filters trained on different time scales that act as the
detector components. A novel mechanism based on the correlation response is
used for tracking failure estimation. FuCoLoT achieves state-of-the-art results
on standard short-term benchmarks and it outperforms the current
best-performing tracker on the long-term UAV20L benchmark by over 19%. It has
an order of magnitude smaller memory footprint than its best-performing
competitors and runs at 15fps in a single CPU thread.
|
[
{
"created": "Mon, 27 Nov 2017 09:31:05 GMT",
"version": "v1"
},
{
"created": "Mon, 14 Jan 2019 09:32:02 GMT",
"version": "v2"
}
] |
2019-01-15
|
[
[
"Lukežič",
"Alan",
""
],
[
"Zajc",
"Luka Čehovin",
""
],
[
"Vojíř",
"Tomáš",
""
],
[
"Matas",
"Jiří",
""
],
[
"Kristan",
"Matej",
""
]
] |
We propose FuCoLoT -- a Fully Correlational Long-term Tracker. It exploits the novel DCF constrained filter learning method to design a detector that is able to re-detect the target in the whole image efficiently. FuCoLoT maintains several correlation filters trained on different time scales that act as the detector components. A novel mechanism based on the correlation response is used for tracking failure estimation. FuCoLoT achieves state-of-the-art results on standard short-term benchmarks and it outperforms the current best-performing tracker on the long-term UAV20L benchmark by over 19%. It has an order of magnitude smaller memory footprint than its best-performing competitors and runs at 15fps in a single CPU thread.
|
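The core operation behind the correlation filters in the abstract above is a cross-correlation of a learned filter with an image patch, computed cheaply in the Fourier domain; the peak of the response localizes the target. The snippet below sketches only that generic DCF building block under illustrative assumptions, not FuCoLoT's constrained filter learning or its failure-estimation mechanism.

```python
import numpy as np

def correlation_response(filt, patch):
    """Cross-correlation of a filter with a patch via the FFT
    (the core operation of discriminative correlation filters)."""
    F = np.fft.fft2(filt)
    P = np.fft.fft2(patch)
    return np.real(np.fft.ifft2(np.conj(F) * P))

# The response peak localizes the target: correlating a template with a
# circularly shifted copy of itself peaks at the shift offset.
rng = np.random.default_rng(1)
template = rng.standard_normal((32, 32))
shifted = np.roll(template, shift=(5, 7), axis=(0, 1))
resp = correlation_response(template, shifted)
dy, dx = np.unravel_index(np.argmax(resp), resp.shape)
assert (dy, dx) == (5, 7)
```

The same idea scales to whole-image re-detection, which is why a correlation-based detector can search the full frame efficiently.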
2309.13939
|
Irene Celino
|
Irene Celino and Heiko Paulheim
|
The Time Traveler's Guide to Semantic Web Research: Analyzing Fictitious
Research Themes in the ESWC "Next 20 Years" Track
|
13 pages, 8 figures, 2 tables
| null | null | null |
cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
What will Semantic Web research focus on in 20 years from now? We asked this
question to the community and collected their visions in the "Next 20 years"
track of ESWC 2023. We challenged the participants to submit "future" research
papers, as if they were submitting to the 2043 edition of the conference. The
submissions - entirely fictitious - were expected to be full scientific papers,
with research questions, state of the art references, experimental results and
future work, with the goal to get an idea of the research agenda for the late
2040s and early 2050s. We received ten submissions, eight of which were
accepted for presentation at the conference, that mixed serious ideas of
potential future research themes and discussion topics with some fun and irony.
In this paper, we intend to provide a survey of those "science fiction"
papers, considering the emerging research themes and topics, analysing the
research methods applied by the authors in these very special submissions, and
investigating also the most fictitious parts (e.g., neologisms, fabricated
references). Our goal is twofold: on the one hand, we investigate what this
special track tells us about the Semantic Web community and, on the other hand,
we aim at getting some insights on future research practices and directions.
|
[
{
"created": "Mon, 25 Sep 2023 08:20:06 GMT",
"version": "v1"
}
] |
2023-09-26
|
[
[
"Celino",
"Irene",
""
],
[
"Paulheim",
"Heiko",
""
]
] |
What will Semantic Web research focus on in 20 years from now? We asked this question to the community and collected their visions in the "Next 20 years" track of ESWC 2023. We challenged the participants to submit "future" research papers, as if they were submitting to the 2043 edition of the conference. The submissions - entirely fictitious - were expected to be full scientific papers, with research questions, state of the art references, experimental results and future work, with the goal to get an idea of the research agenda for the late 2040s and early 2050s. We received ten submissions, eight of which were accepted for presentation at the conference, that mixed serious ideas of potential future research themes and discussion topics with some fun and irony. In this paper, we intend to provide a survey of those "science fiction" papers, considering the emerging research themes and topics, analysing the research methods applied by the authors in these very special submissions, and investigating also the most fictitious parts (e.g., neologisms, fabricated references). Our goal is twofold: on the one hand, we investigate what this special track tells us about the Semantic Web community and, on the other hand, we aim at getting some insights on future research practices and directions.
|
2404.08888
|
Yue Zhou
|
Yue Zhou, Barbara Di Eugenio, Brian Ziebart, Lisa Sharp, Bing Liu, Ben
Gerber, Nikolaos Agadakos and Shweta Yadav
|
Towards Enhancing Health Coaching Dialogue in Low-Resource Settings
|
Accepted to the main conference of COLING 2022
| null | null | null |
cs.CL cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Health coaching helps patients identify and accomplish lifestyle-related
goals, effectively improving the control of chronic diseases and mitigating
mental health conditions. However, health coaching is cost-prohibitive due to
its highly personalized and labor-intensive nature. In this paper, we propose
to build a dialogue system that converses with the patients, helps them create
and accomplish specific goals, and can address their emotions with empathy.
However, building such a system is challenging since real-world health coaching
datasets are limited and empathy is subtle. Thus, we propose a modularized
health coaching dialogue system with simplified NLU and NLG frameworks combined
with mechanism-conditioned empathetic response generation. Through automatic
and human evaluation, we show that our system generates more empathetic,
fluent, and coherent responses and outperforms the state-of-the-art in NLU
tasks while requiring less annotation. We view our approach as a key step
towards building automated and more accessible health coaching systems.
|
[
{
"created": "Sat, 13 Apr 2024 03:23:15 GMT",
"version": "v1"
}
] |
2024-04-16
|
[
[
"Zhou",
"Yue",
""
],
[
"Di Eugenio",
"Barbara",
""
],
[
"Ziebart",
"Brian",
""
],
[
"Sharp",
"Lisa",
""
],
[
"Liu",
"Bing",
""
],
[
"Gerber",
"Ben",
""
],
[
"Agadakos",
"Nikolaos",
""
],
[
"Yadav",
"Shweta",
""
]
] |
Health coaching helps patients identify and accomplish lifestyle-related goals, effectively improving the control of chronic diseases and mitigating mental health conditions. However, health coaching is cost-prohibitive due to its highly personalized and labor-intensive nature. In this paper, we propose to build a dialogue system that converses with the patients, helps them create and accomplish specific goals, and can address their emotions with empathy. However, building such a system is challenging since real-world health coaching datasets are limited and empathy is subtle. Thus, we propose a modularized health coaching dialogue system with simplified NLU and NLG frameworks combined with mechanism-conditioned empathetic response generation. Through automatic and human evaluation, we show that our system generates more empathetic, fluent, and coherent responses and outperforms the state-of-the-art in NLU tasks while requiring less annotation. We view our approach as a key step towards building automated and more accessible health coaching systems.
|
2206.01010
|
Weide Liu
|
Weide Liu, Zhonghua Wu, Yiming Wang, Henghui Ding, Fayao Liu, Jie Lin
and Guosheng Lin
|
Long-tailed Recognition by Learning from Latent Categories
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this work, we address the challenging task of long-tailed image
recognition. Previous long-tailed recognition methods commonly focus on the
data augmentation or re-balancing strategy of the tail classes to give more
attention to tail classes during the model training. However, due to the
limited training images for tail classes, the diversity of tail class images is
still restricted, which results in poor feature representations. In this work,
we hypothesize that common latent features among the head and tail classes can
be used to give better feature representation. Motivated by this, we introduce
a Latent Categories based long-tail Recognition (LCReg) method. Specifically,
we propose to learn a set of class-agnostic latent features shared among the
head and tail classes. Then, we implicitly enrich the training sample diversity
via applying semantic data augmentation to the latent features. Extensive
experiments on five long-tailed image recognition datasets demonstrate that our
proposed LCReg is able to significantly outperform previous methods and achieve
state-of-the-art results.
|
[
{
"created": "Thu, 2 Jun 2022 12:19:51 GMT",
"version": "v1"
},
{
"created": "Tue, 2 Aug 2022 07:27:42 GMT",
"version": "v2"
},
{
"created": "Mon, 12 Sep 2022 07:05:51 GMT",
"version": "v3"
}
] |
2022-09-13
|
[
[
"Liu",
"Weide",
""
],
[
"Wu",
"Zhonghua",
""
],
[
"Wang",
"Yiming",
""
],
[
"Ding",
"Henghui",
""
],
[
"Liu",
"Fayao",
""
],
[
"Lin",
"Jie",
""
],
[
"Lin",
"Guosheng",
""
]
] |
In this work, we address the challenging task of long-tailed image recognition. Previous long-tailed recognition methods commonly focus on the data augmentation or re-balancing strategy of the tail classes to give more attention to tail classes during the model training. However, due to the limited training images for tail classes, the diversity of tail class images is still restricted, which results in poor feature representations. In this work, we hypothesize that common latent features among the head and tail classes can be used to give better feature representation. Motivated by this, we introduce a Latent Categories based long-tail Recognition (LCReg) method. Specifically, we propose to learn a set of class-agnostic latent features shared among the head and tail classes. Then, we implicitly enrich the training sample diversity via applying semantic data augmentation to the latent features. Extensive experiments on five long-tailed image recognition datasets demonstrate that our proposed LCReg is able to significantly outperform previous methods and achieve state-of-the-art results.
|
2209.10307
|
Dong Zhang
|
Dong Zhang, Yi Lin, Hao Chen, Zhuotao Tian, Xin Yang, Jinhui Tang,
Kwang Ting Cheng
|
Understanding the Tricks of Deep Learning in Medical Image Segmentation:
Challenges and Future Directions
|
Under submission
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Over the past few years, the rapid development of deep learning technologies
for computer vision has significantly improved the performance of medical image
segmentation (MedISeg). However, the diverse implementation strategies of
various models have led to an extremely complex MedISeg system, resulting in a
potential problem of unfair result comparisons. In this paper, we collect a
series of MedISeg tricks for different model implementation phases (i.e.,
pre-training model, data pre-processing, data augmentation, model
implementation, model inference, and result post-processing), and
experimentally explore the effectiveness of these tricks on consistent
baselines. With the extensive experimental results on both the representative
2D and 3D medical image datasets, we explicitly clarify the effect of these
tricks. Moreover, based on the surveyed tricks, we also open-sourced a strong
MedISeg repository, where each component has the advantage of plug-and-play. We
believe that this milestone work not only completes a comprehensive and
complementary survey of the state-of-the-art MedISeg approaches, but also
offers a practical guide for addressing the future medical image processing
challenges including but not limited to small dataset, class imbalance
learning, multi-modality learning, and domain adaptation. The code and training
weights have been released at: https://github.com/hust-linyi/seg_trick.
|
[
{
"created": "Wed, 21 Sep 2022 12:30:05 GMT",
"version": "v1"
},
{
"created": "Mon, 8 May 2023 10:23:24 GMT",
"version": "v2"
}
] |
2023-05-09
|
[
[
"Zhang",
"Dong",
""
],
[
"Lin",
"Yi",
""
],
[
"Chen",
"Hao",
""
],
[
"Tian",
"Zhuotao",
""
],
[
"Yang",
"Xin",
""
],
[
"Tang",
"Jinhui",
""
],
[
"Cheng",
"Kwang Ting",
""
]
] |
Over the past few years, the rapid development of deep learning technologies for computer vision has significantly improved the performance of medical image segmentation (MedISeg). However, the diverse implementation strategies of various models have led to an extremely complex MedISeg system, resulting in a potential problem of unfair result comparisons. In this paper, we collect a series of MedISeg tricks for different model implementation phases (i.e., pre-training model, data pre-processing, data augmentation, model implementation, model inference, and result post-processing), and experimentally explore the effectiveness of these tricks on consistent baselines. With the extensive experimental results on both the representative 2D and 3D medical image datasets, we explicitly clarify the effect of these tricks. Moreover, based on the surveyed tricks, we also open-sourced a strong MedISeg repository, where each component has the advantage of plug-and-play. We believe that this milestone work not only completes a comprehensive and complementary survey of the state-of-the-art MedISeg approaches, but also offers a practical guide for addressing the future medical image processing challenges including but not limited to small dataset, class imbalance learning, multi-modality learning, and domain adaptation. The code and training weights have been released at: https://github.com/hust-linyi/seg_trick.
|
1609.02305
|
Klaus-Tycho Foerster
|
Klaus-Tycho Foerster, Stefan Schmid, Stefano Vissicchio
|
Survey of Consistent Software-Defined Network Updates
| null |
IEEE Communications Surveys & Tutorials 2019
|
10.1109/COMST.2018.2876749
| null |
cs.NI cs.DC cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Computer networks have become a critical infrastructure. In fact, networks
should not only meet strict requirements in terms of correctness, availability,
and performance, but they should also be very flexible and support fast
updates, e.g., due to policy changes, increasing traffic, or failures. This
paper presents a structured survey of mechanisms and protocols to update
computer networks in a fast and consistent manner. In particular, we identify
and discuss the different desirable consistency properties that should be
provided throughout a network update, the algorithmic techniques which are
needed to meet these consistency properties, and the implications on the speed
and costs at which updates can be performed. We also explain the relationship
between consistent network update problems and classic algorithmic optimization
ones. While our survey is mainly motivated by the advent of Software-Defined
Networks (SDNs) and their primary need for correct and efficient update
techniques, the fundamental underlying problems are not new, and we provide a
historical perspective of the subject as well.
|
[
{
"created": "Thu, 8 Sep 2016 07:34:39 GMT",
"version": "v1"
},
{
"created": "Thu, 8 Feb 2018 15:42:18 GMT",
"version": "v2"
},
{
"created": "Tue, 26 Mar 2019 13:33:38 GMT",
"version": "v3"
}
] |
2019-03-27
|
[
[
"Foerster",
"Klaus-Tycho",
""
],
[
"Schmid",
"Stefan",
""
],
[
"Vissicchio",
"Stefano",
""
]
] |
Computer networks have become a critical infrastructure. In fact, networks should not only meet strict requirements in terms of correctness, availability, and performance, but they should also be very flexible and support fast updates, e.g., due to policy changes, increasing traffic, or failures. This paper presents a structured survey of mechanisms and protocols to update computer networks in a fast and consistent manner. In particular, we identify and discuss the different desirable consistency properties that should be provided throughout a network update, the algorithmic techniques which are needed to meet these consistency properties, and the implications on the speed and costs at which updates can be performed. We also explain the relationship between consistent network update problems and classic algorithmic optimization ones. While our survey is mainly motivated by the advent of Software-Defined Networks (SDNs) and their primary need for correct and efficient update techniques, the fundamental underlying problems are not new, and we provide a historical perspective of the subject as well.
|
1912.05291
|
Michael Horowitz
|
Michael C. Horowitz, Paul Scharre, and Alexander Velez-Green
|
A Stable Nuclear Future? The Impact of Autonomous Systems and Artificial
Intelligence
| null | null | null | null |
cs.CY cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The potential for advances in information-age technologies to undermine
nuclear deterrence and influence the potential for nuclear escalation
represents a critical question for international politics. One challenge is
that uncertainty about the trajectory of technologies such as autonomous
systems and artificial intelligence (AI) makes assessments difficult. This
paper evaluates the relative impact of autonomous systems and artificial
intelligence in three areas: nuclear command and control, nuclear delivery
platforms and vehicles, and conventional applications of autonomous systems
with consequences for nuclear stability. We argue that countries may be more
likely to use risky forms of autonomy when they fear that their second-strike
capabilities will be undermined. Additionally, the potential deployment of
uninhabited, autonomous nuclear delivery platforms and vehicles could raise the
prospect for accidents and miscalculation. Conventional military applications
of autonomous systems could simultaneously influence nuclear force postures and
first-strike stability in previously unanticipated ways. In particular, the
need to fight at machine speed and the cognitive risk introduced by automation
bias could increase the risk of unintended escalation. Finally, used properly,
there should be many applications of more autonomous systems in nuclear
operations that can increase reliability, reduce the risk of accidents, and buy
more time for decision-makers in a crisis.
|
[
{
"created": "Wed, 11 Dec 2019 13:35:36 GMT",
"version": "v1"
},
{
"created": "Fri, 13 Dec 2019 18:37:36 GMT",
"version": "v2"
}
] |
2019-12-28
|
[
[
"Horowitz",
"Michael C.",
""
],
[
"Scharre",
"Paul",
""
],
[
"Velez-Green",
"Alexander",
""
]
] |
The potential for advances in information-age technologies to undermine nuclear deterrence and influence the potential for nuclear escalation represents a critical question for international politics. One challenge is that uncertainty about the trajectory of technologies such as autonomous systems and artificial intelligence (AI) makes assessments difficult. This paper evaluates the relative impact of autonomous systems and artificial intelligence in three areas: nuclear command and control, nuclear delivery platforms and vehicles, and conventional applications of autonomous systems with consequences for nuclear stability. We argue that countries may be more likely to use risky forms of autonomy when they fear that their second-strike capabilities will be undermined. Additionally, the potential deployment of uninhabited, autonomous nuclear delivery platforms and vehicles could raise the prospect for accidents and miscalculation. Conventional military applications of autonomous systems could simultaneously influence nuclear force postures and first-strike stability in previously unanticipated ways. In particular, the need to fight at machine speed and the cognitive risk introduced by automation bias could increase the risk of unintended escalation. Finally, used properly, there should be many applications of more autonomous systems in nuclear operations that can increase reliability, reduce the risk of accidents, and buy more time for decision-makers in a crisis.
|
1904.00158
|
Peipei Li
|
Peipei Li, Huaibo Huang, Yibo Hu, Xiang Wu, Ran He and Zhenan Sun
|
UVA: A Universal Variational Framework for Continuous Age Analysis
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Conventional methods for facial age analysis tend to utilize accurate age
labels in a supervised way. However, existing age datasets lie within a limited
range of ages, leading to a long-tailed distribution. To alleviate the problem,
this paper proposes a Universal Variational Aging (UVA) framework to formulate
facial age priors in a disentangling manner. Benefiting from the variational
evidence lower bound, the facial images are encoded and disentangled into an
age-irrelevant distribution and an age-related distribution in the latent
space. A conditional introspective adversarial learning mechanism is introduced
to boost the image quality. In this way, when manipulating the age-related
distribution, UVA can achieve age translation with arbitrary ages. Further, by
sampling noise from the age-irrelevant distribution, we can generate
photorealistic facial images with a specific age. Moreover, given an input face
image, the mean value of age-related distribution can be treated as an age
estimator. These indicate that UVA can efficiently and accurately estimate the
age-related distribution in a disentangling manner, even if the training
dataset exhibits a long-tailed age distribution. UVA is the first attempt to
achieve facial age analysis tasks, including age translation, age generation
and age estimation, in a universal framework. The qualitative and quantitative
experiments demonstrate the superiority of UVA on five popular datasets,
including CACD2000, Morph, UTKFace, MegaAge-Asian and FG-NET.
|
[
{
"created": "Sat, 30 Mar 2019 07:07:06 GMT",
"version": "v1"
}
] |
2019-04-02
|
[
[
"Li",
"Peipei",
""
],
[
"Huang",
"Huaibo",
""
],
[
"Hu",
"Yibo",
""
],
[
"Wu",
"Xiang",
""
],
[
"He",
"Ran",
""
],
[
"Sun",
"Zhenan",
""
]
] |
Conventional methods for facial age analysis tend to utilize accurate age labels in a supervised way. However, existing age datasets lie within a limited range of ages, leading to a long-tailed distribution. To alleviate the problem, this paper proposes a Universal Variational Aging (UVA) framework to formulate facial age priors in a disentangling manner. Benefiting from the variational evidence lower bound, the facial images are encoded and disentangled into an age-irrelevant distribution and an age-related distribution in the latent space. A conditional introspective adversarial learning mechanism is introduced to boost the image quality. In this way, when manipulating the age-related distribution, UVA can achieve age translation with arbitrary ages. Further, by sampling noise from the age-irrelevant distribution, we can generate photorealistic facial images with a specific age. Moreover, given an input face image, the mean value of age-related distribution can be treated as an age estimator. These indicate that UVA can efficiently and accurately estimate the age-related distribution in a disentangling manner, even if the training dataset exhibits a long-tailed age distribution. UVA is the first attempt to achieve facial age analysis tasks, including age translation, age generation and age estimation, in a universal framework. The qualitative and quantitative experiments demonstrate the superiority of UVA on five popular datasets, including CACD2000, Morph, UTKFace, MegaAge-Asian and FG-NET.
|
2309.08475
|
Japneet Singh
|
Anuran Makur, Japneet Singh
|
Doeblin Coefficients and Related Measures
|
26 pages, 1 figure
|
IEEE Transactions on Information Theory, vol. 70, no. 7, July 2024
|
10.1109/TIT.2024.3367856
| null |
cs.IT math.IT math.PR math.ST stat.TH
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Doeblin coefficients are a classical tool for analyzing the ergodicity and
exponential convergence rates of Markov chains. Propelled by recent works on
contraction coefficients of strong data processing inequalities, we investigate
whether Doeblin coefficients also exhibit some of the notable properties of
canonical contraction coefficients. In this paper, we present several new
structural and geometric properties of Doeblin coefficients. Specifically, we
show that Doeblin coefficients form a multi-way divergence, exhibit
tensorization, and possess an extremal trace characterization. We then show
that they also have extremal coupling and simultaneously maximal coupling
characterizations. By leveraging these characterizations, we demonstrate that
Doeblin coefficients act as a nice generalization of the well-known total
variation (TV) distance to a multi-way divergence, enabling us to measure the
"distance" between multiple distributions rather than just two. We then prove
that Doeblin coefficients exhibit contraction properties over Bayesian networks
similar to other canonical contraction coefficients. We additionally derive
some other results and discuss an application of Doeblin coefficients to
distribution fusion. Finally, in a complementary vein, we introduce and discuss
three new quantities: max-Doeblin coefficient, max-DeGroot distance, and
min-DeGroot distance. The max-Doeblin coefficient shares a connection with the
concept of maximal leakage in information security; we explore its properties
and provide a coupling characterization. On the other hand, the max-DeGroot and
min-DeGroot measures extend the concept of DeGroot distance to multiple
distributions.
|
[
{
"created": "Fri, 15 Sep 2023 15:31:07 GMT",
"version": "v1"
},
{
"created": "Tue, 2 Jul 2024 06:36:33 GMT",
"version": "v2"
}
] |
2024-07-03
|
[
[
"Makur",
"Anuran",
""
],
[
"Singh",
"Japneet",
""
]
] |
Doeblin coefficients are a classical tool for analyzing the ergodicity and exponential convergence rates of Markov chains. Propelled by recent works on contraction coefficients of strong data processing inequalities, we investigate whether Doeblin coefficients also exhibit some of the notable properties of canonical contraction coefficients. In this paper, we present several new structural and geometric properties of Doeblin coefficients. Specifically, we show that Doeblin coefficients form a multi-way divergence, exhibit tensorization, and possess an extremal trace characterization. We then show that they also have extremal coupling and simultaneously maximal coupling characterizations. By leveraging these characterizations, we demonstrate that Doeblin coefficients act as a nice generalization of the well-known total variation (TV) distance to a multi-way divergence, enabling us to measure the "distance" between multiple distributions rather than just two. We then prove that Doeblin coefficients exhibit contraction properties over Bayesian networks similar to other canonical contraction coefficients. We additionally derive some other results and discuss an application of Doeblin coefficients to distribution fusion. Finally, in a complementary vein, we introduce and discuss three new quantities: max-Doeblin coefficient, max-DeGroot distance, and min-DeGroot distance. The max-Doeblin coefficient shares a connection with the concept of maximal leakage in information security; we explore its properties and provide a coupling characterization. On the other hand, the max-DeGroot and min-DeGroot measures extend the concept of DeGroot distance to multiple distributions.
|
1307.8029
|
EPTCS
|
Adel Bouhoula, Tetsuo Ida, Fairouz Kamareddine
|
Proceedings Fourth International Symposium on Symbolic Computation in
Software Science
| null |
EPTCS 122, 2013
|
10.4204/EPTCS.122
| null |
cs.SC cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Symbolic computation is the science of computing with symbolic objects
(terms, formulae, programs, algebraic objects, geometrical objects, etc).
Powerful symbolic algorithms have been developed during the past decades and
have played an influential role in theorem proving, automated reasoning,
software verification, model checking, rewriting, formalisation of mathematics,
network security, Groebner bases, characteristic sets, etc.
The international Symposium on "Symbolic Computation in Software Science" is
the fourth in the SCSS workshop series. SCSS 2008 and 2010 took place at the
Research Institute for Symbolic Computation (RISC), Hagenberg, Austria, and
SCSS 2009 took place in Gammarth, Tunisia. These symposia grew out of internal
workshops that bring together researchers from: a) SCORE (Symbolic Computation
Research Group) at the University of Tsukuba, Japan, b) Theorema Group at the
Research Institute for Symbolic Computation, Johannes Kepler University Linz,
Austria, c) SSFG (Software Science Foundation Group) at Kyoto University,
Japan, and d) Sup'Com (Higher School of Communication of Tunis) at the
University of Carthage, Tunisia.
|
[
{
"created": "Tue, 30 Jul 2013 16:01:33 GMT",
"version": "v1"
}
] |
2013-07-31
|
[
[
"Bouhoula",
"Adel",
""
],
[
"Ida",
"Tetsuo",
""
],
[
"Kamareddine",
"Fairouz",
""
]
] |
Symbolic computation is the science of computing with symbolic objects (terms, formulae, programs, algebraic objects, geometrical objects, etc). Powerful symbolic algorithms have been developed during the past decades and have played an influential role in theorem proving, automated reasoning, software verification, model checking, rewriting, formalisation of mathematics, network security, Groebner bases, characteristic sets, etc. The international Symposium on "Symbolic Computation in Software Science" is the fourth in the SCSS workshop series. SCSS 2008 and 2010 took place at the Research Institute for Symbolic Computation (RISC), Hagenberg, Austria, and SCSS 2009 took place in Gammarth, Tunisia. These symposia grew out of internal workshops that bring together researchers from: a) SCORE (Symbolic Computation Research Group) at the University of Tsukuba, Japan, b) Theorema Group at the Research Institute for Symbolic Computation, Johannes Kepler University Linz, Austria, c) SSFG (Software Science Foundation Group) at Kyoto University, Japan, and d) Sup'Com (Higher School of Communication of Tunis) at the University of Carthage, Tunisia.
|
1611.07610
|
Marianna Rapoport
|
Marianna Rapoport and Ond\v{r}ej Lhot\'ak
|
Mutable WadlerFest DOT
| null | null | null | null |
cs.PL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Dependent Object Types (DOT) calculus aims to model the essence of Scala,
with a focus on abstract type members, path-dependent types, and subtyping.
Other Scala features could be defined by translation to DOT. Mutation is a
fundamental feature of Scala currently missing in DOT. Mutation in DOT is
needed not only to model effectful computation and mutation in Scala programs,
but even to precisely specify how Scala initializes immutable variables and
fields (vals). We present an extension to DOT that adds typed mutable reference
cells. We have proven the extension sound with a mechanized proof in Coq. We
present the key features of our extended calculus and its soundness proof, and
discuss the challenges that we encountered in our search for a sound design and
the alternative solutions that we considered.
|
[
{
"created": "Wed, 23 Nov 2016 02:37:51 GMT",
"version": "v1"
}
] |
2016-11-24
|
[
[
"Rapoport",
"Marianna",
""
],
[
"Lhoták",
"Ondřej",
""
]
] |
The Dependent Object Types (DOT) calculus aims to model the essence of Scala, with a focus on abstract type members, path-dependent types, and subtyping. Other Scala features could be defined by translation to DOT. Mutation is a fundamental feature of Scala currently missing in DOT. Mutation in DOT is needed not only to model effectful computation and mutation in Scala programs, but even to precisely specify how Scala initializes immutable variables and fields (vals). We present an extension to DOT that adds typed mutable reference cells. We have proven the extension sound with a mechanized proof in Coq. We present the key features of our extended calculus and its soundness proof, and discuss the challenges that we encountered in our search for a sound design and the alternative solutions that we considered.
|
2108.07996
|
Shubhangi Agarwal
|
Shubhangi Agarwal, Sourav Dutta, Arnab Bhattacharya
|
VerSaChI: Finding Statistically Significant Subgraph Matches using
Chebyshev's Inequality
| null | null |
10.1145/3459637.3482217
| null |
cs.DB
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Approximate subgraph matching, which is an important primitive for many
applications like question answering, community detection, and motif discovery,
often involves large labeled graphs such as knowledge graphs, social networks,
and protein sequences. Effective methods for extracting matching subgraphs, in
terms of label and structural similarities to a query, should depict accuracy,
computational efficiency, and robustness to noise. In this paper, we propose
VerSaChI for finding the top-k most similar subgraphs based on 2-hop label and
structural overlap similarity with the query. The similarity is characterized
using Chebyshev's inequality to compute the chi-square statistical significance
for measuring the degree of matching of the subgraphs. Experiments on real-life
graph datasets showcase significant improvements in terms of accuracy compared
to state-of-the-art methods, as well as robustness to noise.
|
[
{
"created": "Wed, 18 Aug 2021 06:53:39 GMT",
"version": "v1"
}
] |
2021-08-19
|
[
[
"Agarwal",
"Shubhangi",
""
],
[
"Dutta",
"Sourav",
""
],
[
"Bhattacharya",
"Arnab",
""
]
] |
Approximate subgraph matching, which is an important primitive for many applications like question answering, community detection, and motif discovery, often involves large labeled graphs such as knowledge graphs, social networks, and protein sequences. Effective methods for extracting matching subgraphs, in terms of label and structural similarities to a query, should depict accuracy, computational efficiency, and robustness to noise. In this paper, we propose VerSaChI for finding the top-k most similar subgraphs based on 2-hop label and structural overlap similarity with the query. The similarity is characterized using Chebyshev's inequality to compute the chi-square statistical significance for measuring the degree of matching of the subgraphs. Experiments on real-life graph datasets showcase significant improvements in terms of accuracy compared to state-of-the-art methods, as well as robustness to noise.
|
1710.00208
|
Mohammad Etemad
|
Mohammad Etemad and Alptekin K\"up\c{c}\"u and Charalampos Papamanthou
and David Evans
|
Efficient Dynamic Searchable Encryption with Forward Privacy
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Searchable symmetric encryption (SSE) enables a client to perform searches
over its outsourced encrypted files while preserving privacy of the files and
queries. Dynamic schemes, where files can be added or removed, leak more
information than static schemes. For dynamic schemes, forward privacy requires
that a newly added file cannot be linked to previous searches. We present a new
dynamic SSE scheme that achieves forward privacy by replacing the keys revealed
to the server on each search. Our scheme is efficient and parallelizable and
outperforms the best previous schemes providing forward privacy, and achieves
competitive performance with dynamic schemes without forward privacy. We
provide a full security proof in the random oracle model. In our experiments on
the Wikipedia archive of about four million pages, the server takes one second
to perform a search with 100,000 results.
|
[
{
"created": "Sat, 30 Sep 2017 14:45:06 GMT",
"version": "v1"
}
] |
2017-10-03
|
[
[
"Etemad",
"Mohammad",
""
],
[
"Küpçü",
"Alptekin",
""
],
[
"Papamanthou",
"Charalampos",
""
],
[
"Evans",
"David",
""
]
] |
Searchable symmetric encryption (SSE) enables a client to perform searches over its outsourced encrypted files while preserving privacy of the files and queries. Dynamic schemes, where files can be added or removed, leak more information than static schemes. For dynamic schemes, forward privacy requires that a newly added file cannot be linked to previous searches. We present a new dynamic SSE scheme that achieves forward privacy by replacing the keys revealed to the server on each search. Our scheme is efficient and parallelizable and outperforms the best previous schemes providing forward privacy, and achieves competitive performance with dynamic schemes without forward privacy. We provide a full security proof in the random oracle model. In our experiments on the Wikipedia archive of about four million pages, the server takes one second to perform a search with 100,000 results.
|
1403.2009
|
Olawale Hassan
|
Olawale Hassan (1), Iyad Kanj (1), Daniel Lokshtanov (2), and Ljubomir
Perkovi\'c (1) ((1) School of Computing, DePaul University, (2) Department of
Informatics, University of Bergen, Bergen, Norway)
|
On the Ordered List Subgraph Embedding Problems
| null | null | null | null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In the (parameterized) Ordered List Subgraph Embedding problem (p-OLSE) we
are given two graphs $G$ and $H$, each with a linear order defined on its
vertices, a function $L$ that associates with every vertex in $G$ a list of
vertices in $H$, and a parameter $k$. The question is to decide if we can embed
(one-to-one) a subgraph $S$ of $G$ of $k$ vertices into $H$ such that: (1)
every vertex of $S$ is mapped to a vertex from its associated list, (2) the
linear orders inherited by $S$ and its image under the embedding are respected,
and (3) if there is an edge between two vertices in $S$ then there is an edge
between their images. If we require the subgraph $S$ to be embedded as an
induced subgraph, we obtain the Ordered List Induced Subgraph Embedding problem
(p-OLISE). The p-OLSE and p-OLISE problems model various problems in
Bioinformatics related to structural comparison/alignment of proteins.
We investigate the complexity of p-OLSE and p-OLISE with respect to the
following structural parameters: the width $\Delta_L$ of the function $L$ (size
of the largest list), and the maximum degree $\Delta_H$ of $H$ and $\Delta_G$
of $G$. In terms of the structural parameters under consideration, we draw a
complete complexity landscape of p-OLSE and p-OLISE (and their optimization
versions) with respect to the computational frameworks of classical complexity,
parameterized complexity, and approximation.
|
[
{
"created": "Sat, 8 Mar 2014 22:10:25 GMT",
"version": "v1"
}
] |
2014-03-11
|
[
[
"Hassan",
"Olawale",
""
],
[
"Kanj",
"Iyad",
""
],
[
"Lokshtanov",
"Daniel",
""
],
[
"Perković",
"Ljubomir",
""
]
] |
In the (parameterized) Ordered List Subgraph Embedding problem (p-OLSE) we are given two graphs $G$ and $H$, each with a linear order defined on its vertices, a function $L$ that associates with every vertex in $G$ a list of vertices in $H$, and a parameter $k$. The question is to decide if we can embed (one-to-one) a subgraph $S$ of $G$ of $k$ vertices into $H$ such that: (1) every vertex of $S$ is mapped to a vertex from its associated list, (2) the linear orders inherited by $S$ and its image under the embedding are respected, and (3) if there is an edge between two vertices in $S$ then there is an edge between their images. If we require the subgraph $S$ to be embedded as an induced subgraph, we obtain the Ordered List Induced Subgraph Embedding problem (p-OLISE). The p-OLSE and p-OLISE problems model various problems in Bioinformatics related to structural comparison/alignment of proteins. We investigate the complexity of p-OLSE and p-OLISE with respect to the following structural parameters: the width $\Delta_L$ of the function $L$ (size of the largest list), and the maximum degree $\Delta_H$ of $H$ and $\Delta_G$ of $G$. In terms of the structural parameters under consideration, we draw a complete complexity landscape of p-OLSE and p-OLISE (and their optimization versions) with respect to the computational frameworks of classical complexity, parameterized complexity, and approximation.
|
2011.07631
|
Debesh Jha
|
Debesh Jha, Sharib Ali, Nikhil Kumar Tomar, H{\aa}vard D. Johansen,
Dag D. Johansen, Jens Rittscher, Michael A. Riegler, and P{\aa}l Halvorsen
|
Real-Time Polyp Detection, Localization and Segmentation in Colonoscopy
Using Deep Learning
| null |
Published in: IEEE Access, Page(s): 40496 - 40510, Date of
Publication: 04 March 2021, Electronic ISSN: 2169-3536, PubMed ID: 33747684
Publisher: IEEE
|
10.1109/ACCESS.2021.3063716
| null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Computer-aided detection, localisation, and segmentation methods can help
improve colonoscopy procedures. Even though many methods have been built to
tackle automatic detection and segmentation of polyps, benchmarking of
state-of-the-art methods still remains an open problem. This is due to the
increasing number of researched computer vision methods that can be applied to
polyp datasets. Benchmarking of novel methods can provide a direction to the
development of automated polyp detection and segmentation tasks. Furthermore,
it ensures that the produced results in the community are reproducible and
provide a fair comparison of developed methods. In this paper, we benchmark
several recent state-of-the-art methods using Kvasir-SEG, an open-access
dataset of colonoscopy images for polyp detection, localisation, and
segmentation, evaluating both method accuracy and speed. While most methods in
the literature achieve competitive accuracy, we show that the
proposed ColonSegNet achieved a better trade-off between an average precision
of 0.8000 and mean IoU of 0.8100, and the fastest speed of 180 frames per
second for the detection and localisation task. Likewise, the proposed
ColonSegNet achieved a competitive dice coefficient of 0.8206 and the best
average speed of 182.38 frames per second for the segmentation task. Our
comprehensive comparison with various state-of-the-art methods reveals the
importance of benchmarking the deep learning methods for automated real-time
polyp identification and delineation that can potentially transform current
clinical practices and minimise miss-detection rates.
|
[
{
"created": "Sun, 15 Nov 2020 21:14:50 GMT",
"version": "v1"
},
{
"created": "Wed, 31 Mar 2021 20:21:06 GMT",
"version": "v2"
}
] |
2021-04-02
|
[
[
"Jha",
"Debesh",
""
],
[
"Ali",
"Sharib",
""
],
[
"Tomar",
"Nikhil Kumar",
""
],
[
"Johansen",
"Håvard D.",
""
],
[
"Johansen",
"Dag D.",
""
],
[
"Rittscher",
"Jens",
""
],
[
"Riegler",
"Michael A.",
""
],
[
"Halvorsen",
"Pål",
""
]
] |
Computer-aided detection, localisation, and segmentation methods can help improve colonoscopy procedures. Even though many methods have been built to tackle automatic detection and segmentation of polyps, benchmarking of state-of-the-art methods still remains an open problem. This is due to the increasing number of researched computer vision methods that can be applied to polyp datasets. Benchmarking of novel methods can provide a direction to the development of automated polyp detection and segmentation tasks. Furthermore, it ensures that the produced results in the community are reproducible and provide a fair comparison of developed methods. In this paper, we benchmark several recent state-of-the-art methods using Kvasir-SEG, an open-access dataset of colonoscopy images for polyp detection, localisation, and segmentation, evaluating both method accuracy and speed. While most methods in the literature achieve competitive accuracy, we show that the proposed ColonSegNet achieved a better trade-off between an average precision of 0.8000 and mean IoU of 0.8100, and the fastest speed of 180 frames per second for the detection and localisation task. Likewise, the proposed ColonSegNet achieved a competitive dice coefficient of 0.8206 and the best average speed of 182.38 frames per second for the segmentation task. Our comprehensive comparison with various state-of-the-art methods reveals the importance of benchmarking the deep learning methods for automated real-time polyp identification and delineation that can potentially transform current clinical practices and minimise miss-detection rates.
|
2408.00147
|
Colin Shea-Blymyer
|
Colin Shea-Blymyer, Houssam Abbas
|
Formal Ethical Obligations in Reinforcement Learning Agents:
Verification and Policy Updates
| null | null | null | null |
cs.AI cs.LO
|
http://creativecommons.org/licenses/by-sa/4.0/
|
When designing agents for operation in uncertain environments, designers need
tools to automatically reason about what agents ought to do, how that conflicts
with what is actually happening, and how a policy might be modified to remove
the conflict. These obligations include ethical and social obligations,
permissions and prohibitions, which constrain how the agent achieves its
mission and executes its policy. We propose a new deontic logic, Expected Act
Utilitarian deontic logic, for enabling this reasoning at design time: for
specifying and verifying the agent's strategic obligations, then modifying its
policy from a reference policy to meet those obligations. Unlike approaches
that work at the reward level, working at the logical level increases the
transparency of the trade-offs. We introduce two algorithms: one for
model-checking whether an RL agent has the right strategic obligations, and one
for modifying a reference decision policy to make it meet obligations expressed
in our logic. We illustrate our algorithms on DAC-MDPs which accurately
abstract neural decision policies, and on toy gridworld environments.
|
[
{
"created": "Wed, 31 Jul 2024 20:21:15 GMT",
"version": "v1"
}
] |
2024-08-02
|
[
[
"Shea-Blymyer",
"Colin",
""
],
[
"Abbas",
"Houssam",
""
]
] |
When designing agents for operation in uncertain environments, designers need tools to automatically reason about what agents ought to do, how that conflicts with what is actually happening, and how a policy might be modified to remove the conflict. These obligations include ethical and social obligations, permissions and prohibitions, which constrain how the agent achieves its mission and executes its policy. We propose a new deontic logic, Expected Act Utilitarian deontic logic, for enabling this reasoning at design time: for specifying and verifying the agent's strategic obligations, then modifying its policy from a reference policy to meet those obligations. Unlike approaches that work at the reward level, working at the logical level increases the transparency of the trade-offs. We introduce two algorithms: one for model-checking whether an RL agent has the right strategic obligations, and one for modifying a reference decision policy to make it meet obligations expressed in our logic. We illustrate our algorithms on DAC-MDPs which accurately abstract neural decision policies, and on toy gridworld environments.
|
2008.06453
|
Davide Ancona
|
Davide Ancona and Angelo Ferrando and Viviana Mascardi
|
Can determinism and compositionality coexist in RML? (extended version)
| null | null | null | null |
cs.LO cs.PL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Runtime verification (RV) consists in dynamically verifying that the event
traces generated by single runs of a system under scrutiny (SUS) are compliant
with the formal specification of its expected properties. RML (Runtime
Monitoring Language) is a simple but expressive Domain Specific Language for
RV; its semantics is based on a trace calculus formalized by a deterministic
rewriting system which drives the implementation of the interpreter of the
monitors generated by the RML compiler from the specifications. While
determinism of the trace calculus ensures better performances of the generated
monitors, it makes the semantics of its operators less intuitive. In this paper
we move a first step towards a compositional semantics of the RML trace
calculus, by interpreting its basic operators as operations on sets of
instantiated event traces and by proving that such an interpretation is
equivalent to the operational semantics of the calculus.
|
[
{
"created": "Fri, 14 Aug 2020 16:33:36 GMT",
"version": "v1"
},
{
"created": "Mon, 17 Aug 2020 15:24:13 GMT",
"version": "v2"
}
] |
2020-08-18
|
[
[
"Ancona",
"Davide",
""
],
[
"Ferrando",
"Angelo",
""
],
[
"Mascardi",
"Viviana",
""
]
] |
Runtime verification (RV) consists in dynamically verifying that the event traces generated by single runs of a system under scrutiny (SUS) are compliant with the formal specification of its expected properties. RML (Runtime Monitoring Language) is a simple but expressive Domain Specific Language for RV; its semantics is based on a trace calculus formalized by a deterministic rewriting system which drives the implementation of the interpreter of the monitors generated by the RML compiler from the specifications. While determinism of the trace calculus ensures better performances of the generated monitors, it makes the semantics of its operators less intuitive. In this paper we move a first step towards a compositional semantics of the RML trace calculus, by interpreting its basic operators as operations on sets of instantiated event traces and by proving that such an interpretation is equivalent to the operational semantics of the calculus.
|