| id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | orig_abstract | versions | update_date | authors_parsed | abstract |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2306.06236 | Xiyang Wu | Xiyang Wu, Rohan Chandra, Tianrui Guan, Amrit Singh Bedi, Dinesh
Manocha | iPLAN: Intent-Aware Planning in Heterogeneous Traffic via Distributed
Multi-Agent Reinforcement Learning | null | null | null | null | cs.MA cs.LG cs.RO | http://creativecommons.org/licenses/by/4.0/ | Navigating safely and efficiently in dense and heterogeneous traffic
scenarios is challenging for autonomous vehicles (AVs) due to their inability
to infer the behaviors or intentions of nearby drivers. In this work, we
introduce a distributed multi-agent reinforcement learning (MARL) algorithm
that can predict trajectories and intents in dense and heterogeneous traffic
scenarios. Our approach for intent-aware planning, iPLAN, allows agents to
infer nearby drivers' intents solely from their local observations. We model
two distinct incentives for agents' strategies: Behavioral Incentive for
high-level decision-making based on their driving behavior or personality and
Instant Incentive for motion planning for collision avoidance based on the
current traffic state. Our approach enables agents to infer their opponents'
behavior incentives and integrate this inferred information into their
decision-making and motion-planning processes. We perform experiments on two
simulation environments, Non-Cooperative Navigation and Heterogeneous Highway.
In Heterogeneous Highway, results show that, compared with centralized training
decentralized execution (CTDE) MARL baselines such as QMIX and MAPPO, our
method yields a 4.3% and 38.4% higher episodic reward in mild and chaotic
traffic, with 48.1% higher success rate and 80.6% longer survival time in
chaotic traffic. We also compare with a decentralized training decentralized
execution (DTDE) baseline IPPO and demonstrate a higher episodic reward of
12.7% and 6.3% in mild traffic and chaotic traffic, 25.3% higher success rate,
and 13.7% longer survival time.
| [
{
"created": "Fri, 9 Jun 2023 20:12:02 GMT",
"version": "v1"
},
{
"created": "Thu, 17 Aug 2023 03:43:51 GMT",
"version": "v2"
},
{
"created": "Mon, 21 Aug 2023 05:06:36 GMT",
"version": "v3"
}
] | 2023-08-22 | [
[
"Wu",
"Xiyang",
""
],
[
"Chandra",
"Rohan",
""
],
[
"Guan",
"Tianrui",
""
],
[
"Bedi",
"Amrit Singh",
""
],
[
"Manocha",
"Dinesh",
""
]
] | Navigating safely and efficiently in dense and heterogeneous traffic scenarios is challenging for autonomous vehicles (AVs) due to their inability to infer the behaviors or intentions of nearby drivers. In this work, we introduce a distributed multi-agent reinforcement learning (MARL) algorithm that can predict trajectories and intents in dense and heterogeneous traffic scenarios. Our approach for intent-aware planning, iPLAN, allows agents to infer nearby drivers' intents solely from their local observations. We model two distinct incentives for agents' strategies: Behavioral Incentive for high-level decision-making based on their driving behavior or personality and Instant Incentive for motion planning for collision avoidance based on the current traffic state. Our approach enables agents to infer their opponents' behavior incentives and integrate this inferred information into their decision-making and motion-planning processes. We perform experiments on two simulation environments, Non-Cooperative Navigation and Heterogeneous Highway. In Heterogeneous Highway, results show that, compared with centralized training decentralized execution (CTDE) MARL baselines such as QMIX and MAPPO, our method yields a 4.3% and 38.4% higher episodic reward in mild and chaotic traffic, with 48.1% higher success rate and 80.6% longer survival time in chaotic traffic. We also compare with a decentralized training decentralized execution (DTDE) baseline IPPO and demonstrate a higher episodic reward of 12.7% and 6.3% in mild traffic and chaotic traffic, 25.3% higher success rate, and 13.7% longer survival time. |
2402.17269 | Cam-Van Thi Nguyen | Cam-Van Thi Nguyen, Cao-Bach Nguyen, Quang-Thuy Ha, Duc-Trong Le | Curriculum Learning Meets Directed Acyclic Graph for Multimodal Emotion
Recognition | Accepted by LREC-COLING 2024 | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Emotion recognition in conversation (ERC) is a crucial task in natural
language processing and affective computing. This paper proposes MultiDAG+CL, a
novel approach for Multimodal Emotion Recognition in Conversation (ERC) that
employs Directed Acyclic Graph (DAG) to integrate textual, acoustic, and visual
features within a unified framework. The model is enhanced by Curriculum
Learning (CL) to address challenges related to emotional shifts and data
imbalance. Curriculum learning facilitates the learning process by gradually
presenting training samples in a meaningful order, thereby improving the
model's performance in handling emotional variations and data imbalance.
Experimental results on the IEMOCAP and MELD datasets demonstrate that the
MultiDAG+CL models outperform baseline models. We release the code for
MultiDAG+CL and experiments: https://github.com/vanntc711/MultiDAG-CL
| [
{
"created": "Tue, 27 Feb 2024 07:28:05 GMT",
"version": "v1"
},
{
"created": "Fri, 8 Mar 2024 06:00:12 GMT",
"version": "v2"
}
] | 2024-03-11 | [
[
"Nguyen",
"Cam-Van Thi",
""
],
[
"Nguyen",
"Cao-Bach",
""
],
[
"Ha",
"Quang-Thuy",
""
],
[
"Le",
"Duc-Trong",
""
]
] | Emotion recognition in conversation (ERC) is a crucial task in natural language processing and affective computing. This paper proposes MultiDAG+CL, a novel approach for Multimodal Emotion Recognition in Conversation (ERC) that employs Directed Acyclic Graph (DAG) to integrate textual, acoustic, and visual features within a unified framework. The model is enhanced by Curriculum Learning (CL) to address challenges related to emotional shifts and data imbalance. Curriculum learning facilitates the learning process by gradually presenting training samples in a meaningful order, thereby improving the model's performance in handling emotional variations and data imbalance. Experimental results on the IEMOCAP and MELD datasets demonstrate that the MultiDAG+CL models outperform baseline models. We release the code for MultiDAG+CL and experiments: https://github.com/vanntc711/MultiDAG-CL |
2404.13655 | Zehao Dong | Zehao Dong, Muhan Zhang, Yixin Chen | SPGNN: Recognizing Salient Subgraph Patterns via Enhanced Graph
Convolution and Pooling | null | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Graph neural networks (GNNs) have revolutionized the field of machine
learning on non-Euclidean data such as graphs and networks. GNNs effectively
implement node representation learning through neighborhood aggregation and
achieve impressive results in many graph-related tasks. However, most
neighborhood aggregation approaches are summation-based, which can be
problematic as they may not be sufficiently expressive to encode informative
graph structures. Furthermore, though the graph pooling module is also of vital
importance for graph learning, especially for the task of graph classification,
research on graph down-sampling mechanisms is rather limited.
To address the above challenges, we propose a concatenation-based graph
convolution mechanism that injectively updates node representations to maximize
the discriminative power in distinguishing non-isomorphic subgraphs. In
addition, we design a novel graph pooling module, called WL-SortPool, to learn
important subgraph patterns in a deep-learning manner. WL-SortPool layer-wise
sorts node representations (i.e. continuous WL colors) to separately learn the
relative importance of subtrees with different depths for the purpose of
classification, thus better characterizing the complex graph topology and rich
information encoded in the graph. We propose a novel Subgraph Pattern GNN
(SPGNN) architecture that incorporates these enhancements. We test the proposed
SPGNN architecture on many graph classification benchmarks. Experimental
results show that our method can achieve highly competitive results with
state-of-the-art graph kernels and other GNN approaches.
| [
{
"created": "Sun, 21 Apr 2024 13:11:59 GMT",
"version": "v1"
},
{
"created": "Mon, 29 Apr 2024 16:21:25 GMT",
"version": "v2"
}
] | 2024-04-30 | [
[
"Dong",
"Zehao",
""
],
[
"Zhang",
"Muhan",
""
],
[
"Chen",
"Yixin",
""
]
] | Graph neural networks (GNNs) have revolutionized the field of machine learning on non-Euclidean data such as graphs and networks. GNNs effectively implement node representation learning through neighborhood aggregation and achieve impressive results in many graph-related tasks. However, most neighborhood aggregation approaches are summation-based, which can be problematic as they may not be sufficiently expressive to encode informative graph structures. Furthermore, though the graph pooling module is also of vital importance for graph learning, especially for the task of graph classification, research on graph down-sampling mechanisms is rather limited. To address the above challenges, we propose a concatenation-based graph convolution mechanism that injectively updates node representations to maximize the discriminative power in distinguishing non-isomorphic subgraphs. In addition, we design a novel graph pooling module, called WL-SortPool, to learn important subgraph patterns in a deep-learning manner. WL-SortPool layer-wise sorts node representations (i.e. continuous WL colors) to separately learn the relative importance of subtrees with different depths for the purpose of classification, thus better characterizing the complex graph topology and rich information encoded in the graph. We propose a novel Subgraph Pattern GNN (SPGNN) architecture that incorporates these enhancements. We test the proposed SPGNN architecture on many graph classification benchmarks. Experimental results show that our method can achieve highly competitive results with state-of-the-art graph kernels and other GNN approaches. |
1901.07766 | Yu Ji | Yu Ji, Zixin Liu, Xing Hu, Peiqi Wang, Youhui Zhang | Programmable Neural Network Trojan for Pre-Trained Feature Extractor | null | null | null | null | cs.CR cs.CV cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Neural network (NN) trojaning attack is an emerging and important attack
model that can broadly damage systems deployed with NN models. Existing
studies have explored the outsourced training attack scenario and the transfer
learning attack scenario on small datasets for specific domains, with
limited numbers of fixed target classes. In this paper, we propose a more
powerful trojaning attack method for both the outsourced training attack and
the transfer learning attack, which outperforms existing studies in capability,
generality, and stealthiness. First, the attack is programmable: the
malicious misclassification target is not fixed and can be generated on demand
even after the victim's deployment. Second, our trojan attack is not limited to
a small domain; one trojaned model on a large-scale dataset can affect
applications in different domains that reuse its general features. Third, our
trojan design is hard to detect or eliminate even if the victims
fine-tune the whole model.
| [
{
"created": "Wed, 23 Jan 2019 08:18:48 GMT",
"version": "v1"
}
] | 2019-01-24 | [
[
"Ji",
"Yu",
""
],
[
"Liu",
"Zixin",
""
],
[
"Hu",
"Xing",
""
],
[
"Wang",
"Peiqi",
""
],
[
"Zhang",
"Youhui",
""
]
] | Neural network (NN) trojaning attack is an emerging and important attack model that can broadly damage systems deployed with NN models. Existing studies have explored the outsourced training attack scenario and the transfer learning attack scenario on small datasets for specific domains, with limited numbers of fixed target classes. In this paper, we propose a more powerful trojaning attack method for both the outsourced training attack and the transfer learning attack, which outperforms existing studies in capability, generality, and stealthiness. First, the attack is programmable: the malicious misclassification target is not fixed and can be generated on demand even after the victim's deployment. Second, our trojan attack is not limited to a small domain; one trojaned model on a large-scale dataset can affect applications in different domains that reuse its general features. Third, our trojan design is hard to detect or eliminate even if the victims fine-tune the whole model. |
1902.03549 | Moustapha Diaby | Moustapha Diaby, Mark H. Karwan, and Lei Sun | On modeling hard combinatorial optimization problems as linear programs:
Refutations of the "unconditional impossibility" claims | 17 pages; 3 figures | null | null | null | cs.CC cs.DS math.OC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | There has been a series of developments in the recent literature (by
essentially the same "circle" of authors) with the absolute/unconditioned
(implicit or explicit) claim that there exists no abstraction of an NP-Complete
combinatorial optimization problem in which the defining combinatorial
configurations (such as "tours" in the case of the traveling salesman problem
(TSP) for example) can be modeled by a polynomial-sized system of linear
constraints. The purpose of this paper is to provide general as well as
specific refutations for these recent claims.
| [
{
"created": "Sun, 10 Feb 2019 07:09:22 GMT",
"version": "v1"
}
] | 2019-02-12 | [
[
"Diaby",
"Moustapha",
""
],
[
"Karwan",
"Mark H.",
""
],
[
"Sun",
"Lei",
""
]
] | There has been a series of developments in the recent literature (by essentially the same "circle" of authors) with the absolute/unconditioned (implicit or explicit) claim that there exists no abstraction of an NP-Complete combinatorial optimization problem in which the defining combinatorial configurations (such as "tours" in the case of the traveling salesman problem (TSP) for example) can be modeled by a polynomial-sized system of linear constraints. The purpose of this paper is to provide general as well as specific refutations for these recent claims. |
1501.01073 | Samia Allaoua Chelloug | Samia Allaoua Chelloug | Impact of the Temperature and Humidity Variations on Link Quality of
xm1000 Mote Sensors | 9 pages in International Journal of Ad hoc, Sensor & Ubiquitous
Computing (IJASUC) Vol.5, No.6, December 2014 | null | 10.5121/ijasuc.2014.5603 | null | cs.NI | http://creativecommons.org/licenses/by/3.0/ | The core motivations of deploying a sensor network for a specific application
come from the autonomy of sensors, their reduced size, and their capabilities
for computing and communicating in a short range. However, many challenges for
sensor networks still exist: minimizing energy consumption and ensuring the
performance of communication that may be affected by many parameters. The work
described in this paper mainly covers the analysis of the impact of
temperature and humidity variations on the link quality of XM1000 motes operating under
TinyOS. A two-way ANOVA test was applied, and the obtained results show that
both the temperature and humidity variations impact RSSI.
| [
{
"created": "Tue, 6 Jan 2015 04:16:19 GMT",
"version": "v1"
}
] | 2015-01-07 | [
[
"Chelloug",
"Samia Allaoua",
""
]
] | The core motivations of deploying a sensor network for a specific application come from the autonomy of sensors, their reduced size, and their capabilities for computing and communicating in a short range. However, many challenges for sensor networks still exist: minimizing energy consumption and ensuring the performance of communication that may be affected by many parameters. The work described in this paper mainly covers the analysis of the impact of temperature and humidity variations on the link quality of XM1000 motes operating under TinyOS. A two-way ANOVA test was applied, and the obtained results show that both the temperature and humidity variations impact RSSI. |
2002.09089 | Daniel Brown | Daniel S. Brown, Russell Coleman, Ravi Srinivasan, Scott Niekum | Safe Imitation Learning via Fast Bayesian Reward Inference from
Preferences | In proceedings ICML 2020 | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Bayesian reward learning from demonstrations enables rigorous safety and
uncertainty analysis when performing imitation learning. However, Bayesian
reward learning methods are typically computationally intractable for complex
control problems. We propose Bayesian Reward Extrapolation (Bayesian REX), a
highly efficient Bayesian reward learning algorithm that scales to
high-dimensional imitation learning problems by pre-training a low-dimensional
feature encoding via self-supervised tasks and then leveraging preferences over
demonstrations to perform fast Bayesian inference. Bayesian REX can learn to
play Atari games from demonstrations, without access to the game score and can
generate 100,000 samples from the posterior over reward functions in only 5
minutes on a personal laptop. Bayesian REX also results in imitation learning
performance that is competitive with or better than state-of-the-art methods
that only learn point estimates of the reward function. Finally, Bayesian REX
enables efficient high-confidence policy evaluation without having access to
samples of the reward function. These high-confidence performance bounds can be
used to rank the performance and risk of a variety of evaluation policies and
provide a way to detect reward hacking behaviors.
| [
{
"created": "Fri, 21 Feb 2020 02:04:54 GMT",
"version": "v1"
},
{
"created": "Wed, 8 Jul 2020 04:42:38 GMT",
"version": "v2"
},
{
"created": "Fri, 7 Aug 2020 17:55:39 GMT",
"version": "v3"
},
{
"created": "Thu, 17 Dec 2020 21:48:13 GMT",
"version": "v4"
}
] | 2020-12-21 | [
[
"Brown",
"Daniel S.",
""
],
[
"Coleman",
"Russell",
""
],
[
"Srinivasan",
"Ravi",
""
],
[
"Niekum",
"Scott",
""
]
] | Bayesian reward learning from demonstrations enables rigorous safety and uncertainty analysis when performing imitation learning. However, Bayesian reward learning methods are typically computationally intractable for complex control problems. We propose Bayesian Reward Extrapolation (Bayesian REX), a highly efficient Bayesian reward learning algorithm that scales to high-dimensional imitation learning problems by pre-training a low-dimensional feature encoding via self-supervised tasks and then leveraging preferences over demonstrations to perform fast Bayesian inference. Bayesian REX can learn to play Atari games from demonstrations, without access to the game score and can generate 100,000 samples from the posterior over reward functions in only 5 minutes on a personal laptop. Bayesian REX also results in imitation learning performance that is competitive with or better than state-of-the-art methods that only learn point estimates of the reward function. Finally, Bayesian REX enables efficient high-confidence policy evaluation without having access to samples of the reward function. These high-confidence performance bounds can be used to rank the performance and risk of a variety of evaluation policies and provide a way to detect reward hacking behaviors. |
2204.02139 | Dongkeun Kim | Dongkeun Kim, Jinsung Lee, Minsu Cho, Suha Kwak | Detector-Free Weakly Supervised Group Activity Recognition | Accepted to CVPR 2022 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Group activity recognition is the task of understanding the activity
conducted by a group of people as a whole in a multi-person video. Existing
models for this task are often impractical in that they demand ground-truth
bounding box labels of actors even in testing or rely on off-the-shelf object
detectors. Motivated by this, we propose a novel model for group activity
recognition that depends neither on bounding box labels nor on object detectors.
Our Transformer-based model localizes and encodes partial contexts of a
group activity by leveraging the attention mechanism, and represents a video
clip as a set of partial context embeddings. The embedding vectors are then
aggregated to form a single group representation that reflects the entire
context of an activity while capturing the temporal evolution of each partial
context. Our method achieves outstanding performance on two benchmarks,
the Volleyball and NBA datasets, surpassing not only the state of the art trained
with the same level of supervision, but also some existing models relying on
stronger supervision.
| [
{
"created": "Tue, 5 Apr 2022 12:05:04 GMT",
"version": "v1"
}
] | 2022-04-06 | [
[
"Kim",
"Dongkeun",
""
],
[
"Lee",
"Jinsung",
""
],
[
"Cho",
"Minsu",
""
],
[
"Kwak",
"Suha",
""
]
] | Group activity recognition is the task of understanding the activity conducted by a group of people as a whole in a multi-person video. Existing models for this task are often impractical in that they demand ground-truth bounding box labels of actors even in testing or rely on off-the-shelf object detectors. Motivated by this, we propose a novel model for group activity recognition that depends neither on bounding box labels nor on object detectors. Our Transformer-based model localizes and encodes partial contexts of a group activity by leveraging the attention mechanism, and represents a video clip as a set of partial context embeddings. The embedding vectors are then aggregated to form a single group representation that reflects the entire context of an activity while capturing the temporal evolution of each partial context. Our method achieves outstanding performance on two benchmarks, the Volleyball and NBA datasets, surpassing not only the state of the art trained with the same level of supervision, but also some existing models relying on stronger supervision. |
1609.04554 | Marco Tiloca | Marco Tiloca, Alexandra Stagkopoulou, Gianluca Dini | Performance and Security Evaluation of SDN Networks in OMNeT++/INET | Published in: A. Foerster, V. Vesely, A. Virdis, M. Kirsche (Eds.),
Proc. of the 3rd OMNeT++ Community Summit, Brno University of Technology -
Czech Republic - September 15-16, 2016 | null | null | OMNET/2016/03 | cs.NI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Software Defined Networking (SDN) has been recently introduced as a new
communication paradigm in computer networks. By separating the control plane
from the data plane and entrusting packet forwarding to straightforward
switches, SDN makes it possible to deploy and run networks which are more
flexible to manage and easier to configure. This paper describes a set of
extensions for the INET framework, which allow researchers and network
designers to simulate SDN architectures and evaluate their performance and
security at design time. Together with performance evaluation and design
optimization of SDN networks, our extensions enable the simulation of SDN-based
anomaly detection and mitigation techniques, as well as the quantitative
evaluation of cyber-physical attacks and their impact on the network and
application. This work is an ongoing research activity, and we plan to propose
it for an official contribution to the INET framework.
| [
{
"created": "Thu, 15 Sep 2016 09:46:32 GMT",
"version": "v1"
}
] | 2016-09-16 | [
[
"Tiloca",
"Marco",
""
],
[
"Stagkopoulou",
"Alexandra",
""
],
[
"Dini",
"Gianluca",
""
]
] | Software Defined Networking (SDN) has been recently introduced as a new communication paradigm in computer networks. By separating the control plane from the data plane and entrusting packet forwarding to straightforward switches, SDN makes it possible to deploy and run networks which are more flexible to manage and easier to configure. This paper describes a set of extensions for the INET framework, which allow researchers and network designers to simulate SDN architectures and evaluate their performance and security at design time. Together with performance evaluation and design optimization of SDN networks, our extensions enable the simulation of SDN-based anomaly detection and mitigation techniques, as well as the quantitative evaluation of cyber-physical attacks and their impact on the network and application. This work is an ongoing research activity, and we plan to propose it for an official contribution to the INET framework. |
1711.06349 | Joshua Gardner | Josh Gardner, Christopher Brooks | Student Success Prediction in MOOCs | null | null | 10.1007/s11257-018-9203-z | null | cs.CY stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Predictive models of student success in Massive Open Online Courses (MOOCs)
are a critical component of effective content personalization and adaptive
interventions. In this article we review the state of the art in predictive
models of student success in MOOCs and present a categorization of MOOC
research according to the predictors (features), prediction (outcomes), and
underlying theoretical model. We critically survey work across each category,
providing data on the raw data source, feature engineering, statistical model,
evaluation method, prediction architecture, and other aspects of these
experiments. Such a review is particularly useful given the rapid expansion of
predictive modeling research in MOOCs since the emergence of major MOOC
platforms in 2012. This survey reveals several key methodological gaps, which
include extensive filtering of experimental subpopulations, ineffective student
model evaluation, and the use of experimental data which would be unavailable
for real-world student success prediction and intervention, which is the
ultimate goal of such models. Finally, we highlight opportunities for future
research, which include temporal modeling, research bridging predictive and
explanatory student models, work which contributes to learning theory, and
evaluating long-term learner success in MOOCs.
| [
{
"created": "Thu, 16 Nov 2017 23:12:47 GMT",
"version": "v1"
},
{
"created": "Tue, 3 Apr 2018 17:09:39 GMT",
"version": "v2"
},
{
"created": "Wed, 11 Apr 2018 01:00:09 GMT",
"version": "v3"
}
] | 2018-04-23 | [
[
"Gardner",
"Josh",
""
],
[
"Brooks",
"Christopher",
""
]
] | Predictive models of student success in Massive Open Online Courses (MOOCs) are a critical component of effective content personalization and adaptive interventions. In this article we review the state of the art in predictive models of student success in MOOCs and present a categorization of MOOC research according to the predictors (features), prediction (outcomes), and underlying theoretical model. We critically survey work across each category, providing data on the raw data source, feature engineering, statistical model, evaluation method, prediction architecture, and other aspects of these experiments. Such a review is particularly useful given the rapid expansion of predictive modeling research in MOOCs since the emergence of major MOOC platforms in 2012. This survey reveals several key methodological gaps, which include extensive filtering of experimental subpopulations, ineffective student model evaluation, and the use of experimental data which would be unavailable for real-world student success prediction and intervention, which is the ultimate goal of such models. Finally, we highlight opportunities for future research, which include temporal modeling, research bridging predictive and explanatory student models, work which contributes to learning theory, and evaluating long-term learner success in MOOCs. |
0904.3316 | Shariq Bashir Mr. | Shariq Bashir, and Abdul Rauf Baig | Ramp: Fast Frequent Itemset Mining with Efficient Bit-Vector Projection
Technique | null | null | null | null | cs.DB cs.AI cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Mining frequent itemsets using a bit-vector representation approach is very
efficient for dense datasets, but highly inefficient for sparse datasets
due to the lack of an efficient bit-vector projection technique. In this paper we
present a novel, efficient bit-vector projection technique for sparse and dense
datasets. To demonstrate the efficiency of our bit-vector projection technique, we
present a new frequent itemset mining algorithm, Ramp (Real Algorithm for Mining
Patterns), built upon our bit-vector projection technique. The performance of
Ramp is compared with the current best (all, maximal, and closed) frequent
itemset mining algorithms on benchmark datasets. Experimental results
on sparse and dense datasets show that mining frequent itemsets using Ramp is
faster than the current best algorithms, which demonstrates the effectiveness of our
bit-vector projection idea. We also present a new local maximal frequent
itemset propagation and maximal itemset superset checking approach, FastLMFI,
built upon our PBR bit-vector projection technique. Our computational
experiments suggest that itemset maximality checking using FastLMFI is faster and
more efficient than a previous well-known progressive focusing approach.
| [
{
"created": "Tue, 21 Apr 2009 18:49:13 GMT",
"version": "v1"
}
] | 2009-04-22 | [
[
"Bashir",
"Shariq",
""
],
[
"Baig",
"Abdul Rauf",
""
]
] | Mining frequent itemsets using a bit-vector representation approach is very efficient for dense datasets, but highly inefficient for sparse datasets due to the lack of an efficient bit-vector projection technique. In this paper we present a novel, efficient bit-vector projection technique for sparse and dense datasets. To demonstrate the efficiency of our bit-vector projection technique, we present a new frequent itemset mining algorithm, Ramp (Real Algorithm for Mining Patterns), built upon our bit-vector projection technique. The performance of Ramp is compared with the current best (all, maximal, and closed) frequent itemset mining algorithms on benchmark datasets. Experimental results on sparse and dense datasets show that mining frequent itemsets using Ramp is faster than the current best algorithms, which demonstrates the effectiveness of our bit-vector projection idea. We also present a new local maximal frequent itemset propagation and maximal itemset superset checking approach, FastLMFI, built upon our PBR bit-vector projection technique. Our computational experiments suggest that itemset maximality checking using FastLMFI is faster and more efficient than a previous well-known progressive focusing approach. |
2310.17729 | Jonayet Miah | Razib Hayat Khan, Jonayet Miah, S M Yasir Arafat, M M Mahbubul Syeed,
Duc M Ca | Improving Traffic Density Forecasting in Intelligent Transportation
Systems Using Gated Graph Neural Networks | null | null | null | null | cs.LG cs.AI cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This study delves into the application of graph neural networks in the realm
of traffic forecasting, a crucial facet of intelligent transportation systems.
Accurate traffic predictions are vital for functions like trip planning,
traffic control, and vehicle routing in such systems. Three prominent GNN
architectures, Graph Convolutional Networks (GCNs), GraphSAGE (Graph Sample
and Aggregation), and Gated Graph Neural Networks (GGNNs), are explored within
the context of traffic prediction. Each architecture's methodology is
thoroughly examined, including layer configurations, activation functions, and
hyperparameters. The primary goal is to minimize prediction errors, with GGNNs
emerging as the most effective choice among the three models. The research
outlines outcomes for each architecture, elucidating their predictive
performance through root mean squared error (RMSE) and mean absolute error
(MAE). Hypothetical results reveal
intriguing insights: GCNs display an RMSE of 9.10 and an MAE of 8.00, while
GraphSAGE shows improvement with an RMSE of 8.3 and an MAE of 7.5. Gated Graph
Neural Networks (GGNNs) exhibit the lowest RMSE at 9.15 and an impressive MAE
of 7.1, positioning them as the frontrunner.
| [
{
"created": "Thu, 26 Oct 2023 18:40:28 GMT",
"version": "v1"
}
] | 2023-10-30 | [
[
"Khan",
"Razib Hayat",
""
],
[
"Miah",
"Jonayet",
""
],
[
"Arafat",
"S M Yasir",
""
],
[
"Syeed",
"M M Mahbubul",
""
],
[
"Ca",
"Duc M",
""
]
] | This study delves into the application of graph neural networks in the realm of traffic forecasting, a crucial facet of intelligent transportation systems. Accurate traffic predictions are vital for functions like trip planning, traffic control, and vehicle routing in such systems. Three prominent GNN architectures, Graph Convolutional Networks (GCNs), GraphSAGE (Graph Sample and Aggregation), and Gated Graph Neural Networks (GGNNs), are explored within the context of traffic prediction. Each architecture's methodology is thoroughly examined, including layer configurations, activation functions, and hyperparameters. The primary goal is to minimize prediction errors, with GGNNs emerging as the most effective choice among the three models. The research outlines outcomes for each architecture, elucidating their predictive performance through root mean squared error (RMSE) and mean absolute error (MAE). Hypothetical results reveal intriguing insights: GCNs display an RMSE of 9.10 and an MAE of 8.00, while GraphSAGE shows improvement with an RMSE of 8.3 and an MAE of 7.5. Gated Graph Neural Networks (GGNNs) exhibit the lowest RMSE at 9.15 and an impressive MAE of 7.1, positioning them as the frontrunner. |
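The two error measures quoted in the record above can be computed as follows; a generic sketch, independent of the paper's models and data.

```python
import math

def rmse(y_true, y_pred):
    """Root mean squared error over paired observations."""
    n = len(y_true)
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n)

def mae(y_true, y_pred):
    """Mean absolute error over paired observations."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)
```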
1803.07480 | Maximilian Schleich | Mahmoud Abo Khamis and Hung Q. Ngo and XuanLong Nguyen and Dan Olteanu
and Maximilian Schleich | AC/DC: In-Database Learning Thunderstruck | 10 pages, 3 figures | null | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We report on the design and implementation of the AC/DC gradient descent
solver for a class of optimization problems over normalized databases. AC/DC
decomposes an optimization problem into a set of aggregates over the join of
the database relations. It then uses the answers to these aggregates to
iteratively improve the solution to the problem until it converges.
The challenges faced by AC/DC are the large database size, the mixture of
continuous and categorical features, and the large number of aggregates to
compute. AC/DC addresses these challenges by employing a sparse data
representation, factorized computation, problem reparameterization under
functional dependencies, and a data structure that supports shared computation
of aggregates.
To train polynomial regression models and factorization machines of up to
154K features over the natural join of all relations from a real-world dataset
of up to 86M tuples, AC/DC needs up to 30 minutes on one core of a commodity
machine. This is up to three orders of magnitude faster than its competitors R,
MadLib, libFM, and TensorFlow whenever they finish and thus do not exceed
memory limitation, 24-hour timeout, or internal design limitations.
| [
{
"created": "Tue, 20 Mar 2018 15:17:14 GMT",
"version": "v1"
},
{
"created": "Fri, 15 Jun 2018 04:35:21 GMT",
"version": "v2"
}
] | 2018-06-18 | [
[
"Khamis",
"Mahmoud Abo",
""
],
[
"Ngo",
"Hung Q.",
""
],
[
"Nguyen",
"XuanLong",
""
],
[
"Olteanu",
"Dan",
""
],
[
"Schleich",
"Maximilian",
""
]
] | We report on the design and implementation of the AC/DC gradient descent solver for a class of optimization problems over normalized databases. AC/DC decomposes an optimization problem into a set of aggregates over the join of the database relations. It then uses the answers to these aggregates to iteratively improve the solution to the problem until it converges. The challenges faced by AC/DC are the large database size, the mixture of continuous and categorical features, and the large number of aggregates to compute. AC/DC addresses these challenges by employing a sparse data representation, factorized computation, problem reparameterization under functional dependencies, and a data structure that supports shared computation of aggregates. To train polynomial regression models and factorization machines of up to 154K features over the natural join of all relations from a real-world dataset of up to 86M tuples, AC/DC needs up to 30 minutes on one core of a commodity machine. This is up to three orders of magnitude faster than its competitors R, MadLib, libFM, and TensorFlow whenever they finish and thus do not exceed memory limitation, 24-hour timeout, or internal design limitations. |
1211.3500 | Guoxu Zhou | Guoxu Zhou, Andrzej Cichocki, and Shengli Xie | Accelerated Canonical Polyadic Decomposition by Using Mode Reduction | 12 pages. Accepted by TNNLS | null | 10.1109/TNNLS.2013.2271507 | null | cs.NA cs.LG math.NA | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Canonical Polyadic (or CANDECOMP/PARAFAC, CP) decompositions (CPD) are widely
applied to analyze high order tensors. Existing CPD methods use alternating
least square (ALS) iterations and hence need to unfold tensors to each of the
$N$ modes frequently, which is one major bottleneck of efficiency for
large-scale data and especially when $N$ is large. To overcome this problem, in
this paper we propose a new CPD method which first converts the original $N$th
($N>3$) order tensor to a 3rd-order tensor. The full CPD is then realized by
decomposing this mode-reduced tensor, followed by a Khatri-Rao product
projection procedure. This approach is quite efficient, as unfolding to each of
the $N$ modes is avoided, and dimensionality reduction can also be easily
incorporated to further improve the efficiency. We show that, under mild
conditions, any $N$th-order CPD can be converted into a 3rd-order case but
without destroying the essential uniqueness, and theoretically gives the same
results as direct $N$-way CPD methods. Simulations show that, compared with
state-of-the-art CPD methods, the proposed method is more efficient and escapes
from local solutions more easily.
| [
{
"created": "Thu, 15 Nov 2012 05:50:30 GMT",
"version": "v1"
},
{
"created": "Tue, 25 Jun 2013 03:06:52 GMT",
"version": "v2"
}
] | 2013-06-27 | [
[
"Zhou",
"Guoxu",
""
],
[
"Cichocki",
"Andrzej",
""
],
[
"Xie",
"Shengli",
""
]
] | Canonical Polyadic (or CANDECOMP/PARAFAC, CP) decompositions (CPD) are widely applied to analyze high order tensors. Existing CPD methods use alternating least square (ALS) iterations and hence need to unfold tensors to each of the $N$ modes frequently, which is one major bottleneck of efficiency for large-scale data and especially when $N$ is large. To overcome this problem, in this paper we propose a new CPD method which first converts the original $N$th ($N>3$) order tensor to a 3rd-order tensor. The full CPD is then realized by decomposing this mode-reduced tensor, followed by a Khatri-Rao product projection procedure. This approach is quite efficient, as unfolding to each of the $N$ modes is avoided, and dimensionality reduction can also be easily incorporated to further improve the efficiency. We show that, under mild conditions, any $N$th-order CPD can be converted into a 3rd-order case but without destroying the essential uniqueness, and theoretically gives the same results as direct $N$-way CPD methods. Simulations show that, compared with state-of-the-art CPD methods, the proposed method is more efficient and escapes from local solutions more easily. |
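The mode-merging step the record above describes, collapsing an $N$th-order tensor into a 3rd-order one, amounts to grouping the modes into three consecutive sets and reshaping. A minimal NumPy sketch under our own naming; the CPD itself and the Khatri-Rao back-projection are not shown.

```python
import numpy as np

def reduce_to_third_order(tensor, split):
    """Merge the modes of an Nth-order tensor into three groups.

    `split` = (i, j) partitions the modes into [0, i), [i, j), [j, N);
    each group is flattened into a single mode, yielding a 3rd-order
    tensor with the same entries in the same memory order.
    """
    shape = tensor.shape
    i, j = split
    new_shape = (int(np.prod(shape[:i])),
                 int(np.prod(shape[i:j])),
                 int(np.prod(shape[j:])))
    return tensor.reshape(new_shape)
```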
1402.6281 | Tomasz Brengos | Tomasz Brengos | On coalgebras with internal moves | Article: 23 pages, Appendix: 3 pages | null | null | null | cs.LO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the first part of the paper we recall the coalgebraic approach to handling
the so-called invisible transitions that appear in different state-based
systems semantics. We claim that these transitions are always part of the unit
of a certain monad. Hence, coalgebras with internal moves are exactly
coalgebras over a monadic type. The rest of the paper is devoted to supporting
our claim by studying two important behavioural equivalences for state-based
systems with internal moves, namely: weak bisimulation and trace semantics.
We continue our research on weak bisimulations for coalgebras over order
enriched monads. The key notions used in this paper and proposed by us in our
previous work are the notions of an order saturation monad and a saturator. A
saturator operator can be intuitively understood as a reflexive, transitive
closure operator. There are two approaches towards defining saturators for
coalgebras with internal moves. Here, we give necessary conditions for them to
yield the same notion of weak bisimulation.
Finally, we propose a definition of trace semantics for coalgebras with
silent moves via a uniform fixed point operator. We compare strong and weak
bisimilarity together with trace semantics for coalgebras with internal steps.
| [
{
"created": "Tue, 25 Feb 2014 19:12:55 GMT",
"version": "v1"
}
] | 2014-02-26 | [
[
"Brengos",
"Tomasz",
""
]
] | In the first part of the paper we recall the coalgebraic approach to handling the so-called invisible transitions that appear in different state-based systems semantics. We claim that these transitions are always part of the unit of a certain monad. Hence, coalgebras with internal moves are exactly coalgebras over a monadic type. The rest of the paper is devoted to supporting our claim by studying two important behavioural equivalences for state-based systems with internal moves, namely: weak bisimulation and trace semantics. We continue our research on weak bisimulations for coalgebras over order enriched monads. The key notions used in this paper and proposed by us in our previous work are the notions of an order saturation monad and a saturator. A saturator operator can be intuitively understood as a reflexive, transitive closure operator. There are two approaches towards defining saturators for coalgebras with internal moves. Here, we give necessary conditions for them to yield the same notion of weak bisimulation. Finally, we propose a definition of trace semantics for coalgebras with silent moves via a uniform fixed point operator. We compare strong and weak bisimilarity together with trace semantics for coalgebras with internal steps. |
2305.11811 | Yang You | Yang You, Vincent Thomas, Francis Colas, Olivier Buffet | Monte-Carlo Search for an Equilibrium in Dec-POMDPs | Accepted to UAI 2023, preliminary version | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Decentralized partially observable Markov decision processes (Dec-POMDPs)
formalize the problem of designing individual controllers for a group of
collaborative agents under stochastic dynamics and partial observability.
Seeking a global optimum is difficult (NEXP-complete), but seeking a Nash
equilibrium -- each agent's policy being a best response to the other agents' --
is more accessible, and has allowed addressing infinite-horizon problems with
solutions in the form of finite state controllers (FSCs). In this paper, we show that
this approach can be adapted to cases where only a generative model (a
simulator) of the Dec-POMDP is available. This requires relying on a
simulation-based POMDP solver to construct an agent's FSC node by node. A
related process is used to heuristically derive initial FSCs. Experiments on
benchmarks show that MC-JESP is competitive with existing Dec-POMDP solvers, and
even better than many offline methods using explicit models.
| [
{
"created": "Fri, 19 May 2023 16:47:46 GMT",
"version": "v1"
}
] | 2023-05-22 | [
[
"You",
"Yang",
""
],
[
"Thomas",
"Vincent",
""
],
[
"Colas",
"Francis",
""
],
[
"Buffet",
"Olivier",
""
]
] | Decentralized partially observable Markov decision processes (Dec-POMDPs) formalize the problem of designing individual controllers for a group of collaborative agents under stochastic dynamics and partial observability. Seeking a global optimum is difficult (NEXP-complete), but seeking a Nash equilibrium -- each agent's policy being a best response to the other agents' -- is more accessible, and has allowed addressing infinite-horizon problems with solutions in the form of finite state controllers (FSCs). In this paper, we show that this approach can be adapted to cases where only a generative model (a simulator) of the Dec-POMDP is available. This requires relying on a simulation-based POMDP solver to construct an agent's FSC node by node. A related process is used to heuristically derive initial FSCs. Experiments on benchmarks show that MC-JESP is competitive with existing Dec-POMDP solvers, and even better than many offline methods using explicit models. |
0804.3817 | Jan Arpe | Jan Arpe and Elchanan Mossel | Multiple Random Oracles Are Better Than One | 17 pages | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study the problem of learning k-juntas given access to examples drawn from
a number of different product distributions. Thus we wish to learn a function f
: {-1,1}^n -> {-1,1} that depends on k (unknown) coordinates. While the best
known algorithms for the general problem of learning a k-junta require running
time of n^k * poly(n,2^k), we show that given access to k different product
distributions with biases separated by \gamma>0, the functions may be learned
in time poly(n,2^k,\gamma^{-k}). More generally, given access to t <= k
different product distributions, the functions may be learned in time n^{k/t} *
poly(n,2^k,\gamma^{-k}). Our techniques involve novel results in Fourier
analysis relating Fourier expansions with respect to different biases and a
generalization of Russo's formula.
| [
{
"created": "Wed, 23 Apr 2008 23:18:00 GMT",
"version": "v1"
}
] | 2008-04-25 | [
[
"Arpe",
"Jan",
""
],
[
"Mossel",
"Elchanan",
""
]
] | We study the problem of learning k-juntas given access to examples drawn from a number of different product distributions. Thus we wish to learn a function f : {-1,1}^n -> {-1,1} that depends on k (unknown) coordinates. While the best known algorithms for the general problem of learning a k-junta require running time of n^k * poly(n,2^k), we show that given access to k different product distributions with biases separated by \gamma>0, the functions may be learned in time poly(n,2^k,\gamma^{-k}). More generally, given access to t <= k different product distributions, the functions may be learned in time n^{k/t} * poly(n,2^k,\gamma^{-k}). Our techniques involve novel results in Fourier analysis relating Fourier expansions with respect to different biases and a generalization of Russo's formula. |
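One intuition behind the record above, that examples from product distributions expose a junta's relevant coordinates, can be sketched with a Monte-Carlo estimate of the correlation E[f(x) x_i]. This is a toy illustration only, not the paper's algorithm, and the function names are ours.

```python
import random

def sample_biased(n, p):
    """Draw x in {-1, 1}^n from a product distribution with P[x_i = 1] = p."""
    return [1 if random.random() < p else -1 for _ in range(n)]

def correlation_estimates(f, n, p, samples=5000):
    """Monte-Carlo estimates of E[f(x) x_i] for each coordinate i.

    For an unbiased product distribution (p = 1/2) these are the degree-1
    Fourier coefficients, and coordinates outside the junta correlate to
    roughly zero while relevant ones can stand out.
    """
    acc = [0.0] * n
    for _ in range(samples):
        x = sample_biased(n, p)
        fx = f(x)
        for i in range(n):
            acc[i] += fx * x[i]
    return [a / samples for a in acc]
```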
2309.11507 | Ludovic Dos Santos | Veronika Shilova, Ludovic Dos Santos, Flavian Vasile, Ga\"etan Racic,
Ugo Tanielian | AdBooster: Personalized Ad Creative Generation using Stable Diffusion
Outpainting | Fifth Workshop on Recommender Systems in Fashion (Fashion x RecSys
2023) | null | null | null | cs.IR cs.LG | http://creativecommons.org/licenses/by/4.0/ | In digital advertising, the selection of the optimal item (recommendation)
and its best creative presentation (creative optimization) have traditionally
been considered separate disciplines. However, both contribute significantly to
user satisfaction, underpinning our assumption that satisfaction relies on both an item's
relevance and its presentation, particularly in the case of visual creatives.
In response, we introduce the task of {\itshape Generative Creative
Optimization (GCO)}, which proposes the use of generative models for creative
generation that incorporate user interests, and {\itshape AdBooster}, a model
for personalized ad creatives based on the Stable Diffusion outpainting
architecture. This model uniquely incorporates user interests both during
fine-tuning and at generation time. To further improve AdBooster's performance,
we also introduce an automated data augmentation pipeline. Through our
experiments on simulated data, we validate AdBooster's effectiveness in
generating more relevant creatives than default product images, showing its
potential to enhance user engagement.
| [
{
"created": "Fri, 8 Sep 2023 12:57:05 GMT",
"version": "v1"
}
] | 2023-09-22 | [
[
"Shilova",
"Veronika",
""
],
[
"Santos",
"Ludovic Dos",
""
],
[
"Vasile",
"Flavian",
""
],
[
"Racic",
"Gaëtan",
""
],
[
"Tanielian",
"Ugo",
""
]
] | In digital advertising, the selection of the optimal item (recommendation) and its best creative presentation (creative optimization) have traditionally been considered separate disciplines. However, both contribute significantly to user satisfaction, underpinning our assumption that it relies on both an item's relevance and its presentation, particularly in the case of visual creatives. In response, we introduce the task of {\itshape Generative Creative Optimization (GCO)}, which proposes the use of generative models for creative generation that incorporate user interests, and {\itshape AdBooster}, a model for personalized ad creatives based on the Stable Diffusion outpainting architecture. This model uniquely incorporates user interests both during fine-tuning and at generation time. To further improve AdBooster's performance, we also introduce an automated data augmentation pipeline. Through our experiments on simulated data, we validate AdBooster's effectiveness in generating more relevant creatives than default product images, showing its potential of enhancing user engagement. |
2406.04306 | Lukas Aichberger | Lukas Aichberger, Kajetan Schweighofer, Mykyta Ielanskyi, Sepp
Hochreiter | Semantically Diverse Language Generation for Uncertainty Estimation in
Language Models | null | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | Large language models (LLMs) can suffer from hallucinations when generating
text. These hallucinations impede various applications in society and industry
by making LLMs untrustworthy. Current LLMs generate text in an autoregressive
fashion by predicting and appending text tokens. When an LLM is uncertain about
the semantic meaning of the next tokens to generate, it is likely to start
hallucinating. Thus, it has been suggested that hallucinations stem from
predictive uncertainty. We introduce Semantically Diverse Language Generation
(SDLG) to quantify predictive uncertainty in LLMs. SDLG steers the LLM to
generate semantically diverse yet likely alternatives for an initially
generated text. This approach provides a precise measure of aleatoric semantic
uncertainty, detecting whether the initial text is likely to be hallucinated.
Experiments on question-answering tasks demonstrate that SDLG consistently
outperforms existing methods while being the most computationally efficient,
setting a new standard for uncertainty estimation in LLMs.
| [
{
"created": "Thu, 6 Jun 2024 17:53:34 GMT",
"version": "v1"
}
] | 2024-06-07 | [
[
"Aichberger",
"Lukas",
""
],
[
"Schweighofer",
"Kajetan",
""
],
[
"Ielanskyi",
"Mykyta",
""
],
[
"Hochreiter",
"Sepp",
""
]
] | Large language models (LLMs) can suffer from hallucinations when generating text. These hallucinations impede various applications in society and industry by making LLMs untrustworthy. Current LLMs generate text in an autoregressive fashion by predicting and appending text tokens. When an LLM is uncertain about the semantic meaning of the next tokens to generate, it is likely to start hallucinating. Thus, it has been suggested that hallucinations stem from predictive uncertainty. We introduce Semantically Diverse Language Generation (SDLG) to quantify predictive uncertainty in LLMs. SDLG steers the LLM to generate semantically diverse yet likely alternatives for an initially generated text. This approach provides a precise measure of aleatoric semantic uncertainty, detecting whether the initial text is likely to be hallucinated. Experiments on question-answering tasks demonstrate that SDLG consistently outperforms existing methods while being the most computationally efficient, setting a new standard for uncertainty estimation in LLMs. |
2102.02723 | Ratish Puduppully | Ratish Puduppully and Mirella Lapata | Data-to-text Generation with Macro Planning | To appear in Transactions of the Association for Computational
Linguistics (TACL); 17 pages | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent approaches to data-to-text generation have adopted the very successful
encoder-decoder architecture or variants thereof. These models generate text
which is fluent (but often imprecise) and perform quite poorly at selecting
appropriate content and ordering it coherently. To overcome some of these
issues, we propose a neural model with a macro planning stage followed by a
generation stage reminiscent of traditional methods which embrace separate
modules for planning and surface realization. Macro plans represent high level
organization of important content such as entities, events and their
interactions; they are learnt from data and given as input to the generator.
Extensive experiments on two data-to-text benchmarks (RotoWire and MLB) show
that our approach outperforms competitive baselines in terms of automatic and
human evaluation.
| [
{
"created": "Thu, 4 Feb 2021 16:32:57 GMT",
"version": "v1"
}
] | 2021-02-05 | [
[
"Puduppully",
"Ratish",
""
],
[
"Lapata",
"Mirella",
""
]
] | Recent approaches to data-to-text generation have adopted the very successful encoder-decoder architecture or variants thereof. These models generate text which is fluent (but often imprecise) and perform quite poorly at selecting appropriate content and ordering it coherently. To overcome some of these issues, we propose a neural model with a macro planning stage followed by a generation stage reminiscent of traditional methods which embrace separate modules for planning and surface realization. Macro plans represent high level organization of important content such as entities, events and their interactions; they are learnt from data and given as input to the generator. Extensive experiments on two data-to-text benchmarks (RotoWire and MLB) show that our approach outperforms competitive baselines in terms of automatic and human evaluation. |
2003.13058 | Alireza M. Javid | Alireza M. Javid, Arun Venkitaraman, Mikael Skoglund, and Saikat
Chatterjee | High-dimensional Neural Feature Design for Layer-wise Reduction of
Training Cost | 2020 EURASIP Journal on Advances in Signal Processing | null | 10.1186/s13634-020-00695-2 | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We design a ReLU-based multilayer neural network by mapping the feature
vectors to a higher dimensional space in every layer. We design the weight
matrices in every layer to ensure a reduction of the training cost as the
number of layers increases. Linear projection to the target in the higher
dimensional space leads to a lower training cost if a convex cost is minimized.
An $\ell_2$-norm convex constraint is used in the minimization to reduce the
generalization error and avoid overfitting. The regularization hyperparameters
of the network are derived analytically to guarantee a monotonic decrement of
the training cost, and therefore, it eliminates the need for cross-validation
to find the regularization hyperparameter in each layer. We show that the
proposed architecture is norm-preserving and provides an invertible feature
vector, and therefore, can be used to reduce the training cost of any other
learning method which employs linear projection to estimate the target.
| [
{
"created": "Sun, 29 Mar 2020 15:57:28 GMT",
"version": "v1"
},
{
"created": "Fri, 21 Aug 2020 21:16:00 GMT",
"version": "v2"
}
] | 2020-10-28 | [
[
"Javid",
"Alireza M.",
""
],
[
"Venkitaraman",
"Arun",
""
],
[
"Skoglund",
"Mikael",
""
],
[
"Chatterjee",
"Saikat",
""
]
] | We design a ReLU-based multilayer neural network by mapping the feature vectors to a higher dimensional space in every layer. We design the weight matrices in every layer to ensure a reduction of the training cost as the number of layers increases. Linear projection to the target in the higher dimensional space leads to a lower training cost if a convex cost is minimized. An $\ell_2$-norm convex constraint is used in the minimization to reduce the generalization error and avoid overfitting. The regularization hyperparameters of the network are derived analytically to guarantee a monotonic decrement of the training cost, and therefore, it eliminates the need for cross-validation to find the regularization hyperparameter in each layer. We show that the proposed architecture is norm-preserving and provides an invertible feature vector, and therefore, can be used to reduce the training cost of any other learning method which employs linear projection to estimate the target. |
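One way to see why a ReLU lifting to a higher-dimensional space can be norm-preserving and invertible, as the record above claims for its designed layers: stack a matrix with its negation so no sign information is lost through the ReLU. A sketch under our own naming, with a generic full-column-rank W standing in for the paper's analytically designed weight matrices.

```python
import numpy as np

def lift_layer(x, W):
    """Lift x to a higher-dimensional space via V = [W; -W] and ReLU.

    Since relu(z) - relu(-z) = z, the pre-activation W @ x is exactly
    recoverable from the output, and the output norm equals ||W @ x||.
    """
    z = W @ x
    return np.concatenate([np.maximum(z, 0.0), np.maximum(-z, 0.0)])

def invert_layer(y, W):
    """Recover x from the lifted features (W assumed full column rank)."""
    m = y.shape[0] // 2
    z = y[:m] - y[m:]          # undo the ReLU split
    return np.linalg.pinv(W) @ z
```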
2402.07007 | Dominik Klein | Dominik K. Klein and Rogelio Ortigosa and Jes\'us Mart\'inez-Frutos
and Oliver Weeger | Nonlinear electro-elastic finite element analysis with neural network
constitutive models | null | null | null | null | cs.CE | http://creativecommons.org/licenses/by-nc-nd/4.0/ | In the present work, the applicability of physics-augmented neural network
(PANN) constitutive models for complex electro-elastic finite element analysis
is demonstrated. For the investigations, PANN models for electro-elastic
material behavior at finite deformations are calibrated to different
synthetically generated datasets, including an analytical isotropic potential,
a homogenised rank-one laminate, and a homogenised metamaterial with a
spherical inclusion. Subsequently, boundary value problems inspired by
engineering applications of composite electro-elastic materials are considered.
Scenarios with large electrically induced deformations and instabilities are
particularly challenging and thus necessitate extensive investigations of the
PANN constitutive models in the context of finite element analyses. First of
all, an excellent prediction quality of the model is required for very general
load cases occurring in the simulation. Furthermore, simulation of large
deformations and instabilities poses challenges to the stability of the
numerical solver, which is closely related to the constitutive model. In all
cases studied, the PANN models yield excellent prediction qualities and a
stable numerical behavior even in highly nonlinear scenarios. This can be
traced back to the PANN models' excellent performance in learning both the
first and second derivatives of the ground-truth electro-elastic potentials,
even though they are only calibrated on the first derivatives. Overall, this work
demonstrates the applicability of PANN constitutive models for the efficient
and robust simulation of engineering applications of composite electro-elastic
materials.
| [
{
"created": "Sat, 10 Feb 2024 18:00:21 GMT",
"version": "v1"
}
] | 2024-02-13 | [
[
"Klein",
"Dominik K.",
""
],
[
"Ortigosa",
"Rogelio",
""
],
[
"Martínez-Frutos",
"Jesús",
""
],
[
"Weeger",
"Oliver",
""
]
] | In the present work, the applicability of physics-augmented neural network (PANN) constitutive models for complex electro-elastic finite element analysis is demonstrated. For the investigations, PANN models for electro-elastic material behavior at finite deformations are calibrated to different synthetically generated datasets, including an analytical isotropic potential, a homogenised rank-one laminate, and a homogenised metamaterial with a spherical inclusion. Subsequently, boundary value problems inspired by engineering applications of composite electro-elastic materials are considered. Scenarios with large electrically induced deformations and instabilities are particularly challenging and thus necessitate extensive investigations of the PANN constitutive models in the context of finite element analyses. First of all, an excellent prediction quality of the model is required for very general load cases occurring in the simulation. Furthermore, simulation of large deformations and instabilities poses challenges to the stability of the numerical solver, which is closely related to the constitutive model. In all cases studied, the PANN models yield excellent prediction qualities and a stable numerical behavior even in highly nonlinear scenarios. This can be traced back to the PANN models' excellent performance in learning both the first and second derivatives of the ground-truth electro-elastic potentials, even though they are only calibrated on the first derivatives. Overall, this work demonstrates the applicability of PANN constitutive models for the efficient and robust simulation of engineering applications of composite electro-elastic materials. |
2401.15726 | MinSeok Seo | Young-Jae Park, Minseok Seo, Doyi Kim, Hyeri Kim, Sanghoon Choi,
Beomkyu Choi, Jeongwon Ryu, Sohee Son, Hae-Gon Jeon, Yeji Choi | Long-Term Typhoon Trajectory Prediction: A Physics-Conditioned Approach
Without Reanalysis Data | This paper was accepted for a Spotlight presentation at ICLR 2024 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | In the face of escalating climate changes, typhoon intensities and their
ensuing damage have surged. Accurate trajectory prediction is crucial for
effective damage control. Traditional physics-based models, while
comprehensive, are computationally intensive and rely heavily on the expertise
of forecasters. Contemporary data-driven methods often rely on reanalysis data,
which can be considered to be the closest to the true representation of weather
conditions. However, reanalysis data is not produced in real-time and requires
time for adjustment because prediction models are calibrated with observational
data. This reanalysis data, such as ERA5, falls short in challenging real-world
situations. Optimal preparedness necessitates predictions at least 72 hours in
advance, beyond the capabilities of standard physics models. In response to
these constraints, we present an approach that harnesses real-time Unified
Model (UM) data, sidestepping the limitations of reanalysis data. Our model
provides predictions at 6-hour intervals for up to 72 hours in advance and
outperforms both state-of-the-art data-driven methods and numerical weather
prediction models. In line with our efforts to mitigate adversities inflicted
by typhoons, we release our preprocessed \textit{PHYSICS TRACK}
dataset, which includes ERA5 reanalysis data, typhoon best-track, and UM
forecast data.
| [
{
"created": "Sun, 28 Jan 2024 18:28:33 GMT",
"version": "v1"
}
] | 2024-01-30 | [
[
"Park",
"Young-Jae",
""
],
[
"Seo",
"Minseok",
""
],
[
"Kim",
"Doyi",
""
],
[
"Kim",
"Hyeri",
""
],
[
"Choi",
"Sanghoon",
""
],
[
"Choi",
"Beomkyu",
""
],
[
"Ryu",
"Jeongwon",
""
],
[
"Son",
"Sohee",
""
],
[
"Jeon",
"Hae-Gon",
""
],
[
"Choi",
"Yeji",
""
]
] | In the face of escalating climate changes, typhoon intensities and their ensuing damage have surged. Accurate trajectory prediction is crucial for effective damage control. Traditional physics-based models, while comprehensive, are computationally intensive and rely heavily on the expertise of forecasters. Contemporary data-driven methods often rely on reanalysis data, which can be considered to be the closest to the true representation of weather conditions. However, reanalysis data is not produced in real-time and requires time for adjustment because prediction models are calibrated with observational data. This reanalysis data, such as ERA5, falls short in challenging real-world situations. Optimal preparedness necessitates predictions at least 72 hours in advance, beyond the capabilities of standard physics models. In response to these constraints, we present an approach that harnesses real-time Unified Model (UM) data, sidestepping the limitations of reanalysis data. Our model provides predictions at 6-hour intervals for up to 72 hours in advance and outperforms both state-of-the-art data-driven methods and numerical weather prediction models. In line with our efforts to mitigate adversities inflicted by typhoons, we release our preprocessed \textit{PHYSICS TRACK} dataset, which includes ERA5 reanalysis data, typhoon best-track, and UM forecast data. |
1910.09383 | Sarath Shekkizhar | Sarath Shekkizhar and Antonio Ortega | Neighborhood and Graph Constructions using Non-Negative Kernel
Regression | 15 pages | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Data-driven neighborhood definitions and graph constructions are often used
in machine learning and signal processing applications. k-nearest
neighbor~(kNN) and $\epsilon$-neighborhood methods are among the most common
methods used for neighborhood selection, due to their computational simplicity.
However, the choice of parameters associated with these methods, such as k and
$\epsilon$, is still ad hoc. We make two main contributions in this paper.
First, we present an alternative view of neighborhood selection, where we show
that neighborhood construction is equivalent to a sparse signal approximation
problem. Second, we propose an algorithm, non-negative kernel regression~(NNK),
for obtaining neighborhoods that lead to better sparse representation. NNK
draws similarities to the orthogonal matching pursuit approach to signal
representation and possesses desirable geometric and theoretical properties.
Experiments demonstrate (i) the robustness of the NNK algorithm for
neighborhood and graph construction, (ii) its ability to adapt the number of
neighbors to the data properties, and (iii) its superior performance in local
neighborhood and graph-based machine learning tasks.
| [
{
"created": "Mon, 21 Oct 2019 13:58:14 GMT",
"version": "v1"
},
{
"created": "Sat, 31 Dec 2022 16:50:57 GMT",
"version": "v2"
},
{
"created": "Sat, 25 Feb 2023 18:25:36 GMT",
"version": "v3"
},
{
"created": "Sun, 16 Apr 2023 04:57:36 GMT",
"version": "v4"
}
] | 2023-04-18 | [
[
"Shekkizhar",
"Sarath",
""
],
[
"Ortega",
"Antonio",
""
]
] | Data-driven neighborhood definitions and graph constructions are often used in machine learning and signal processing applications. k-nearest neighbor~(kNN) and $\epsilon$-neighborhood methods are among the most common methods used for neighborhood selection, due to their computational simplicity. However, the choice of parameters associated with these methods, such as k and $\epsilon$, is still ad hoc. We make two main contributions in this paper. First, we present an alternative view of neighborhood selection, where we show that neighborhood construction is equivalent to a sparse signal approximation problem. Second, we propose an algorithm, non-negative kernel regression~(NNK), for obtaining neighborhoods that lead to better sparse representation. NNK draws similarities to the orthogonal matching pursuit approach to signal representation and possesses desirable geometric and theoretical properties. Experiments demonstrate (i) the robustness of the NNK algorithm for neighborhood and graph construction, (ii) its ability to adapt the number of neighbors to the data properties, and (iii) its superior performance in local neighborhood and graph-based machine learning tasks. |
1412.3670 | Sagar Jha | Devendra Bhave, Sagar Jha, Shankara Narayanan Krishna, Sven Schewe,
Ashutosh Trivedi | Bounded-Rate Multi-Mode Systems Based Motion Planning | 14 pages, 12 figures, HSCC - 2015 | null | null | null | cs.LO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Bounded-rate multi-mode systems are hybrid systems that can switch among a
finite set of modes. Their dynamics are specified by a finite number of
real-valued variables with mode-dependent rates that can vary within given
bounded sets. Given an arbitrary piecewise linear trajectory, we study the
problem of following the trajectory with arbitrary precision, using motion
primitives given as bounded-rate multi-mode systems. We give an algorithm to
solve the problem and show that the problem is co-NP complete. We further prove
that the problem can be solved in polynomial time for multi-mode systems with
fixed dimension. We study the problem with dwell-time requirement and show the
decidability of the problem under certain positivity restriction on the rate
vectors. Finally, we show that introducing structure to the multi-mode systems
leads to undecidability, even when using only a single clock variable.
| [
{
"created": "Tue, 9 Dec 2014 19:26:03 GMT",
"version": "v1"
}
] | 2014-12-12 | [
[
"Bhave",
"Devendra",
""
],
[
"Jha",
"Sagar",
""
],
[
"Krishna",
"Shankara Narayanan",
""
],
[
"Schewe",
"Sven",
""
],
[
"Trivedi",
"Ashutosh",
""
]
] | Bounded-rate multi-mode systems are hybrid systems that can switch among a finite set of modes. Their dynamics are specified by a finite number of real-valued variables with mode-dependent rates that can vary within given bounded sets. Given an arbitrary piecewise linear trajectory, we study the problem of following the trajectory with arbitrary precision, using motion primitives given as bounded-rate multi-mode systems. We give an algorithm to solve the problem and show that the problem is co-NP complete. We further prove that the problem can be solved in polynomial time for multi-mode systems with fixed dimension. We study the problem with dwell-time requirement and show the decidability of the problem under certain positivity restriction on the rate vectors. Finally, we show that introducing structure to the multi-mode systems leads to undecidability, even when using only a single clock variable.
2111.02630 | Junyao Kuang | Junyao Kuang, Caterina Scoglio and Kristin Michel | Feature Learning and Network Structure from Noisy Node Activity Data | null | Phys. Rev. E 106, 064301, 2022 | 10.1103/PhysRevE.106.064301 | null | cs.NI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | In the studies of network structures, much attention has been devoted to
developing approaches to reconstruct networks and predict missing links when
edge-related information is given. However, such approaches are not applicable
when we are only given noisy node activity data with missing values. This work
presents an unsupervised learning framework to learn node vectors and construct
networks from such node activity data. First, we design a scheme to generate
random node sequences from node context sets, which are generated from node
activity data. Then, a three-layer neural network is adopted to train the node
sequences to obtain node vectors, which allow us to construct networks and
capture nodes with synergistic roles. Furthermore, we present an entropy-based
approach to select the most meaningful neighbors for each node in the resulting
network. Finally, the effectiveness of the method is validated through both
synthetic and real data.
| [
{
"created": "Thu, 4 Nov 2021 05:07:28 GMT",
"version": "v1"
},
{
"created": "Mon, 7 Mar 2022 19:13:16 GMT",
"version": "v2"
},
{
"created": "Sat, 3 Dec 2022 03:33:13 GMT",
"version": "v3"
}
] | 2022-12-09 | [
[
"Kuang",
"Junyao",
""
],
[
"Scoglio",
"Caterina",
""
],
[
"Michel",
"Kristin",
""
]
] | In the studies of network structures, much attention has been devoted to developing approaches to reconstruct networks and predict missing links when edge-related information is given. However, such approaches are not applicable when we are only given noisy node activity data with missing values. This work presents an unsupervised learning framework to learn node vectors and construct networks from such node activity data. First, we design a scheme to generate random node sequences from node context sets, which are generated from node activity data. Then, a three-layer neural network is adopted to train the node sequences to obtain node vectors, which allow us to construct networks and capture nodes with synergistic roles. Furthermore, we present an entropy-based approach to select the most meaningful neighbors for each node in the resulting network. Finally, the effectiveness of the method is validated through both synthetic and real data.
2004.00403 | Mark Boss | Mark Boss, Varun Jampani, Kihwan Kim, Hendrik P.A. Lensch, Jan Kautz | Two-shot Spatially-varying BRDF and Shape Estimation | null | null | 10.1109/CVPR42600.2020.00404 | null | cs.CV cs.GR cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Capturing the shape and spatially-varying appearance (SVBRDF) of an object
from images is a challenging task that has applications in both computer vision
and graphics. Traditional optimization-based approaches often need a large
number of images taken from multiple views in a controlled environment. Newer
deep learning-based approaches require only a few input images, but the
reconstruction quality is not on par with optimization techniques. We propose a
novel deep learning architecture with a stage-wise estimation of shape and
SVBRDF. The previous predictions guide each estimation, and a joint refinement
network later refines both SVBRDF and shape. We follow a practical mobile image
capture setting and use unaligned two-shot flash and no-flash images as input.
Both our two-shot image capture and network inference can run on mobile
hardware. We also create a large-scale synthetic training dataset with
domain-randomized geometry and realistic materials. Extensive experiments on
both synthetic and real-world datasets show that our network trained on a
synthetic dataset can generalize well to real-world images. Comparisons with
recent approaches demonstrate the superior performance of the proposed
approach.
| [
{
"created": "Wed, 1 Apr 2020 12:56:13 GMT",
"version": "v1"
}
] | 2021-05-20 | [
[
"Boss",
"Mark",
""
],
[
"Jampani",
"Varun",
""
],
[
"Kim",
"Kihwan",
""
],
[
"Lensch",
"Hendrik P. A.",
""
],
[
"Kautz",
"Jan",
""
]
] | Capturing the shape and spatially-varying appearance (SVBRDF) of an object from images is a challenging task that has applications in both computer vision and graphics. Traditional optimization-based approaches often need a large number of images taken from multiple views in a controlled environment. Newer deep learning-based approaches require only a few input images, but the reconstruction quality is not on par with optimization techniques. We propose a novel deep learning architecture with a stage-wise estimation of shape and SVBRDF. The previous predictions guide each estimation, and a joint refinement network later refines both SVBRDF and shape. We follow a practical mobile image capture setting and use unaligned two-shot flash and no-flash images as input. Both our two-shot image capture and network inference can run on mobile hardware. We also create a large-scale synthetic training dataset with domain-randomized geometry and realistic materials. Extensive experiments on both synthetic and real-world datasets show that our network trained on a synthetic dataset can generalize well to real-world images. Comparisons with recent approaches demonstrate the superior performance of the proposed approach. |
1706.06243 | Luke Miles | Cory Siler, Luke Harold Miles, Judy Goldsmith | The Complexity of Campaigning | Will be presented at the 2017 Algorithmic Decision Theory Conference | null | null | null | cs.AI cs.LO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In "The Logic of Campaigning", Dean and Parikh consider a candidate making
campaign statements to appeal to the voters. They model these statements as
Boolean formulas over variables that represent stances on the issues, and study
optimal candidate strategies under three proposed models of voter preferences
based on the assignments that satisfy these formulas. We prove that voter
utility evaluation is computationally hard under these preference models (in
one case, #P-hard), along with certain problems related to candidate strategic
reasoning. Our results raise questions about the desirable characteristics of a
voter preference model and to what extent a polynomial-time-evaluable function
can capture them.
| [
{
"created": "Tue, 20 Jun 2017 02:28:04 GMT",
"version": "v1"
},
{
"created": "Mon, 17 Jul 2017 21:07:09 GMT",
"version": "v2"
}
] | 2017-07-19 | [
[
"Siler",
"Cory",
""
],
[
"Miles",
"Luke Harold",
""
],
[
"Goldsmith",
"Judy",
""
]
] | In "The Logic of Campaigning", Dean and Parikh consider a candidate making campaign statements to appeal to the voters. They model these statements as Boolean formulas over variables that represent stances on the issues, and study optimal candidate strategies under three proposed models of voter preferences based on the assignments that satisfy these formulas. We prove that voter utility evaluation is computationally hard under these preference models (in one case, #P-hard), along with certain problems related to candidate strategic reasoning. Our results raise questions about the desirable characteristics of a voter preference model and to what extent a polynomial-time-evaluable function can capture them. |
2211.15929 | Guanhong Tao | Guanhong Tao, Zhenting Wang, Siyuan Cheng, Shiqing Ma, Shengwei An,
Yingqi Liu, Guangyu Shen, Zhuo Zhang, Yunshu Mao, Xiangyu Zhang | Backdoor Vulnerabilities in Normally Trained Deep Learning Models | null | null | null | null | cs.CR cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We conduct a systematic study of backdoor vulnerabilities in normally trained
Deep Learning models. They are as dangerous as backdoors injected by data
poisoning because both can be equally exploited. We leverage 20 different types
of injected backdoor attacks in the literature as guidance and study their
correspondences in normally trained models, which we call natural backdoor
vulnerabilities. We find that natural backdoors exist widely, with most
injected backdoor attacks having natural correspondences. We categorize these
natural backdoors and propose a general detection framework. It finds 315
natural backdoors in the 56 normally trained models downloaded from the
Internet, covering all the different categories, while existing scanners
designed for injected backdoors can at most detect 65 backdoors. We also study
the root causes and defense of natural backdoors.
| [
{
"created": "Tue, 29 Nov 2022 04:55:32 GMT",
"version": "v1"
}
] | 2022-11-30 | [
[
"Tao",
"Guanhong",
""
],
[
"Wang",
"Zhenting",
""
],
[
"Cheng",
"Siyuan",
""
],
[
"Ma",
"Shiqing",
""
],
[
"An",
"Shengwei",
""
],
[
"Liu",
"Yingqi",
""
],
[
"Shen",
"Guangyu",
""
],
[
"Zhang",
"Zhuo",
""
],
[
"Mao",
"Yunshu",
""
],
[
"Zhang",
"Xiangyu",
""
]
] | We conduct a systematic study of backdoor vulnerabilities in normally trained Deep Learning models. They are as dangerous as backdoors injected by data poisoning because both can be equally exploited. We leverage 20 different types of injected backdoor attacks in the literature as guidance and study their correspondences in normally trained models, which we call natural backdoor vulnerabilities. We find that natural backdoors exist widely, with most injected backdoor attacks having natural correspondences. We categorize these natural backdoors and propose a general detection framework. It finds 315 natural backdoors in the 56 normally trained models downloaded from the Internet, covering all the different categories, while existing scanners designed for injected backdoors can at most detect 65 backdoors. We also study the root causes and defense of natural backdoors.
2204.05961 | Anya Belz | Anya Belz, Maja Popovi\'c and Simon Mille | Quantified Reproducibility Assessment of NLP Results | To be published in Proceedings of the 60th Annual Meeting of the
Association for Computational Linguistics (ACL'22) | null | null | null | cs.CL | http://creativecommons.org/licenses/by-nc-nd/4.0/ | This paper describes and tests a method for carrying out quantified
reproducibility assessment (QRA) that is based on concepts and definitions from
metrology. QRA produces a single score estimating the degree of reproducibility
of a given system and evaluation measure, on the basis of the scores from, and
differences between, different reproductions. We test QRA on 18 system and
evaluation measure combinations (involving diverse NLP tasks and types of
evaluation), for each of which we have the original results and one to seven
reproduction results. The proposed QRA method produces
degree-of-reproducibility scores that are comparable across multiple
reproductions not only of the same, but of different original studies. We find
that the proposed method facilitates insights into causes of variation between
reproductions, and allows conclusions to be drawn about what changes to system
and/or evaluation design might lead to improved reproducibility.
| [
{
"created": "Tue, 12 Apr 2022 17:22:46 GMT",
"version": "v1"
}
] | 2022-04-13 | [
[
"Belz",
"Anya",
""
],
[
"Popović",
"Maja",
""
],
[
"Mille",
"Simon",
""
]
] | This paper describes and tests a method for carrying out quantified reproducibility assessment (QRA) that is based on concepts and definitions from metrology. QRA produces a single score estimating the degree of reproducibility of a given system and evaluation measure, on the basis of the scores from, and differences between, different reproductions. We test QRA on 18 system and evaluation measure combinations (involving diverse NLP tasks and types of evaluation), for each of which we have the original results and one to seven reproduction results. The proposed QRA method produces degree-of-reproducibility scores that are comparable across multiple reproductions not only of the same, but of different original studies. We find that the proposed method facilitates insights into causes of variation between reproductions, and allows conclusions to be drawn about what changes to system and/or evaluation design might lead to improved reproducibility. |
2303.07247 | Anmol Goel | Sahil Girhepuje, Anmol Goel, Gokul S Krishnan, Shreya Goyal, Satyendra
Pandey, Ponnurangam Kumaraguru and Balaraman Ravindran | Are Models Trained on Indian Legal Data Fair? | Presented at the Symposium on AI and Law (SAIL) 2023 | null | null | null | cs.CL cs.CY | http://creativecommons.org/licenses/by/4.0/ | Recent advances and applications of language technology and artificial
intelligence have enabled much success across multiple domains like law,
medicine and mental health. AI-based Language Models, like Judgement Prediction,
have recently been proposed for the legal sector. However, these models are
rife with encoded social biases picked up from the training data. While bias
and fairness have been studied across NLP, most studies primarily locate
themselves within a Western context. In this work, we present an initial
investigation of fairness from the Indian perspective in the legal domain. We
highlight the propagation of learnt algorithmic biases in the bail prediction
task for models trained on Hindi legal documents. We evaluate the fairness gap
using demographic parity and show that a decision tree model trained for the
bail prediction task has an overall fairness disparity of 0.237 between input
features associated with Hindus and Muslims. Additionally, we highlight the
need for further research and studies in the avenues of fairness/bias in
applying AI in the legal sector with a specific focus on the Indian context.
| [
{
"created": "Mon, 13 Mar 2023 16:20:33 GMT",
"version": "v1"
},
{
"created": "Tue, 14 Mar 2023 17:40:21 GMT",
"version": "v2"
},
{
"created": "Tue, 14 May 2024 08:44:37 GMT",
"version": "v3"
}
] | 2024-05-15 | [
[
"Girhepuje",
"Sahil",
""
],
[
"Goel",
"Anmol",
""
],
[
"Krishnan",
"Gokul S",
""
],
[
"Goyal",
"Shreya",
""
],
[
"Pandey",
"Satyendra",
""
],
[
"Kumaraguru",
"Ponnurangam",
""
],
[
"Ravindran",
"Balaraman",
""
]
] | Recent advances and applications of language technology and artificial intelligence have enabled much success across multiple domains like law, medicine and mental health. AI-based Language Models, like Judgement Prediction, have recently been proposed for the legal sector. However, these models are rife with encoded social biases picked up from the training data. While bias and fairness have been studied across NLP, most studies primarily locate themselves within a Western context. In this work, we present an initial investigation of fairness from the Indian perspective in the legal domain. We highlight the propagation of learnt algorithmic biases in the bail prediction task for models trained on Hindi legal documents. We evaluate the fairness gap using demographic parity and show that a decision tree model trained for the bail prediction task has an overall fairness disparity of 0.237 between input features associated with Hindus and Muslims. Additionally, we highlight the need for further research and studies in the avenues of fairness/bias in applying AI in the legal sector with a specific focus on the Indian context.
1909.01627 | Cinzia Di Giusto | Cinzia Di Giusto (C&A), Cinzia Giusto (SARDES), Laetitia Laversa
(C&A), Etienne Lozes | On the k-synchronizability of systems | null | null | null | null | cs.FL cs.CL cs.SC cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we work on the notion of k-synchronizability: a system is
k-synchronizable if any of its executions, up to reordering causally
independent actions, can be divided into a succession of k-bounded interaction
phases. We show two results (both for mailbox and peer-to-peer automata):
first, the reachability problem is decidable for k-synchronizable systems;
second, the membership problem (whether a given system is k-synchronizable) is
decidable as well. Our proofs fix several important issues in previous attempts
to prove these two results for mailbox automata.
| [
{
"created": "Wed, 4 Sep 2019 08:58:53 GMT",
"version": "v1"
},
{
"created": "Tue, 21 Jan 2020 14:24:45 GMT",
"version": "v2"
}
] | 2020-01-22 | [
[
"Di Giusto",
"Cinzia",
"",
"C&A"
],
[
"Giusto",
"Cinzia",
"",
"SARDES"
],
[
"Laversa",
"Laetitia",
"",
"C&A"
],
[
"Lozes",
"Etienne",
""
]
] | In this paper, we work on the notion of k-synchronizability: a system is k-synchronizable if any of its executions, up to reordering causally independent actions, can be divided into a succession of k-bounded interaction phases. We show two results (both for mailbox and peer-to-peer automata): first, the reachability problem is decidable for k-synchronizable systems; second, the membership problem (whether a given system is k-synchronizable) is decidable as well. Our proofs fix several important issues in previous attempts to prove these two results for mailbox automata. |
2403.17312 | Youpeng Zhao | Youpeng Zhao, Di Wu, Jun Wang | ALISA: Accelerating Large Language Model Inference via Sparsity-Aware KV
Caching | ISCA 2024 | null | null | null | cs.AI cs.LG cs.PF | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Transformer architecture has significantly advanced natural language
processing (NLP) and has been foundational in developing large language models
(LLMs) such as LLaMA and OPT, which have come to dominate a broad range of NLP
tasks. Despite their superior accuracy, LLMs present unique challenges in
practical inference owing to their compute- and memory-intensive nature. Thanks
to the autoregressive characteristic of LLM inference, KV caching for the
attention layers in Transformers can effectively accelerate LLM inference by
substituting quadratic-complexity computation with linear-complexity memory
accesses. Yet, this approach requires increasing memory as demand grows for
processing longer sequences. The overhead leads to reduced throughput due to
I/O bottlenecks and even out-of-memory errors, particularly on
resource-constrained systems like a single commodity GPU. In this paper, we
propose ALISA, a novel algorithm-system co-design solution to address the
challenges imposed by KV caching. On the algorithm level, ALISA prioritizes
tokens that are most important in generating a new token via a Sparse Window
Attention (SWA) algorithm. SWA introduces high sparsity in attention layers and
reduces the memory footprint of KV caching at negligible accuracy loss. On the
system level, ALISA employs three-phase token-level dynamical scheduling and
optimizes the trade-off between caching and recomputation, thus maximizing the
overall performance in resource-constrained systems. In a single GPU-CPU
system, we demonstrate that under varying workloads, ALISA improves the
throughput of baseline systems such as FlexGen and vLLM by up to 3X and 1.9X,
respectively.
| [
{
"created": "Tue, 26 Mar 2024 01:46:34 GMT",
"version": "v1"
}
] | 2024-03-27 | [
[
"Zhao",
"Youpeng",
""
],
[
"Wu",
"Di",
""
],
[
"Wang",
"Jun",
""
]
] | The Transformer architecture has significantly advanced natural language processing (NLP) and has been foundational in developing large language models (LLMs) such as LLaMA and OPT, which have come to dominate a broad range of NLP tasks. Despite their superior accuracy, LLMs present unique challenges in practical inference owing to their compute- and memory-intensive nature. Thanks to the autoregressive characteristic of LLM inference, KV caching for the attention layers in Transformers can effectively accelerate LLM inference by substituting quadratic-complexity computation with linear-complexity memory accesses. Yet, this approach requires increasing memory as demand grows for processing longer sequences. The overhead leads to reduced throughput due to I/O bottlenecks and even out-of-memory errors, particularly on resource-constrained systems like a single commodity GPU. In this paper, we propose ALISA, a novel algorithm-system co-design solution to address the challenges imposed by KV caching. On the algorithm level, ALISA prioritizes tokens that are most important in generating a new token via a Sparse Window Attention (SWA) algorithm. SWA introduces high sparsity in attention layers and reduces the memory footprint of KV caching at negligible accuracy loss. On the system level, ALISA employs three-phase token-level dynamical scheduling and optimizes the trade-off between caching and recomputation, thus maximizing the overall performance in resource-constrained systems. In a single GPU-CPU system, we demonstrate that under varying workloads, ALISA improves the throughput of baseline systems such as FlexGen and vLLM by up to 3X and 1.9X, respectively.
2109.00165 | Jipeng Qiang | Xinyu Lu and Jipeng Qiang and Yun Li and Yunhao Yuan and Yi Zhu | An Unsupervised Method for Building Sentence Simplification Corpora in
Multiple Languages | null | Findings of the Association for Computational Linguistics: EMNLP
2021 | null | null | cs.CL cs.IR | http://creativecommons.org/licenses/by/4.0/ | Parallel sentence simplification (SS) corpora are scarce for
neural SS modeling. We propose an unsupervised method to build SS corpora from
large-scale bilingual translation corpora, alleviating the need for SS
supervised corpora. Our method is motivated by the following two findings:
neural machine translation models usually tend to generate more high-frequency
tokens, and text complexity levels differ between the source and target
languages of a translation corpus. By pairing the source sentences of a
translation corpus with the translations of their references in a
bridge language, we can construct large-scale pseudo parallel SS data. Then, we
keep these sentence pairs with a higher complexity difference as SS sentence
pairs. Building SS corpora with an unsupervised approach satisfies the
expectation that the aligned sentences preserve the same meaning while
differing in text complexity level. Experimental results show that SS methods
trained on our corpora achieve state-of-the-art results and significantly
outperform previous results on the English benchmark WikiLarge.
| [
{
"created": "Wed, 1 Sep 2021 03:30:06 GMT",
"version": "v1"
}
] | 2021-09-02 | [
[
"Lu",
"Xinyu",
""
],
[
"Qiang",
"Jipeng",
""
],
[
"Li",
"Yun",
""
],
[
"Yuan",
"Yunhao",
""
],
[
"Zhu",
"Yi",
""
]
] | Parallel sentence simplification (SS) corpora are scarce for neural SS modeling. We propose an unsupervised method to build SS corpora from large-scale bilingual translation corpora, alleviating the need for SS supervised corpora. Our method is motivated by the following two findings: neural machine translation models usually tend to generate more high-frequency tokens, and text complexity levels differ between the source and target languages of a translation corpus. By pairing the source sentences of a translation corpus with the translations of their references in a bridge language, we can construct large-scale pseudo parallel SS data. Then, we keep these sentence pairs with a higher complexity difference as SS sentence pairs. Building SS corpora with an unsupervised approach satisfies the expectation that the aligned sentences preserve the same meaning while differing in text complexity level. Experimental results show that SS methods trained on our corpora achieve state-of-the-art results and significantly outperform previous results on the English benchmark WikiLarge.
2407.00985 | Takayuki Nishimura | Takayuki Nishimura, Katsuyuki Kuyo, Motonari Kambara and Komei Sugiura | Object Segmentation from Open-Vocabulary Manipulation Instructions Based
on Optimal Transport Polygon Matching with Multimodal Foundation Models | Accepted for presentation at IROS2024 | null | null | null | cs.RO cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider the task of generating segmentation masks for the target object
from an object manipulation instruction, which allows users to give open
vocabulary instructions to domestic service robots. Conventional segmentation
generation approaches often fail to account for objects outside the camera's
field of view and cases in which the order of vertices differs but still
represents the same polygon, which leads to erroneous mask generation. In this
study, we propose a novel method that generates segmentation masks from open
vocabulary instructions. We implement a novel loss function using optimal
transport to prevent significant loss where the order of vertices differs but
still represents the same polygon. To evaluate our approach, we constructed a
new dataset based on the REVERIE dataset and Matterport3D dataset. The results
demonstrated the effectiveness of the proposed method compared with existing
mask generation methods. Remarkably, our best model achieved a +16.32%
improvement on the dataset compared with a representative polygon-based method.
| [
{
"created": "Mon, 1 Jul 2024 05:48:48 GMT",
"version": "v1"
}
] | 2024-07-02 | [
[
"Nishimura",
"Takayuki",
""
],
[
"Kuyo",
"Katsuyuki",
""
],
[
"Kambara",
"Motonari",
""
],
[
"Sugiura",
"Komei",
""
]
] | We consider the task of generating segmentation masks for the target object from an object manipulation instruction, which allows users to give open vocabulary instructions to domestic service robots. Conventional segmentation generation approaches often fail to account for objects outside the camera's field of view and cases in which the order of vertices differs but still represents the same polygon, which leads to erroneous mask generation. In this study, we propose a novel method that generates segmentation masks from open vocabulary instructions. We implement a novel loss function using optimal transport to prevent significant loss where the order of vertices differs but still represents the same polygon. To evaluate our approach, we constructed a new dataset based on the REVERIE dataset and Matterport3D dataset. The results demonstrated the effectiveness of the proposed method compared with existing mask generation methods. Remarkably, our best model achieved a +16.32% improvement on the dataset compared with a representative polygon-based method. |
1903.03453 | Sulaiman Abo Diab | Sulaiman Y. Abo Diab | Geometry Mapping, Complete Pascal Scheme versus Standard Bilinear
Approach | 23 pages, 9 Figures | null | null | null | cs.NA | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents a complete Pascal interpolation scheme for use in the
plane geometry mapping applied in association with numerical methods. The
geometry of a domain element is approximated by a complete Pascal polynomial.
The interpolation procedure is formulated in a natural coordinate system. It
also presents the methodology of constructing shape functions of Pascal type
and establishing a transformation relation between natural and Cartesian
variables. The performance of the presented approach is investigated firstly by
calculating the geometrical properties of an arbitrary quadrilateral
cross-section like area and moments of inertia and comparing the results with
the exact values and with those provided by the standard linear approach and a
serendipity family approach. Secondly, the assessment of the scheme follows
using a straight-sided, compatible quadrilateral finite element for plate
bending of which geometry is approximated by a complete set of second order
with six free parameters. Triangular and quadrilateral shaped plates with
different boundary conditions are computed and compared with well-known results
in the literature. The presented procedure is of general applicability for
elements with curved edges and not limited to straight-sided edges in the
framework of numerical methods.
| [
{
"created": "Wed, 6 Mar 2019 20:46:13 GMT",
"version": "v1"
}
] | 2019-03-11 | [
[
"Diab",
"Sulaiman Y. Abo",
""
]
] | This paper presents a complete Pascal interpolation scheme for use in the plane geometry mapping applied in association with numerical methods. The geometry of a domain element is approximated by a complete Pascal polynomial. The interpolation procedure is formulated in a natural coordinate system. It also presents the methodology of constructing shape functions of Pascal type and establishing a transformation relation between natural and Cartesian variables. The performance of the presented approach is investigated firstly by calculating the geometrical properties of an arbitrary quadrilateral cross-section like area and moments of inertia and comparing the results with the exact values and with those provided by the standard linear approach and a serendipity family approach. Secondly, the assessment of the scheme follows using a straight-sided, compatible quadrilateral finite element for plate bending of which geometry is approximated by a complete set of second order with six free parameters. Triangular and quadrilateral shaped plates with different boundary conditions are computed and compared with well-known results in the literature. The presented procedure is of general applicability for elements with curved edges and not limited to straight-sided edges in the framework of numerical methods. |
2204.14116 | Benjamin Provan-Bessell | Benjamin Provan-Bessell, Marco Dalla, Andrea Visentin, Barry
O'Sullivan | SATfeatPy -- A Python-based Feature Extraction System for Satisfiability | 8 pages, 2 figures, code available at
https://github.com/bprovanbessell/SATfeatPy | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Feature extraction is a fundamental task in the application of machine
learning methods to SAT solving. It is used in algorithm selection and
configuration for solver portfolios and satisfiability classification. Many
approaches have been proposed to extract meaningful attributes from CNF
instances. Most of them lack a working/updated implementation, and the limited
descriptions lack clarity affecting the reproducibility. Furthermore, the
literature misses a comparison among the features. This paper introduces
SATfeatPy, a library that offers feature extraction techniques for SAT problems
in the CNF form. This package offers the implementation of all the structural
and statistical features from three major papers in the field. The library is
provided in an up-to-date, easy-to-use Python package alongside a detailed
feature description. We show the high accuracy of SAT/UNSAT and problem
category classification, using five sets of features generated using our
library from a dataset of 3000 SAT and UNSAT instances, over ten different
classes of problems. Finally, we compare the usefulness of the features and
importance for predicting a SAT instance's original structure in an ablation
study.
| [
{
"created": "Fri, 29 Apr 2022 14:10:01 GMT",
"version": "v1"
}
] | 2022-05-02 | [
[
"Provan-Bessell",
"Benjamin",
""
],
[
"Dalla",
"Marco",
""
],
[
"Visentin",
"Andrea",
""
],
[
"O'Sullivan",
"Barry",
""
]
] | Feature extraction is a fundamental task in the application of machine learning methods to SAT solving. It is used in algorithm selection and configuration for solver portfolios and satisfiability classification. Many approaches have been proposed to extract meaningful attributes from CNF instances. Most of them lack a working/updated implementation, and the limited descriptions lack clarity affecting the reproducibility. Furthermore, the literature misses a comparison among the features. This paper introduces SATfeatPy, a library that offers feature extraction techniques for SAT problems in the CNF form. This package offers the implementation of all the structural and statistical features from three major papers in the field. The library is provided in an up-to-date, easy-to-use Python package alongside a detailed feature description. We show the high accuracy of SAT/UNSAT and problem category classification, using five sets of features generated using our library from a dataset of 3000 SAT and UNSAT instances, over ten different classes of problems. Finally, we compare the usefulness of the features and importance for predicting a SAT instance's original structure in an ablation study. |
2302.05574 | Junru Lu | Junru Lu, Jiazheng Li, Byron C. Wallace, Yulan He, Gabriele Pergola | NapSS: Paragraph-level Medical Text Simplification via Narrative
Prompting and Sentence-matching Summarization | Findings of EACL 2023 | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Accessing medical literature is difficult for laypeople as the content is
written for specialists and contains medical jargon. Automated text
simplification methods offer a potential means to address this issue. In this
work, we propose a summarize-then-simplify two-stage strategy, which we call
NapSS, identifying the relevant content to simplify while ensuring that the
original narrative flow is preserved. In this approach, we first generate
reference summaries via sentence matching between the original and the
simplified abstracts. These summaries are then used to train an extractive
summarizer, learning the most relevant content to be simplified. Then, to
ensure the narrative consistency of the simplified text, we synthesize
auxiliary narrative prompts combining key phrases derived from the syntactical
analyses of the original text. Our model achieves results significantly better
than the seq2seq baseline on an English medical corpus, yielding 3%~4% absolute
improvements in terms of lexical similarity, and providing a further 1.1%
improvement of SARI score when combined with the baseline. We also highlight
shortcomings of existing evaluation methods, and introduce new metrics that
take into account both lexical and high-level semantic similarity. A human
evaluation conducted on a random sample of the test set further establishes the
effectiveness of the proposed approach. Codes and models are released here:
https://github.com/LuJunru/NapSS.
| [
{
"created": "Sat, 11 Feb 2023 02:20:25 GMT",
"version": "v1"
}
] | 2023-02-14 | [
[
"Lu",
"Junru",
""
],
[
"Li",
"Jiazheng",
""
],
[
"Wallace",
"Byron C.",
""
],
[
"He",
"Yulan",
""
],
[
"Pergola",
"Gabriele",
""
]
] | Accessing medical literature is difficult for laypeople as the content is written for specialists and contains medical jargon. Automated text simplification methods offer a potential means to address this issue. In this work, we propose a summarize-then-simplify two-stage strategy, which we call NapSS, identifying the relevant content to simplify while ensuring that the original narrative flow is preserved. In this approach, we first generate reference summaries via sentence matching between the original and the simplified abstracts. These summaries are then used to train an extractive summarizer, learning the most relevant content to be simplified. Then, to ensure the narrative consistency of the simplified text, we synthesize auxiliary narrative prompts combining key phrases derived from the syntactical analyses of the original text. Our model achieves results significantly better than the seq2seq baseline on an English medical corpus, yielding 3%~4% absolute improvements in terms of lexical similarity, and providing a further 1.1% improvement of SARI score when combined with the baseline. We also highlight shortcomings of existing evaluation methods, and introduce new metrics that take into account both lexical and high-level semantic similarity. A human evaluation conducted on a random sample of the test set further establishes the effectiveness of the proposed approach. Codes and models are released here: https://github.com/LuJunru/NapSS. |
1803.08298 | Carlos Lopez | Carlos F. Lopez and Cheng-Xiang Wang | A Study of Delay Drifts on Massive MIMO Wideband Channel Models | 7 pages, 5 figures. 22nd International ITG Workshop on Smart Antennas
(WSA 2018) | null | null | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we study the effects of the variations of the propagation
delay over large-scale antenna-arrays used in massive multiple-input
multiple-output (MIMO) wideband communication systems on the statistical
properties of the channel. Due to its simplicity and popularity, the Elliptical
geometry-based stochastic channel model (GBSM) is employed to demonstrate new
non-stationary properties of the channel in the frequency and spatial domains
caused by the drift of delays. In addition, we show that the time of travel of
multi-path components (MPCs) over large-scale arrays may result in overlooked
frequency and spatial decorrelation effects. These are theoretically
demonstrated by deriving the space-time-frequency correlation functions
(STFCFs) of both narrowband and wideband Elliptical models. Closed-form
expressions of the array-variant frequency correlation function (FCF), power
delay profile (PDP), mean delay, and delay spread of single- and multi-confocal
Elliptical models are derived when the angles of arrival (AOAs) are von Mises
distributed. In such conditions, we find that the large dimensions of the
antenna array may limit the narrowband characteristic of the single-ellipse
model and alter the wideband characteristics (PDP and FCF) of the
multi-confocal Elliptical channel model. Although we present and analyze
numerical and simulation results for a particular GBSM, similar conclusions can
be extended to other GBSMs.
| [
{
"created": "Thu, 22 Mar 2018 10:23:29 GMT",
"version": "v1"
}
] | 2018-03-23 | [
[
"Lopez",
"Carlos F.",
""
],
[
"Wang",
"Cheng-Xiang",
""
]
] | In this paper, we study the effects of the variations of the propagation delay over large-scale antenna-arrays used in massive multiple-input multiple-output (MIMO) wideband communication systems on the statistical properties of the channel. Due to its simplicity and popularity, the Elliptical geometry-based stochastic channel model (GBSM) is employed to demonstrate new non-stationary properties of the channel in the frequency and spatial domains caused by the drift of delays. In addition, we show that the time of travel of multi-path components (MPCs) over large-scale arrays may result in overlooked frequency and spatial decorrelation effects. These are theoretically demonstrated by deriving the space-time-frequency correlation functions (STFCFs) of both narrowband and wideband Elliptical models. Closed-form expressions of the array-variant frequency correlation function (FCF), power delay profile (PDP), mean delay, and delay spread of single- and multi-confocal Elliptical models are derived when the angles of arrival (AOAs) are von Mises distributed. In such conditions, we find that the large dimensions of the antenna array may limit the narrowband characteristic of the single-ellipse model and alter the wideband characteristics (PDP and FCF) of the multi-confocal Elliptical channel model. Although we present and analyze numerical and simulation results for a particular GBSM, similar conclusions can be extended to other GBSMs. |
1502.01761 | Tom Lee | Tom Lee, Sanja Fidler, Alex Levinshtein, Cristian Sminchisescu, and
Sven Dickinson | A Framework for Symmetric Part Detection in Cluttered Scenes | 10 pages, 8 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The role of symmetry in computer vision has waxed and waned in importance
during the evolution of the field from its earliest days. At first figuring
prominently in support of bottom-up indexing, it fell out of favor as shape
gave way to appearance and recognition gave way to detection. With a strong
prior in the form of a target object, the role of the weaker priors offered by
perceptual grouping was greatly diminished. However, as the field returns to
the problem of recognition from a large database, the bottom-up recovery of the
parts that make up the objects in a cluttered scene is critical for their
recognition. The medial axis community has long exploited the ubiquitous
regularity of symmetry as a basis for the decomposition of a closed contour
into medial parts. However, today's recognition systems are faced with
cluttered scenes, and the assumption that a closed contour exists, i.e. that
figure-ground segmentation has been solved, renders much of the medial axis
community's work inapplicable. In this article, we review a computational
framework, previously reported in Lee et al. (2013), Levinshtein et al. (2009,
2013), that bridges the representation power of the medial axis and the need to
recover and group an object's parts in a cluttered scene. Our framework is
rooted in the idea that a maximally inscribed disc, the building block of a
medial axis, can be modeled as a compact superpixel in the image. We evaluate
the method on images of cluttered scenes.
| [
{
"created": "Thu, 5 Feb 2015 23:51:16 GMT",
"version": "v1"
}
] | 2015-02-09 | [
[
"Lee",
"Tom",
""
],
[
"Fidler",
"Sanja",
""
],
[
"Levinshtein",
"Alex",
""
],
[
"Sminchisescu",
"Cristian",
""
],
[
"Dickinson",
"Sven",
""
]
] | The role of symmetry in computer vision has waxed and waned in importance during the evolution of the field from its earliest days. At first figuring prominently in support of bottom-up indexing, it fell out of favor as shape gave way to appearance and recognition gave way to detection. With a strong prior in the form of a target object, the role of the weaker priors offered by perceptual grouping was greatly diminished. However, as the field returns to the problem of recognition from a large database, the bottom-up recovery of the parts that make up the objects in a cluttered scene is critical for their recognition. The medial axis community has long exploited the ubiquitous regularity of symmetry as a basis for the decomposition of a closed contour into medial parts. However, today's recognition systems are faced with cluttered scenes, and the assumption that a closed contour exists, i.e. that figure-ground segmentation has been solved, renders much of the medial axis community's work inapplicable. In this article, we review a computational framework, previously reported in Lee et al. (2013), Levinshtein et al. (2009, 2013), that bridges the representation power of the medial axis and the need to recover and group an object's parts in a cluttered scene. Our framework is rooted in the idea that a maximally inscribed disc, the building block of a medial axis, can be modeled as a compact superpixel in the image. We evaluate the method on images of cluttered scenes. |
1201.2605 | Zhenwen Dai | Zhenwen Dai and J\"org L\"ucke | Autonomous Cleaning of Corrupted Scanned Documents - A Generative
Modeling Approach | oral presentation and Google Student Travel Award; IEEE conference on
Computer Vision and Pattern Recognition 2012 | null | 10.1109/TPAMI.2014.2313126 | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study the task of cleaning scanned text documents that are strongly
corrupted by dirt such as manual line strokes, spilled ink etc. We aim at
autonomously removing dirt from a single letter-size page based only on the
information the page contains. Our approach, therefore, has to learn character
representations without supervision and requires a mechanism to distinguish
learned representations from irregular patterns. To learn character
representations, we use a probabilistic generative model parameterizing pattern
features, feature variances, the features' planar arrangements, and pattern
frequencies. The latent variables of the model describe pattern class, pattern
position, and the presence or absence of individual pattern features. The model
parameters are optimized using a novel variational EM approximation. After
learning, the parameters represent, independently of their absolute position,
planar feature arrangements and their variances. A quality measure defined
based on the learned representation then allows for an autonomous
discrimination between regular character patterns and the irregular patterns
making up the dirt. The irregular patterns can thus be removed to clean the
document. For a full Latin alphabet we found that a single page does not
contain sufficiently many character examples. However, even if heavily
corrupted by dirt, we show that a page containing a lower number of character
types can efficiently and autonomously be cleaned solely based on the
structural regularity of the characters it contains. In different examples
using characters from different alphabets, we demonstrate generality of the
approach and discuss its implications for future developments.
| [
{
"created": "Thu, 12 Jan 2012 16:09:10 GMT",
"version": "v1"
},
{
"created": "Mon, 2 Jul 2012 12:42:01 GMT",
"version": "v2"
}
] | 2014-10-21 | [
[
"Dai",
"Zhenwen",
""
],
[
"Lücke",
"Jörg",
""
]
] | We study the task of cleaning scanned text documents that are strongly corrupted by dirt such as manual line strokes, spilled ink etc. We aim at autonomously removing dirt from a single letter-size page based only on the information the page contains. Our approach, therefore, has to learn character representations without supervision and requires a mechanism to distinguish learned representations from irregular patterns. To learn character representations, we use a probabilistic generative model parameterizing pattern features, feature variances, the features' planar arrangements, and pattern frequencies. The latent variables of the model describe pattern class, pattern position, and the presence or absence of individual pattern features. The model parameters are optimized using a novel variational EM approximation. After learning, the parameters represent, independently of their absolute position, planar feature arrangements and their variances. A quality measure defined based on the learned representation then allows for an autonomous discrimination between regular character patterns and the irregular patterns making up the dirt. The irregular patterns can thus be removed to clean the document. For a full Latin alphabet we found that a single page does not contain sufficiently many character examples. However, even if heavily corrupted by dirt, we show that a page containing a lower number of character types can efficiently and autonomously be cleaned solely based on the structural regularity of the characters it contains. In different examples using characters from different alphabets, we demonstrate generality of the approach and discuss its implications for future developments. |
2302.05762 | Qiwei Han | Fynn Oldenburg, Qiwei Han, Maximilian Kaiser | Interpretable Deep Learning for Forecasting Online Advertising Costs:
Insights from the Competitive Bidding Landscape | Accepted at AAAI 2023 Web for Advertising Workshop, 12 pages, 8
figures, 4 tables | null | null | null | cs.LG cs.AI cs.SI | http://creativecommons.org/licenses/by/4.0/ | As advertisers increasingly shift their budgets toward digital advertising,
forecasting advertising costs is essential for making budget plans to optimize
marketing campaign returns. In this paper, we perform a comprehensive study
using a variety of time-series forecasting methods to predict daily average
cost-per-click (CPC) in the online advertising market. We show that forecasting
advertising costs would benefit from multivariate models using covariates from
competitors' CPC development identified through time-series clustering. We
further interpret the results by analyzing feature importance and temporal
attention. Finally, we show that our approach has several advantages over
models that individual advertisers might build based solely on their collected
data.
| [
{
"created": "Sat, 11 Feb 2023 19:26:17 GMT",
"version": "v1"
}
] | 2023-02-14 | [
[
"Oldenburg",
"Fynn",
""
],
[
"Han",
"Qiwei",
""
],
[
"Kaiser",
"Maximilian",
""
]
] | As advertisers increasingly shift their budgets toward digital advertising, forecasting advertising costs is essential for making budget plans to optimize marketing campaign returns. In this paper, we perform a comprehensive study using a variety of time-series forecasting methods to predict daily average cost-per-click (CPC) in the online advertising market. We show that forecasting advertising costs would benefit from multivariate models using covariates from competitors' CPC development identified through time-series clustering. We further interpret the results by analyzing feature importance and temporal attention. Finally, we show that our approach has several advantages over models that individual advertisers might build based solely on their collected data. |
1609.05307 | Hung Pham | Hung Pham, Quang-Cuong Pham | On the Structure of the Time-Optimal Path Parameterization Problem with
Third-Order Constraints | 8 pages, 6 figures, ICRA 2017 | null | 10.1109/ICRA.2017.7989084 | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Finding the Time-Optimal Parameterization of a Path (TOPP) subject to
second-order constraints (e.g. acceleration, torque, contact stability, etc.)
is an important and well-studied problem in robotics. In comparison, TOPP
subject to third-order constraints (e.g. jerk, torque rate, etc.) has received
far less attention and remains largely open. In this paper, we investigate the
structure of the TOPP problem with third-order constraints. In particular, we
identify two major difficulties: (i) how to smoothly connect optimal profiles,
and (ii) how to address singularities, which stop profile integration
prematurely. We propose a new algorithm, TOPP3, which addresses these two
difficulties and thereby constitutes an important milestone towards an
efficient computational solution to TOPP with third-order constraints.
| [
{
"created": "Sat, 17 Sep 2016 09:27:46 GMT",
"version": "v1"
},
{
"created": "Mon, 20 Feb 2017 12:11:22 GMT",
"version": "v2"
},
{
"created": "Tue, 19 Sep 2017 12:08:31 GMT",
"version": "v3"
}
] | 2017-09-20 | [
[
"Pham",
"Hung",
""
],
[
"Pham",
"Quang-Cuong",
""
]
] | Finding the Time-Optimal Parameterization of a Path (TOPP) subject to second-order constraints (e.g. acceleration, torque, contact stability, etc.) is an important and well-studied problem in robotics. In comparison, TOPP subject to third-order constraints (e.g. jerk, torque rate, etc.) has received far less attention and remains largely open. In this paper, we investigate the structure of the TOPP problem with third-order constraints. In particular, we identify two major difficulties: (i) how to smoothly connect optimal profiles, and (ii) how to address singularities, which stop profile integration prematurely. We propose a new algorithm, TOPP3, which addresses these two difficulties and thereby constitutes an important milestone towards an efficient computational solution to TOPP with third-order constraints. |
2210.05335 | Junjie Wang | Yatai Ji, Junjie Wang, Yuan Gong, Lin Zhang, Yanru Zhu, Hongfa Wang,
Jiaxing Zhang, Tetsuya Sakai, Yujiu Yang | MAP: Multimodal Uncertainty-Aware Vision-Language Pre-training Model | CVPR 2023 Main Track Long Paper | null | null | null | cs.CV cs.CL cs.MM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multimodal semantic understanding often has to deal with uncertainty, which
means the obtained messages tend to refer to multiple targets. Such uncertainty
is problematic for our interpretation, including inter- and intra-modal
uncertainty. Little effort has studied the modeling of this uncertainty,
particularly in pre-training on unlabeled datasets and fine-tuning in
task-specific downstream datasets. In this paper, we project the
representations of all modalities as probabilistic distributions via a
Probability Distribution Encoder (PDE) by utilizing sequence-level
interactions. Compared to the existing deterministic methods, such uncertainty
modeling can convey richer multimodal semantic information and more complex
relationships. Furthermore, we integrate uncertainty modeling with popular
pre-training frameworks and propose suitable pre-training tasks:
Distribution-based Vision-Language Contrastive learning (D-VLC),
Distribution-based Masked Language Modeling (D-MLM), and Distribution-based
Image-Text Matching (D-ITM). The fine-tuned models are applied to challenging
downstream tasks, including image-text retrieval, visual question answering,
visual reasoning, and visual entailment, and achieve state-of-the-art results.
| [
{
"created": "Tue, 11 Oct 2022 10:54:54 GMT",
"version": "v1"
},
{
"created": "Sun, 26 Mar 2023 04:54:25 GMT",
"version": "v2"
},
{
"created": "Thu, 20 Jul 2023 16:24:14 GMT",
"version": "v3"
}
] | 2023-07-21 | [
[
"Ji",
"Yatai",
""
],
[
"Wang",
"Junjie",
""
],
[
"Gong",
"Yuan",
""
],
[
"Zhang",
"Lin",
""
],
[
"Zhu",
"Yanru",
""
],
[
"Wang",
"Hongfa",
""
],
[
"Zhang",
"Jiaxing",
""
],
[
"Sakai",
"Tetsuya",
""
],
[
"Yang",
"Yujiu",
""
]
] | Multimodal semantic understanding often has to deal with uncertainty, which means the obtained messages tend to refer to multiple targets. Such uncertainty is problematic for our interpretation, including inter- and intra-modal uncertainty. Little effort has studied the modeling of this uncertainty, particularly in pre-training on unlabeled datasets and fine-tuning in task-specific downstream datasets. In this paper, we project the representations of all modalities as probabilistic distributions via a Probability Distribution Encoder (PDE) by utilizing sequence-level interactions. Compared to the existing deterministic methods, such uncertainty modeling can convey richer multimodal semantic information and more complex relationships. Furthermore, we integrate uncertainty modeling with popular pre-training frameworks and propose suitable pre-training tasks: Distribution-based Vision-Language Contrastive learning (D-VLC), Distribution-based Masked Language Modeling (D-MLM), and Distribution-based Image-Text Matching (D-ITM). The fine-tuned models are applied to challenging downstream tasks, including image-text retrieval, visual question answering, visual reasoning, and visual entailment, and achieve state-of-the-art results. |
2001.04351 | Liang Xu | Liang Xu, Yu tong, Qianqian Dong, Yixuan Liao, Cong Yu, Yin Tian,
Weitang Liu, Lu Li, Caiquan Liu, Xuanwei Zhang | CLUENER2020: Fine-grained Named Entity Recognition Dataset and Benchmark
for Chinese | 6 pages, 5 tables, 1 figure | null | null | null | cs.CL cs.IR cs.LG | http://creativecommons.org/licenses/by/4.0/ | In this paper, we introduce the NER dataset from CLUE organization
(CLUENER2020), a well-defined fine-grained dataset for named entity recognition
in Chinese. CLUENER2020 contains 10 categories. Apart from common labels like
person, organization, and location, it contains more diverse categories. It is
more challenging than current other Chinese NER datasets and could better
reflect real-world applications. For comparison, we implement several
state-of-the-art baselines as sequence labeling tasks and report human
performance, as well as its analysis. To facilitate future work on fine-grained
NER for Chinese, we release our dataset, baselines, and leader-board.
| [
{
"created": "Mon, 13 Jan 2020 15:39:56 GMT",
"version": "v1"
},
{
"created": "Wed, 15 Jan 2020 19:06:49 GMT",
"version": "v2"
},
{
"created": "Fri, 17 Jan 2020 16:18:16 GMT",
"version": "v3"
},
{
"created": "Mon, 20 Jan 2020 16:32:50 GMT",
"version": "v4"
}
] | 2020-01-22 | [
[
"Xu",
"Liang",
""
],
[
"tong",
"Yu",
""
],
[
"Dong",
"Qianqian",
""
],
[
"Liao",
"Yixuan",
""
],
[
"Yu",
"Cong",
""
],
[
"Tian",
"Yin",
""
],
[
"Liu",
"Weitang",
""
],
[
"Li",
"Lu",
""
],
[
"Liu",
"Caiquan",
""
],
[
"Zhang",
"Xuanwei",
""
]
] | In this paper, we introduce the NER dataset from CLUE organization (CLUENER2020), a well-defined fine-grained dataset for named entity recognition in Chinese. CLUENER2020 contains 10 categories. Apart from common labels like person, organization, and location, it contains more diverse categories. It is more challenging than current other Chinese NER datasets and could better reflect real-world applications. For comparison, we implement several state-of-the-art baselines as sequence labeling tasks and report human performance, as well as its analysis. To facilitate future work on fine-grained NER for Chinese, we release our dataset, baselines, and leader-board. |
2103.16489 | Cagatay Basdogan | Idil Ozdamar, M.Reza Alipour, Benoit P. Delhaye, Philippe Lefèvre,
Cagatay Basdogan | Step-Change in Friction under Electrovibration | null | IEEE Transactions on Haptics, 2020, Vol. 13, No. 1, pp. 137-143 | 10.1109/TOH.2020.2966992 | null | cs.HC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Rendering tactile effects on a touch screen via electrovibration has many
potential applications. However, our knowledge on tactile perception of change
in friction and the underlying contact mechanics are both very limited. In this
study, we investigate the tactile perception and the contact mechanics for a
step change in friction under electrovibration during a relative sliding
between finger and the surface of a capacitive touchscreen. First, we conduct
magnitude estimation experiments to investigate the role of normal force and
sliding velocity on the perceived tactile intensity for a step increase and
decrease in friction, called rising friction (RF) and falling friction (FF).
To investigate the contact mechanics involved in RF and FF, we then measure the
frictional force, the apparent contact area, and the strains acting on the
fingerpad during sliding at a constant velocity under three different normal
loads using a custom-made experimental set-up. The results show that the
participants perceived RF stronger than FF, and both the normal force and
sliding velocity significantly influenced their perception. These results are
supported by our mechanical measurements; the relative change in friction, the
apparent contact area, and the strain in the sliding direction were all higher
for RF than those for FF, especially for low normal forces. Taken together, our
results suggest that different contact mechanics take place during RF and FF
due to the viscoelastic behavior of fingerpad skin, and those differences
influence our tactile perception of a step change in friction.
| [
{
"created": "Tue, 30 Mar 2021 16:45:27 GMT",
"version": "v1"
}
] | 2021-03-31 | [
[
"Ozdamar",
"Idil",
""
],
[
"Alipour",
"M. Reza",
""
],
[
"Delhaye",
"Benoit P.",
""
],
[
"Lefèvre",
"Philippe",
""
],
[
"Basdogan",
"Cagatay",
""
]
] | Rendering tactile effects on a touch screen via electrovibration has many potential applications. However, our knowledge on tactile perception of change in friction and the underlying contact mechanics are both very limited. In this study, we investigate the tactile perception and the contact mechanics for a step change in friction under electrovibration during a relative sliding between finger and the surface of a capacitive touchscreen. First, we conduct magnitude estimation experiments to investigate the role of normal force and sliding velocity on the perceived tactile intensity for a step increase and decrease in friction, called rising friction (RF) and falling friction (FF). To investigate the contact mechanics involved in RF and FF, we then measure the frictional force, the apparent contact area, and the strains acting on the fingerpad during sliding at a constant velocity under three different normal loads using a custom-made experimental set-up. The results show that the participants perceived RF stronger than FF, and both the normal force and sliding velocity significantly influenced their perception. These results are supported by our mechanical measurements; the relative change in friction, the apparent contact area, and the strain in the sliding direction were all higher for RF than those for FF, especially for low normal forces. Taken together, our results suggest that different contact mechanics take place during RF and FF due to the viscoelastic behavior of fingerpad skin, and those differences influence our tactile perception of a step change in friction. |
1901.05719 | Lingchen Huang | Lingchen Huang, Huazi Zhang, Rong Li, Yiqun Ge, Jun Wang | AI Coding: Learning to Construct Error Correction Codes | 14 pages; 15 figures; Accepted for publication in the IEEE
Transactions on Communications | null | null | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we investigate an artificial-intelligence (AI) driven approach
to design error correction codes (ECC). Classic error correction codes were
designed based on coding theory, which typically defines code properties (e.g.,
Hamming distance, subchannel reliability, etc.) to reflect code performance.
Code design thus amounts to optimizing these properties. However, an AI-driven
approach does not necessarily rely on coding theory. Specifically, we propose
a constructor-evaluator framework, in which the code constructor is realized by
AI algorithms and the code evaluator provides code performance metric
measurements. The code constructor keeps improving the code construction to
maximize code performance that is evaluated by the code evaluator. As examples,
we construct linear block codes and polar codes with reinforcement learning
(RL) and evolutionary algorithms. The results show that comparable code
performance can be achieved with respect to the existing codes. It is
noteworthy that our method can provide superior performance where existing
classic constructions fail to achieve optimum for a specific decoder (e.g.,
list decoding for polar codes).
| [
{
"created": "Thu, 17 Jan 2019 10:24:22 GMT",
"version": "v1"
},
{
"created": "Wed, 30 Oct 2019 02:39:05 GMT",
"version": "v2"
}
] | 2019-10-31 | [
[
"Huang",
"Lingchen",
""
],
[
"Zhang",
"Huazi",
""
],
[
"Li",
"Rong",
""
],
[
"Ge",
"Yiqun",
""
],
[
"Wang",
"Jun",
""
]
] | In this paper, we investigate an artificial-intelligence (AI) driven approach to design error correction codes (ECC). Classic error correction codes were designed based on coding theory, which typically defines code properties (e.g., Hamming distance, subchannel reliability, etc.) to reflect code performance. Code design thus amounts to optimizing these properties. However, an AI-driven approach does not necessarily rely on coding theory. Specifically, we propose a constructor-evaluator framework, in which the code constructor is realized by AI algorithms and the code evaluator provides code performance metric measurements. The code constructor keeps improving the code construction to maximize code performance that is evaluated by the code evaluator. As examples, we construct linear block codes and polar codes with reinforcement learning (RL) and evolutionary algorithms. The results show that comparable code performance can be achieved with respect to the existing codes. It is noteworthy that our method can provide superior performance where existing classic constructions fail to achieve optimum for a specific decoder (e.g., list decoding for polar codes). |
2404.09200 | Pengda Mao | Pengda Mao and Quan Quan | Tube-RRT*: Efficient Homotopic Path Planning for Swarm Robotics
Passing-Through Large-Scale Obstacle Environments | 8 pages, 8 figures, submitted to RA-L | null | null | null | cs.RO cs.SY eess.SY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, the concept of optimal virtual tube has emerged as a novel solution
to the challenging task of navigating obstacle-dense environments for swarm
robotics, offering a wide range of applications. However, it lacks an
efficient homotopic path planning method in obstacle-dense environments. This
paper introduces Tube-RRT*, an innovative homotopic path planning method that
builds upon and improves the Rapidly-exploring Random Tree (RRT) algorithm.
Tube-RRT* is specifically designed to generate homotopic paths for the
trajectories in the virtual tube, strategically considering opening volume and
tube length to mitigate swarm congestion and ensure agile navigation. Through
comprehensive comparative simulations conducted within complex, large-scale
obstacle environments, we demonstrate the effectiveness of Tube-RRT*.
| [
{
"created": "Sun, 14 Apr 2024 09:29:37 GMT",
"version": "v1"
}
] | 2024-04-16 | [
[
"Mao",
"Pengda",
""
],
[
"Quan",
"Quan",
""
]
] | Recently, the concept of optimal virtual tube has emerged as a novel solution to the challenging task of navigating obstacle-dense environments for swarm robotics, offering a wide range of applications. However, it lacks an efficient homotopic path planning method in obstacle-dense environments. This paper introduces Tube-RRT*, an innovative homotopic path planning method that builds upon and improves the Rapidly-exploring Random Tree (RRT) algorithm. Tube-RRT* is specifically designed to generate homotopic paths for the trajectories in the virtual tube, strategically considering opening volume and tube length to mitigate swarm congestion and ensure agile navigation. Through comprehensive comparative simulations conducted within complex, large-scale obstacle environments, we demonstrate the effectiveness of Tube-RRT*. |
1610.04028 | Arash Andalib | Arash Andalib, Mehdi Zare, Farid Atry | A fuzzy expert system for earthquake prediction, case study: the Zagros
range | 4 pages, 4 figures in proceedings of the third International
Conference on Modeling, Simulation and Applied Optimization, 2009 Corrected
typos, added publication information, Corrected typo, Added publication
information | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A methodology for the development of a fuzzy expert system (FES) with
application to earthquake prediction is presented. The idea is to reproduce the
performance of a human expert in earthquake prediction. To do this, at the
first step, rules provided by the human expert are used to generate a fuzzy
rule base. These rules are then fed into an inference engine to produce a fuzzy
inference system (FIS) and to infer the results. In this paper, we have used a
Sugeno type fuzzy inference system to build the FES. At the next step, the
adaptive network-based fuzzy inference system (ANFIS) is used to refine the FES
parameters and improve its performance. The proposed framework is then employed
to attain the performance of a human expert used to predict earthquakes in the
Zagros area based on the idea of coupled earthquakes. While the prediction
results are promising in parts of the testing set, the general performance
indicates that the prediction methodology based on coupled earthquakes needs
more investigation and a more complicated reasoning procedure to yield satisfactory
predictions.
| [
{
"created": "Thu, 13 Oct 2016 11:18:02 GMT",
"version": "v1"
},
{
"created": "Wed, 17 May 2017 21:23:01 GMT",
"version": "v2"
}
] | 2017-05-19 | [
[
"Andalib",
"Arash",
""
],
[
"Zare",
"Mehdi",
""
],
[
"Atry",
"Farid",
""
]
] | A methodology for the development of a fuzzy expert system (FES) with application to earthquake prediction is presented. The idea is to reproduce the performance of a human expert in earthquake prediction. To do this, at the first step, rules provided by the human expert are used to generate a fuzzy rule base. These rules are then fed into an inference engine to produce a fuzzy inference system (FIS) and to infer the results. In this paper, we have used a Sugeno type fuzzy inference system to build the FES. At the next step, the adaptive network-based fuzzy inference system (ANFIS) is used to refine the FES parameters and improve its performance. The proposed framework is then employed to attain the performance of a human expert used to predict earthquakes in the Zagros area based on the idea of coupled earthquakes. While the prediction results are promising in parts of the testing set, the general performance indicates that the prediction methodology based on coupled earthquakes needs more investigation and a more complicated reasoning procedure to yield satisfactory predictions. |
2010.07347 | Changjiang Cai | Changjiang Cai, Matteo Poggi, Stefano Mattoccia, Philippos Mordohai | Matching-space Stereo Networks for Cross-domain Generalization | 14 pages, 8 figures, International Conference on 3D Vision
(3DV'2020), Github code at https://github.com/ccj5351/MS-Nets | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | End-to-end deep networks represent the state of the art for stereo matching.
While excelling on images framing environments similar to the training set,
major drops in accuracy occur in unseen domains (e.g., when moving from
synthetic to real scenes). In this paper we introduce a novel family of
architectures, namely Matching-Space Networks (MS-Nets), with improved
generalization properties. By replacing learning-based feature extraction from
image RGB values with matching functions and confidence measures from
conventional wisdom, we move the learning process from the color space to the
Matching Space, avoiding over-specialization to domain specific features.
Extensive experimental results on four real datasets highlight that our
proposal leads to superior generalization to unseen environments over
conventional deep architectures, keeping accuracy on the source domain almost
unaltered. Our code is available at https://github.com/ccj5351/MS-Nets.
| [
{
"created": "Wed, 14 Oct 2020 18:29:20 GMT",
"version": "v1"
}
] | 2020-10-16 | [
[
"Cai",
"Changjiang",
""
],
[
"Poggi",
"Matteo",
""
],
[
"Mattoccia",
"Stefano",
""
],
[
"Mordohai",
"Philippos",
""
]
] | End-to-end deep networks represent the state of the art for stereo matching. While excelling on images framing environments similar to the training set, major drops in accuracy occur in unseen domains (e.g., when moving from synthetic to real scenes). In this paper we introduce a novel family of architectures, namely Matching-Space Networks (MS-Nets), with improved generalization properties. By replacing learning-based feature extraction from image RGB values with matching functions and confidence measures from conventional wisdom, we move the learning process from the color space to the Matching Space, avoiding over-specialization to domain specific features. Extensive experimental results on four real datasets highlight that our proposal leads to superior generalization to unseen environments over conventional deep architectures, keeping accuracy on the source domain almost unaltered. Our code is available at https://github.com/ccj5351/MS-Nets. |
1306.4151 | Elchanan Mossel | Elchanan Mossel and Anupam Prakash and Gregory Valiant | Computation in anonymous networks | null | null | null | null | cs.CC cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We identify and investigate a computational model arising in molecular
computing, social computing and sensor networks. The model is made up of
multiple agents who are computationally limited and possess no global
information. The agents may represent nodes in a social network, sensors, or
molecules in a molecular computer. Assuming that each agent is in one of $k$
states, we say that {\em the system computes} $f:[k]^{n} \to [k]$ if all agents
eventually converge to the correct value of $f$. We present a number of general
results characterizing the computational power of the model. We further present
protocols for computing the plurality function with $O(\log k)$ memory and for
approximately counting the number of nodes of a given color with $O(\log \log
n)$ memory, where $n$ is the number of agents in the network. These results
are tight.
| [
{
"created": "Tue, 18 Jun 2013 11:37:39 GMT",
"version": "v1"
}
] | 2013-06-19 | [
[
"Mossel",
"Elchanan",
""
],
[
"Prakash",
"Anupam",
""
],
[
"Valiant",
"Gregory",
""
]
] | We identify and investigate a computational model arising in molecular computing, social computing and sensor networks. The model is made up of multiple agents who are computationally limited and possess no global information. The agents may represent nodes in a social network, sensors, or molecules in a molecular computer. Assuming that each agent is in one of $k$ states, we say that {\em the system computes} $f:[k]^{n} \to [k]$ if all agents eventually converge to the correct value of $f$. We present a number of general results characterizing the computational power of the model. We further present protocols for computing the plurality function with $O(\log k)$ memory and for approximately counting the number of nodes of a given color with $O(\log \log n)$ memory, where $n$ is the number of agents in the network. These results are tight. |
2010.11105 | Jakub Ber\'anek | Stanislav B\"ohm, Jakub Ber\'anek | Runtime vs Scheduler: Analyzing Dask's Overheads | null | null | 10.1109/WORKS51914.2020.00006 | null | cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Dask is a distributed task framework which is commonly used by data
scientists to parallelize Python code on computing clusters with little
programming effort. It uses a sophisticated work-stealing scheduler which has
been hand-tuned to execute task graphs as efficiently as possible. But is
scheduler optimization a worthwhile effort for Dask? Our paper shows on many
real world task graphs that even a completely random scheduler is surprisingly
competitive with its built-in scheduler and that the main bottleneck of Dask
lies in its runtime overhead. We develop a drop-in replacement for the Dask
central server written in Rust which is backwards compatible with existing Dask
programs. Thanks to its efficient runtime, our server implementation is able to
scale up to larger clusters than Dask and consistently outperforms it on a
variety of task graphs, despite the fact that it uses a simpler scheduling
algorithm.
| [
{
"created": "Wed, 21 Oct 2020 16:13:37 GMT",
"version": "v1"
}
] | 2021-01-21 | [
[
"Böhm",
"Stanislav",
""
],
[
"Beránek",
"Jakub",
""
]
] | Dask is a distributed task framework which is commonly used by data scientists to parallelize Python code on computing clusters with little programming effort. It uses a sophisticated work-stealing scheduler which has been hand-tuned to execute task graphs as efficiently as possible. But is scheduler optimization a worthwhile effort for Dask? Our paper shows on many real world task graphs that even a completely random scheduler is surprisingly competitive with its built-in scheduler and that the main bottleneck of Dask lies in its runtime overhead. We develop a drop-in replacement for the Dask central server written in Rust which is backwards compatible with existing Dask programs. Thanks to its efficient runtime, our server implementation is able to scale up to larger clusters than Dask and consistently outperforms it on a variety of task graphs, despite the fact that it uses a simpler scheduling algorithm. |
2008.09194 | Baiwu Zhang | Baiwu Zhang, Jin Peng Zhou, Ilia Shumailov, Nicolas Papernot | On Attribution of Deepfakes | null | null | null | null | cs.LG cs.CR cs.CV cs.CY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Progress in generative modelling, especially generative adversarial networks,
has made it possible to efficiently synthesize and alter media at scale.
Malicious individuals now rely on these machine-generated media, or deepfakes,
to manipulate social discourse. In order to ensure media authenticity, existing
research is focused on deepfake detection. Yet, the adversarial nature of
frameworks used for generative modeling suggests that progress towards
detecting deepfakes will enable more realistic deepfake generation. Therefore,
it comes as no surprise that developers of generative models are under the
scrutiny of stakeholders dealing with misinformation campaigns. At the same
time, generative models have a lot of positive applications. As such, there is
a clear need to develop tools that ensure the transparent use of generative
modeling, while minimizing the harm caused by malicious applications.
Our technique optimizes over the source of entropy of each generative model
to probabilistically attribute a deepfake to one of the models. We evaluate our
method on the seminal example of face synthesis, demonstrating that our
approach achieves 97.62% attribution accuracy, and is less sensitive to
perturbations and adversarial examples. We discuss the ethical implications of
our work, identify where our technique can be used, and highlight that a more
meaningful legislative framework is required for a more transparent and ethical
use of generative modeling. Finally, we argue that model developers should be
capable of claiming plausible deniability and propose a second framework to do
so -- this allows a model developer to produce evidence that they did not
produce media that they are being accused of having produced.
| [
{
"created": "Thu, 20 Aug 2020 20:25:18 GMT",
"version": "v1"
},
{
"created": "Wed, 3 Mar 2021 21:41:33 GMT",
"version": "v2"
}
] | 2021-03-05 | [
[
"Zhang",
"Baiwu",
""
],
[
"Zhou",
"Jin Peng",
""
],
[
"Shumailov",
"Ilia",
""
],
[
"Papernot",
"Nicolas",
""
]
] | Progress in generative modelling, especially generative adversarial networks, has made it possible to efficiently synthesize and alter media at scale. Malicious individuals now rely on these machine-generated media, or deepfakes, to manipulate social discourse. In order to ensure media authenticity, existing research is focused on deepfake detection. Yet, the adversarial nature of frameworks used for generative modeling suggests that progress towards detecting deepfakes will enable more realistic deepfake generation. Therefore, it comes as no surprise that developers of generative models are under the scrutiny of stakeholders dealing with misinformation campaigns. At the same time, generative models have a lot of positive applications. As such, there is a clear need to develop tools that ensure the transparent use of generative modeling, while minimizing the harm caused by malicious applications. Our technique optimizes over the source of entropy of each generative model to probabilistically attribute a deepfake to one of the models. We evaluate our method on the seminal example of face synthesis, demonstrating that our approach achieves 97.62% attribution accuracy, and is less sensitive to perturbations and adversarial examples. We discuss the ethical implications of our work, identify where our technique can be used, and highlight that a more meaningful legislative framework is required for a more transparent and ethical use of generative modeling. Finally, we argue that model developers should be capable of claiming plausible deniability and propose a second framework to do so -- this allows a model developer to produce evidence that they did not produce media that they are being accused of having produced. |
2203.09308 | Chuxu Zhang | Chuxu Zhang, Kaize Ding, Jundong Li, Xiangliang Zhang, Yanfang Ye,
Nitesh V. Chawla, Huan Liu | Few-Shot Learning on Graphs | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Graph representation learning has attracted tremendous attention due to its
remarkable performance in many real-world applications. However, prevailing
supervised graph representation learning models for specific tasks often suffer
from the label sparsity issue, as data labeling is always time- and
resource-consuming. In light of this, few-shot learning on graphs (FSLG), which combines
the strengths of graph representation learning and few-shot learning together,
has been proposed to tackle the performance degradation in the face of limited
annotated data. There have been many studies working on FSLG
recently. In this paper, we comprehensively survey these works in the form of a
series of methods and applications. Specifically, we first introduce FSLG
challenges and bases, then categorize and summarize existing work of FSLG in
terms of three major graph mining tasks at different granularity levels, i.e.,
node, edge, and graph. Finally, we share our thoughts on some future research
directions of FSLG. The authors of this survey have contributed significantly
to the AI literature on FSLG over the last few years.
| [
{
"created": "Thu, 17 Mar 2022 13:21:11 GMT",
"version": "v1"
},
{
"created": "Tue, 7 Jun 2022 13:14:52 GMT",
"version": "v2"
}
] | 2022-06-08 | [
[
"Zhang",
"Chuxu",
""
],
[
"Ding",
"Kaize",
""
],
[
"Li",
"Jundong",
""
],
[
"Zhang",
"Xiangliang",
""
],
[
"Ye",
"Yanfang",
""
],
[
"Chawla",
"Nitesh V.",
""
],
[
"Liu",
"Huan",
""
]
] | Graph representation learning has attracted tremendous attention due to its remarkable performance in many real-world applications. However, prevailing supervised graph representation learning models for specific tasks often suffer from the label sparsity issue, as data labeling is always time- and resource-consuming. In light of this, few-shot learning on graphs (FSLG), which combines the strengths of graph representation learning and few-shot learning together, has been proposed to tackle the performance degradation in the face of limited annotated data. There have been many studies working on FSLG recently. In this paper, we comprehensively survey these works in the form of a series of methods and applications. Specifically, we first introduce FSLG challenges and bases, then categorize and summarize existing work of FSLG in terms of three major graph mining tasks at different granularity levels, i.e., node, edge, and graph. Finally, we share our thoughts on some future research directions of FSLG. The authors of this survey have contributed significantly to the AI literature on FSLG over the last few years. |
2010.06401 | Pawan Aurora | Pawan Aurora, Hans Raj Tiwary | On the Complexity of Some Facet-Defining Inequalities of the
QAP-polytope | 20 pages. To be published in COCOA 2020 proceedings | null | null | null | cs.CC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Quadratic Assignment Problem (QAP) is a well-known NP-hard problem that
is equivalent to optimizing a linear objective function over the QAP polytope.
The QAP polytope with parameter $n$ - \qappolytope{n} - is defined as the
convex hull of rank-$1$ matrices $xx^T$ with $x$ as the vectorized $n\times n$
permutation matrices.
In this paper we consider all the known exponential-sized families of
facet-defining inequalities of the QAP-polytope. We describe a new family of
valid inequalities that we show to be facet-defining. We also show that
membership testing (and hence optimizing) over some of the known classes of
inequalities is coNP-complete. We complement our hardness results by showing a
lower bound of $2^{\Omega(n)}$ on the extension complexity of all relaxations
of \qappolytope{n} for which any of the known classes of inequalities are
valid.
| [
{
"created": "Tue, 13 Oct 2020 13:56:24 GMT",
"version": "v1"
}
] | 2020-10-14 | [
[
"Aurora",
"Pawan",
""
],
[
"Tiwary",
"Hans Raj",
""
]
] | The Quadratic Assignment Problem (QAP) is a well-known NP-hard problem that is equivalent to optimizing a linear objective function over the QAP polytope. The QAP polytope with parameter $n$ - \qappolytope{n} - is defined as the convex hull of rank-$1$ matrices $xx^T$ with $x$ as the vectorized $n\times n$ permutation matrices. In this paper we consider all the known exponential-sized families of facet-defining inequalities of the QAP-polytope. We describe a new family of valid inequalities that we show to be facet-defining. We also show that membership testing (and hence optimizing) over some of the known classes of inequalities is coNP-complete. We complement our hardness results by showing a lower bound of $2^{\Omega(n)}$ on the extension complexity of all relaxations of \qappolytope{n} for which any of the known classes of inequalities are valid. |
2105.04021 | Bhaskar Mitra | Nick Craswell, Bhaskar Mitra, Emine Yilmaz, Daniel Campos and Jimmy
Lin | MS MARCO: Benchmarking Ranking Models in the Large-Data Regime | null | null | null | null | cs.IR cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Evaluation efforts such as TREC, CLEF, NTCIR and FIRE, alongside public
leaderboards such as MS MARCO, are intended to encourage research and track our
progress, addressing big questions in our field. However, the goal is not
simply to identify which run is "best", achieving the top score. The goal is to
move the field forward by developing new robust techniques that work in many
different settings, and are adopted in research and practice. This paper uses
the MS MARCO and TREC Deep Learning Track as our case study, comparing it to
the case of TREC ad hoc ranking in the 1990s. We show how the design of the
evaluation effort can encourage or discourage certain outcomes, raising
questions about internal and external validity of results. We provide some
analysis of certain pitfalls, and a statement of best practices for avoiding
such pitfalls. We summarize the progress of the effort so far, and describe our
desired end state of "robust usefulness", along with steps that might be
required to get us there.
| [
{
"created": "Sun, 9 May 2021 20:57:36 GMT",
"version": "v1"
}
] | 2021-05-11 | [
[
"Craswell",
"Nick",
""
],
[
"Mitra",
"Bhaskar",
""
],
[
"Yilmaz",
"Emine",
""
],
[
"Campos",
"Daniel",
""
],
[
"Lin",
"Jimmy",
""
]
] | Evaluation efforts such as TREC, CLEF, NTCIR and FIRE, alongside public leaderboards such as MS MARCO, are intended to encourage research and track our progress, addressing big questions in our field. However, the goal is not simply to identify which run is "best", achieving the top score. The goal is to move the field forward by developing new robust techniques that work in many different settings, and are adopted in research and practice. This paper uses the MS MARCO and TREC Deep Learning Track as our case study, comparing it to the case of TREC ad hoc ranking in the 1990s. We show how the design of the evaluation effort can encourage or discourage certain outcomes, raising questions about internal and external validity of results. We provide some analysis of certain pitfalls, and a statement of best practices for avoiding such pitfalls. We summarize the progress of the effort so far, and describe our desired end state of "robust usefulness", along with steps that might be required to get us there. |
1203.1095 | Guido Tack | Tom Schrijvers, Guido Tack, Pieter Wuille, Horst Samulowitz, Peter J.
Stuckey | Search Combinators | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The ability to model search in a constraint solver can be an essential asset
for solving combinatorial problems. However, existing infrastructure for
defining search heuristics is often inadequate. Either modeling capabilities
are extremely limited or users are faced with a general-purpose programming
language whose features are not tailored towards writing search heuristics. As
a result, major improvements in performance may remain unexplored.
This article introduces search combinators, a lightweight and
solver-independent method that bridges the gap between a conceptually simple
modeling language for search (high-level, functional and naturally
compositional) and an efficient implementation (low-level, imperative and
highly non-modular). By allowing the user to define application-tailored search
strategies from a small set of primitives, search combinators effectively
provide a rich domain-specific language (DSL) for modeling search to the user.
Remarkably, this DSL comes at a low implementation cost to the developer of a
constraint solver.
The article discusses two modular implementation approaches and shows, by
empirical evaluation, that search combinators can be implemented without
overhead compared to a native, direct implementation in a constraint solver.
| [
{
"created": "Tue, 6 Mar 2012 03:59:34 GMT",
"version": "v1"
}
] | 2012-03-07 | [
[
"Schrijvers",
"Tom",
""
],
[
"Tack",
"Guido",
""
],
[
"Wuille",
"Pieter",
""
],
[
"Samulowitz",
"Horst",
""
],
[
"Stuckey",
"Peter J.",
""
]
] | The ability to model search in a constraint solver can be an essential asset for solving combinatorial problems. However, existing infrastructure for defining search heuristics is often inadequate. Either modeling capabilities are extremely limited or users are faced with a general-purpose programming language whose features are not tailored towards writing search heuristics. As a result, major improvements in performance may remain unexplored. This article introduces search combinators, a lightweight and solver-independent method that bridges the gap between a conceptually simple modeling language for search (high-level, functional and naturally compositional) and an efficient implementation (low-level, imperative and highly non-modular). By allowing the user to define application-tailored search strategies from a small set of primitives, search combinators effectively provide a rich domain-specific language (DSL) for modeling search to the user. Remarkably, this DSL comes at a low implementation cost to the developer of a constraint solver. The article discusses two modular implementation approaches and shows, by empirical evaluation, that search combinators can be implemented without overhead compared to a native, direct implementation in a constraint solver. |
2403.11529 | Hantao Zhou | Hantao Zhou, Runze Hu, Xiu Li | Video Object Segmentation with Dynamic Query Modulation | Accepted by ICME2024 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Storing intermediate frame segmentations as memory for long-range context
modeling, spatial-temporal memory-based methods have recently showcased
impressive results in semi-supervised video object segmentation (SVOS).
However, these methods face two key limitations: 1) relying on non-local
pixel-level matching to read memory, resulting in noisy retrieved features for
segmentation; 2) segmenting each object independently without interaction.
These shortcomings make the memory-based methods struggle in similar object and
multi-object segmentation. To address these issues, we propose a query
modulation method, termed QMVOS. This method summarizes object features into
dynamic queries and then treats them as dynamic filters for mask prediction,
thereby providing high-level descriptions and object-level perception for the
model. Efficient and effective multi-object interactions are realized through
inter-query attention. Extensive experiments demonstrate that our method can
bring significant improvements to the memory-based SVOS method and achieve
competitive performance on standard SVOS benchmarks. The code is available at
https://github.com/zht8506/QMVOS.
| [
{
"created": "Mon, 18 Mar 2024 07:31:39 GMT",
"version": "v1"
}
] | 2024-03-19 | [
[
"Zhou",
"Hantao",
""
],
[
"Hu",
"Runze",
""
],
[
"Li",
"Xiu",
""
]
] | Storing intermediate frame segmentations as memory for long-range context modeling, spatial-temporal memory-based methods have recently showcased impressive results in semi-supervised video object segmentation (SVOS). However, these methods face two key limitations: 1) relying on non-local pixel-level matching to read memory, resulting in noisy retrieved features for segmentation; 2) segmenting each object independently without interaction. These shortcomings make the memory-based methods struggle in similar object and multi-object segmentation. To address these issues, we propose a query modulation method, termed QMVOS. This method summarizes object features into dynamic queries and then treats them as dynamic filters for mask prediction, thereby providing high-level descriptions and object-level perception for the model. Efficient and effective multi-object interactions are realized through inter-query attention. Extensive experiments demonstrate that our method can bring significant improvements to the memory-based SVOS method and achieve competitive performance on standard SVOS benchmarks. The code is available at https://github.com/zht8506/QMVOS. |
2401.08974 | Lipeng Zhu | Lipeng Zhu, Wenyan Ma, Zhenyu Xiao, and Rui Zhang | Performance Analysis and Optimization for Movable Antenna Aided Wideband
Communications | null | null | null | null | cs.IT eess.SP math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Movable antenna (MA) has emerged as a promising technology to enhance
wireless communication performance by enabling the local movement of antennas
at the transmitter (Tx) and/or receiver (Rx) for achieving more favorable
channel conditions. As the existing studies on MA-aided wireless communications
have mainly considered narrow-band transmission in flat fading channels, we
investigate in this paper the MA-aided wideband communications employing
orthogonal frequency division multiplexing (OFDM) in frequency-selective fading
channels. Under the general multi-tap field-response channel model, the
wireless channel variations in both space and frequency are characterized with
different positions of the MAs. Unlike the narrow-band transmission where the
optimal MA position at the Tx/Rx simply maximizes the single-tap channel
amplitude, the MA position in the wideband case needs to balance the amplitudes
and phases over multiple channel taps in order to maximize the OFDM
transmission rate over multiple frequency subcarriers. First, we derive an
upper bound on the OFDM achievable rate in closed form when the size of the
Tx/Rx region for antenna movement is arbitrarily large. Next, we develop a
parallel greedy ascent (PGA) algorithm to obtain locally optimal solutions to
the MAs' positions for OFDM rate maximization subject to finite-size Tx/Rx
regions. To reduce computational complexity, a simplified PGA algorithm is also
provided to optimize the MAs' positions more efficiently. Simulation results
demonstrate that the proposed PGA algorithms can approach the OFDM rate upper
bound closely with the increase of Tx/Rx region sizes and outperform
conventional systems with fixed-position antennas (FPAs) under the wideband
channel setup.
| [
{
"created": "Wed, 17 Jan 2024 04:54:47 GMT",
"version": "v1"
}
] | 2024-01-18 | [
[
"Zhu",
"Lipeng",
""
],
[
"Ma",
"Wenyan",
""
],
[
"Xiao",
"Zhenyu",
""
],
[
"Zhang",
"Rui",
""
]
] | Movable antenna (MA) has emerged as a promising technology to enhance wireless communication performance by enabling the local movement of antennas at the transmitter (Tx) and/or receiver (Rx) for achieving more favorable channel conditions. As the existing studies on MA-aided wireless communications have mainly considered narrow-band transmission in flat fading channels, we investigate in this paper the MA-aided wideband communications employing orthogonal frequency division multiplexing (OFDM) in frequency-selective fading channels. Under the general multi-tap field-response channel model, the wireless channel variations in both space and frequency are characterized with different positions of the MAs. Unlike the narrow-band transmission where the optimal MA position at the Tx/Rx simply maximizes the single-tap channel amplitude, the MA position in the wideband case needs to balance the amplitudes and phases over multiple channel taps in order to maximize the OFDM transmission rate over multiple frequency subcarriers. First, we derive an upper bound on the OFDM achievable rate in closed form when the size of the Tx/Rx region for antenna movement is arbitrarily large. Next, we develop a parallel greedy ascent (PGA) algorithm to obtain locally optimal solutions to the MAs' positions for OFDM rate maximization subject to finite-size Tx/Rx regions. To reduce computational complexity, a simplified PGA algorithm is also provided to optimize the MAs' positions more efficiently. Simulation results demonstrate that the proposed PGA algorithms can approach the OFDM rate upper bound closely with the increase of Tx/Rx region sizes and outperform conventional systems with fixed-position antennas (FPAs) under the wideband channel setup. |
2104.12674 | Lawrence Paulson | Lawrence C. Paulson | The Relative Consistency of the Axiom of Choice Mechanized Using
Isabelle/ZF | null | LMS Journal of Computation and Mathematics, Volume 6, 2003, pp.
198-248 | 10.1112/S1461157000000449 | null | cs.LO | http://creativecommons.org/licenses/by/4.0/ | The proof of the relative consistency of the axiom of choice has been
mechanized using Isabelle/ZF. The proof builds upon a previous mechanization of
the reflection theorem. The heavy reliance on metatheory in the original proof
makes the formalization unusually long, and not entirely satisfactory: two
parts of the proof do not fit together. It seems impossible to solve these
problems without formalizing the metatheory. However, the present development
follows a standard textbook, Kunen's Set Theory, and could support the
formalization of further material from that book. It also serves as an example
of what to expect when deep mathematics is formalized.
| [
{
"created": "Mon, 26 Apr 2021 16:00:22 GMT",
"version": "v1"
}
] | 2021-04-27 | [
[
"Paulson",
"Lawrence C.",
""
]
] | The proof of the relative consistency of the axiom of choice has been mechanized using Isabelle/ZF. The proof builds upon a previous mechanization of the reflection theorem. The heavy reliance on metatheory in the original proof makes the formalization unusually long, and not entirely satisfactory: two parts of the proof do not fit together. It seems impossible to solve these problems without formalizing the metatheory. However, the present development follows a standard textbook, Kunen's Set Theory, and could support the formalization of further material from that book. It also serves as an example of what to expect when deep mathematics is formalized. |
2001.03665 | Salar Nouri | Ali Parchekani, Salar Nouri, Vahid Shah-Mansouri, and Seyed Pooya
Shariatpanahi | Classification of Traffic Using Neural Networks by Rejecting: a Novel
Approach in Classifying VPN Traffic | null | null | null | null | cs.NI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we introduce a novel end-to-end traffic classification method
to distinguish between traffic classes including VPN traffic in three layers of
the Open Systems Interconnection (OSI) model. Classification of VPN traffic is
not trivial using traditional classification approaches due to its encrypted
nature. We utilize two well-known neural networks, namely multi-layer
perceptron and recurrent neural network to create our cascade neural network
focused on two metrics: class scores and distance from the center of the
classes. Such an approach combines extraction, selection, and classification
functionality into a single end-to-end system to systematically learn the
non-linear relationship between input and predicted performance. Therefore, we
can distinguish VPN traffic from non-VPN traffic by rejecting the unrelated
features of the VPN class. Moreover, we obtain the application type of non-VPN
traffic at the same time. The approach is evaluated using the general traffic
dataset ISCX VPN-nonVPN and an acquired dataset. The results demonstrate the
efficacy of the framework for encrypted traffic classification, achieving an
accuracy of $95$ percent, higher than that of state-of-the-art models, along
with strong generalization capabilities.
| [
{
"created": "Fri, 10 Jan 2020 21:01:22 GMT",
"version": "v1"
},
{
"created": "Fri, 10 Dec 2021 06:00:33 GMT",
"version": "v2"
}
] | 2021-12-13 | [
[
"Parchekani",
"Ali",
""
],
[
"Nouri",
"Salar",
""
],
[
"Shah-Mansouri",
"Vahid",
""
],
[
"Shariatpanahi",
"Seyed Pooya",
""
]
] | In this paper, we introduce a novel end-to-end traffic classification method to distinguish between traffic classes including VPN traffic in three layers of the Open Systems Interconnection (OSI) model. Classification of VPN traffic is not trivial using traditional classification approaches due to its encrypted nature. We utilize two well-known neural networks, namely multi-layer perceptron and recurrent neural network, to create our cascade neural network focused on two metrics: class scores and distance from the center of the classes. Such an approach combines extraction, selection, and classification functionality into a single end-to-end system to systematically learn the non-linear relationship between input and predicted performance. Therefore, we can distinguish VPN traffic from non-VPN traffic by rejecting the unrelated features of the VPN class. Moreover, we obtain the application type of non-VPN traffic at the same time. The approach is evaluated using the general traffic dataset ISCX VPN-nonVPN and an acquired dataset. The results demonstrate the efficacy of the framework for encrypted traffic classification, achieving an accuracy of $95$ percent, higher than that of state-of-the-art models, along with strong generalization capabilities. |
1710.09713 | Xing Wang | Xing Wang | The relationship between the number of editorial board members and the
scientific output of universities in the chemistry field | This paper is revised and extended based on the paper entitled "Which
Drives Which? The Causal Relationship between the Number of Editorial Board
Members and the Scientific Output of Universities in the Chemistry Field"in
Proceedings of the 16th International Conference of the International Society
for Scientometrics and Informetrics (ISSI 2017), Wuhan | null | null | null | cs.DL stat.OT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Editorial board members, who are considered the gatekeepers of scientific
journals, play an important role in academia, and may directly or indirectly
affect the scientific output of a university. In this article, we used the
quantile regression method among a sample of 1,387 universities in chemistry to
characterize the correlation between the number of editorial board members and
the scientific output of their universities. Furthermore, we used time-series
data and the Granger causality test to explore the causal relationship between
the number of editorial board members and the number of articles of some top
universities. Our results suggest that the number of editorial board members is
positively and significantly related to the scientific output (as measured by
the number of articles, total number of citations, citations per paper, and h
index) of their universities. However, the Granger causality test results
suggest that the causal relationship between the number of editorial board
members and the number of articles of some top universities is not obvious.
Combining these findings with the results of qualitative interviews with
editorial board members, we discuss the causal relationship between the number
of editorial board members and the scientific output of their universities.
| [
{
"created": "Wed, 25 Oct 2017 08:53:19 GMT",
"version": "v1"
}
] | 2017-10-27 | [
[
"Wang",
"Xing",
""
]
] | Editorial board members, who are considered the gatekeepers of scientific journals, play an important role in academia, and may directly or indirectly affect the scientific output of a university. In this article, we used the quantile regression method among a sample of 1,387 universities in chemistry to characterize the correlation between the number of editorial board members and the scientific output of their universities. Furthermore, we used time-series data and the Granger causality test to explore the causal relationship between the number of editorial board members and the number of articles of some top universities. Our results suggest that the number of editorial board members is positively and significantly related to the scientific output (as measured by the number of articles, total number of citations, citations per paper, and h index) of their universities. However, the Granger causality test results suggest that the causal relationship between the number of editorial board members and the number of articles of some top universities is not obvious. Combining these findings with the results of qualitative interviews with editorial board members, we discuss the causal relationship between the number of editorial board members and the scientific output of their universities. |
2109.08302 | Xiaoqiang Wang | Jiaojiao Wang, Dabin Zheng, Shenghua Li | Rack-Aware MSR Codes with Multiple Erasure Tolerance | Distributed storage; Multiple erasure tolerance; MSRR codes;
Universally error-resilient repair | null | null | null | cs.IT math.IT | http://creativecommons.org/licenses/by/4.0/ | The minimum storage rack-aware regenerating (MSRR) code is a variation of
regenerating codes that achieves the optimal repair bandwidth for a single node
failure in the rack-aware model. The authors in~\cite{Chen-Barg2019}
and~\cite{Zhou-Zhang2021} provided explicit constructions of MSRR codes for all
parameters to repair a single failed node. This paper generalizes the results
in~\cite{Chen-Barg2019} to the case of multiple node failures. We propose a
class of MDS array codes and scalar Reed-Solomon (RS) codes, and show that
these codes have optimal repair bandwidth and error resilient capability for
multiple node failures in the rack-aware storage model. Besides, our codes keep
the same access level as the low-access constructions in~\cite{Chen-Barg2019}
and~\cite{Zhou-Zhang2021}.
| [
{
"created": "Fri, 17 Sep 2021 01:50:25 GMT",
"version": "v1"
}
] | 2021-09-20 | [
[
"Wang",
"Jiaojiao",
""
],
[
"Zheng",
"Dabin",
""
],
[
"Li",
"Shenghua",
""
]
] | The minimum storage rack-aware regenerating (MSRR) code is a variation of regenerating codes that achieves the optimal repair bandwidth for a single node failure in the rack-aware model. The authors in~\cite{Chen-Barg2019} and~\cite{Zhou-Zhang2021} provided explicit constructions of MSRR codes for all parameters to repair a single failed node. This paper generalizes the results in~\cite{Chen-Barg2019} to the case of multiple node failures. We propose a class of MDS array codes and scalar Reed-Solomon (RS) codes, and show that these codes have optimal repair bandwidth and error resilient capability for multiple node failures in the rack-aware storage model. Besides, our codes keep the same access level as the low-access constructions in~\cite{Chen-Barg2019} and~\cite{Zhou-Zhang2021}. |
2202.10773 | Daniel Franco-Barranco | Daniel Franco-Barranco and Julio Pastor-Tronch and Aitor
Gonzalez-Marfil and Arrate Mu\~noz-Barrutia and Ignacio Arganda-Carreras | Deep learning based domain adaptation for mitochondria segmentation on
EM volumes | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Accurate segmentation of electron microscopy (EM) volumes of the brain is
essential to characterize neuronal structures at a cell or organelle level.
While supervised deep learning methods have led to major breakthroughs in that
direction during the past years, they usually require large amounts of
annotated data to be trained, and perform poorly on other data acquired under
similar experimental and imaging conditions. This is a problem known as domain
adaptation, since models that learned from a sample distribution (or source
domain) struggle to maintain their performance on samples extracted from a
different distribution or target domain. In this work, we address the complex
case of deep learning based domain adaptation for mitochondria segmentation
across EM datasets from different tissues and species. We present three
unsupervised domain adaptation strategies to improve mitochondria segmentation
in the target domain based on (1) state-of-the-art style transfer between
images of both domains; (2) self-supervised learning to pre-train a model using
unlabeled source and target images, and then fine-tune it only with the source
labels; and (3) multi-task neural network architectures trained end-to-end with
both labeled and unlabeled images. Additionally, we propose a new training
stopping criterion based on morphological priors obtained exclusively in the
source domain. We carried out all possible cross-dataset experiments using
three publicly available EM datasets. We evaluated our proposed strategies on
the mitochondria semantic labels predicted on the target datasets. The methods
introduced here outperform the baseline methods and compare favorably to the
state of the art. In the absence of validation labels, monitoring our proposed
morphology-based metric is an intuitive and effective way to stop the training
process and select, on average, optimal models.
| [
{
"created": "Tue, 22 Feb 2022 09:49:25 GMT",
"version": "v1"
},
{
"created": "Tue, 5 Jul 2022 14:43:44 GMT",
"version": "v2"
}
] | 2022-07-06 | [
[
"Franco-Barranco",
"Daniel",
""
],
[
"Pastor-Tronch",
"Julio",
""
],
[
"Gonzalez-Marfil",
"Aitor",
""
],
[
"Muñoz-Barrutia",
"Arrate",
""
],
[
"Arganda-Carreras",
"Ignacio",
""
]
] | Accurate segmentation of electron microscopy (EM) volumes of the brain is essential to characterize neuronal structures at a cell or organelle level. While supervised deep learning methods have led to major breakthroughs in that direction during the past years, they usually require large amounts of annotated data to be trained, and perform poorly on other data acquired under similar experimental and imaging conditions. This is a problem known as domain adaptation, since models that learned from a sample distribution (or source domain) struggle to maintain their performance on samples extracted from a different distribution or target domain. In this work, we address the complex case of deep learning based domain adaptation for mitochondria segmentation across EM datasets from different tissues and species. We present three unsupervised domain adaptation strategies to improve mitochondria segmentation in the target domain based on (1) state-of-the-art style transfer between images of both domains; (2) self-supervised learning to pre-train a model using unlabeled source and target images, and then fine-tune it only with the source labels; and (3) multi-task neural network architectures trained end-to-end with both labeled and unlabeled images. Additionally, we propose a new training stopping criterion based on morphological priors obtained exclusively in the source domain. We carried out all possible cross-dataset experiments using three publicly available EM datasets. We evaluated our proposed strategies on the mitochondria semantic labels predicted on the target datasets. The methods introduced here outperform the baseline methods and compare favorably to the state of the art. In the absence of validation labels, monitoring our proposed morphology-based metric is an intuitive and effective way to stop the training process and select, on average, optimal models. |
2407.17889 | Qing Zhao | Qing Zhao, Chengkui Zhang, Hao Li, Ting Ke | An Error Discovery and Correction for the Family of V-Shaped BPSO
Algorithms | 25 pages, 11 figures | null | null | null | cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The binary particle swarm optimization (BPSO) algorithm is a swarm
intelligence optimization algorithm characterized by good optimization
performance, high efficiency, and ease of implementation. In recent years, it
has been used to optimize a variety of machine learning and deep learning
models, such as CNNs, LSTMs, and SVMs. However, it easily falls into local
optima owing to its lack of exploitation ability. In this article, unlike
previous studies, we find that the cause of this poor performance is an error
in the velocity update function of V-shaped BPSO algorithms, which leads to
abnormal and chaotic particle behavior. This not only makes the algorithms
difficult to converge but also causes them to repeatedly search the same
regions of the space. Traditionally, these algorithms have therefore had to
rely on a low inertia weight w in later stages to force convergence, which
also makes them quickly lose their search ability and leaves them prone to
getting trapped in local optima. This article proposes a correction method for
the velocity legacy term of all V-shaped BPSOs. Experiments on 0/1 knapsack
problems show that the correction significantly improves accuracy and
efficiency for all four commonly used V-shaped BPSOs, making it a significant
breakthrough in the field of swarm intelligence.
| [
{
"created": "Thu, 25 Jul 2024 09:18:32 GMT",
"version": "v1"
}
] | 2024-07-26 | [
[
"Zhao",
"Qing",
""
],
[
"Zhang",
"Chengkui",
""
],
[
"Li",
"Hao",
""
],
[
"Ke",
"Ting",
""
]
] | The binary particle swarm optimization (BPSO) algorithm is a swarm intelligence optimization algorithm characterized by good optimization performance, high efficiency, and ease of implementation. In recent years, it has been used to optimize a variety of machine learning and deep learning models, such as CNNs, LSTMs, and SVMs. However, it easily falls into local optima owing to its lack of exploitation ability. In this article, unlike previous studies, we find that the cause of this poor performance is an error in the velocity update function of V-shaped BPSO algorithms, which leads to abnormal and chaotic particle behavior. This not only makes the algorithms difficult to converge but also causes them to repeatedly search the same regions of the space. Traditionally, these algorithms have therefore had to rely on a low inertia weight w in later stages to force convergence, which also makes them quickly lose their search ability and leaves them prone to getting trapped in local optima. This article proposes a correction method for the velocity legacy term of all V-shaped BPSOs. Experiments on 0/1 knapsack problems show that the correction significantly improves accuracy and efficiency for all four commonly used V-shaped BPSOs, making it a significant breakthrough in the field of swarm intelligence. |
1905.09700 | Chen Tessler | Chen Tessler, Tom Zahavy, Deborah Cohen, Daniel J. Mankowitz and Shie
Mannor | Action Assembly: Sparse Imitation Learning for Text Based Games with
Combinatorial Action Spaces | Under review at IJCAI 2020 | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a computationally efficient algorithm that combines compressed
sensing with imitation learning to solve text-based games with combinatorial
action spaces. Specifically, we introduce a new compressed sensing algorithm,
named IK-OMP, which can be seen as an extension to the Orthogonal Matching
Pursuit (OMP). We incorporate IK-OMP into a supervised imitation learning
setting and show that the combined approach (Sparse Imitation Learning,
Sparse-IL) solves the entire text-based game of Zork1 with an action space of
approximately 10 million actions given both perfect and noisy demonstrations.
| [
{
"created": "Thu, 23 May 2019 15:06:55 GMT",
"version": "v1"
},
{
"created": "Tue, 10 Dec 2019 09:13:40 GMT",
"version": "v2"
},
{
"created": "Sun, 9 Feb 2020 09:58:08 GMT",
"version": "v3"
}
] | 2020-02-11 | [
[
"Tessler",
"Chen",
""
],
[
"Zahavy",
"Tom",
""
],
[
"Cohen",
"Deborah",
""
],
[
"Mankowitz",
"Daniel J.",
""
],
[
"Mannor",
"Shie",
""
]
] | We propose a computationally efficient algorithm that combines compressed sensing with imitation learning to solve text-based games with combinatorial action spaces. Specifically, we introduce a new compressed sensing algorithm, named IK-OMP, which can be seen as an extension to the Orthogonal Matching Pursuit (OMP). We incorporate IK-OMP into a supervised imitation learning setting and show that the combined approach (Sparse Imitation Learning, Sparse-IL) solves the entire text-based game of Zork1 with an action space of approximately 10 million actions given both perfect and noisy demonstrations. |
2009.00513 | Allen Riddell | Allen Riddell and Troy J. Bassett | What Library Digitization Leaves Out: Predicting the Availability of
Digital Surrogates of English Novels | null | portal: Libraries and the Academy, 21(4), 885-900 (2021) | 10.1353/pla.2021.0045 | null | cs.DL | http://creativecommons.org/publicdomain/zero/1.0/ | Library digitization has made more than a hundred thousand 19th-century
English-language books available to the public. Do the books which have been
digitized reflect the population of published books? An affirmative answer
would allow book and literary historians to use holdings of major digital
libraries as proxies for the population of published works, sparing them the
labor of collecting a representative sample. We address this question by taking
advantage of exhaustive bibliographies of novels published for the first time
in the British Isles in 1836 and 1838, identifying which of these novels have
at least one digital surrogate in the Internet Archive, HathiTrust, Google
Books, and the British Library. We find that digital surrogate availability is
not random. Certain kinds of novels, notably novels written by men and novels
published in multivolume format, have digital surrogates available at
distinctly higher rates than other kinds of novels. As the processes leading to
this outcome are unlikely to be isolated to the novel and the late 1830s, these
findings suggest that similar patterns will likely be observed during adjacent
decades and in other genres of publishing (e.g., non-fiction).
| [
{
"created": "Tue, 1 Sep 2020 15:20:15 GMT",
"version": "v1"
}
] | 2021-11-15 | [
[
"Riddell",
"Allen",
""
],
[
"Bassett",
"Troy J.",
""
]
] | Library digitization has made more than a hundred thousand 19th-century English-language books available to the public. Do the books which have been digitized reflect the population of published books? An affirmative answer would allow book and literary historians to use holdings of major digital libraries as proxies for the population of published works, sparing them the labor of collecting a representative sample. We address this question by taking advantage of exhaustive bibliographies of novels published for the first time in the British Isles in 1836 and 1838, identifying which of these novels have at least one digital surrogate in the Internet Archive, HathiTrust, Google Books, and the British Library. We find that digital surrogate availability is not random. Certain kinds of novels, notably novels written by men and novels published in multivolume format, have digital surrogates available at distinctly higher rates than other kinds of novels. As the processes leading to this outcome are unlikely to be isolated to the novel and the late 1830s, these findings suggest that similar patterns will likely be observed during adjacent decades and in other genres of publishing (e.g., non-fiction). |
2309.00236 | Luke Bailey | Luke Bailey, Euan Ong, Stuart Russell, Scott Emmons | Image Hijacks: Adversarial Images can Control Generative Models at
Runtime | Project page at https://image-hijacks.github.io | null | null | null | cs.LG cs.CL cs.CR | http://creativecommons.org/licenses/by/4.0/ | Are foundation models secure against malicious actors? In this work, we focus
on the image input to a vision-language model (VLM). We discover image hijacks,
adversarial images that control the behaviour of VLMs at inference time, and
introduce the general Behaviour Matching algorithm for training image hijacks.
From this, we derive the Prompt Matching method, allowing us to train hijacks
matching the behaviour of an arbitrary user-defined text prompt (e.g. 'the
Eiffel Tower is now located in Rome') using a generic, off-the-shelf dataset
unrelated to our choice of prompt. We use Behaviour Matching to craft hijacks
for four types of attack, forcing VLMs to generate outputs of the adversary's
choice, leak information from their context window, override their safety
training, and believe false statements. We study these attacks against LLaVA, a
state-of-the-art VLM based on CLIP and LLaMA-2, and find that all attack types
achieve a success rate of over 80%. Moreover, our attacks are automated and
require only small image perturbations.
| [
{
"created": "Fri, 1 Sep 2023 03:53:40 GMT",
"version": "v1"
},
{
"created": "Mon, 18 Sep 2023 17:59:23 GMT",
"version": "v2"
},
{
"created": "Mon, 22 Apr 2024 20:18:47 GMT",
"version": "v3"
}
] | 2024-04-24 | [
[
"Bailey",
"Luke",
""
],
[
"Ong",
"Euan",
""
],
[
"Russell",
"Stuart",
""
],
[
"Emmons",
"Scott",
""
]
] | Are foundation models secure against malicious actors? In this work, we focus on the image input to a vision-language model (VLM). We discover image hijacks, adversarial images that control the behaviour of VLMs at inference time, and introduce the general Behaviour Matching algorithm for training image hijacks. From this, we derive the Prompt Matching method, allowing us to train hijacks matching the behaviour of an arbitrary user-defined text prompt (e.g. 'the Eiffel Tower is now located in Rome') using a generic, off-the-shelf dataset unrelated to our choice of prompt. We use Behaviour Matching to craft hijacks for four types of attack, forcing VLMs to generate outputs of the adversary's choice, leak information from their context window, override their safety training, and believe false statements. We study these attacks against LLaVA, a state-of-the-art VLM based on CLIP and LLaMA-2, and find that all attack types achieve a success rate of over 80%. Moreover, our attacks are automated and require only small image perturbations. |
1805.02102 | Maria Luisa Damiani | Maria Luisa Damiani, Fatima Hachem, Issa Hamza, Nathan Ranc, Paul
Moorcroft, Francesca Cagnacci | Cluster-based trajectory segmentation with local noise | 41 pages, Data Mining and Knowledge Discovery (2018) | Data Mining and Knowledge Discovery, 2018, Vol 32, Issue 4,
1017-1055 | 10.1007/s10618-018-0561-2 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a framework for the partitioning of a spatial trajectory into a
sequence of segments based on spatial density and temporal criteria. The result
is a set of temporally separated clusters interleaved by sub-sequences of
unclustered points. A major novelty is the proposal of an outlier or noise
model based on the distinction between intra-cluster (local noise) and
inter-cluster noise (transition): the local noise models the temporary absence
from a residence, while the transition models the definitive departure towards the next
residence. We analyze in detail the properties of the model and present a
comprehensive solution for the extraction of temporally ordered clusters. The
effectiveness of the solution is evaluated first qualitatively and next
quantitatively by contrasting the segmentation with ground truth. The ground
truth consists of a set of trajectories of labeled points simulating animal
movement. Moreover, we show that the approach can streamline the discovery of
additional derived patterns, by presenting a novel technique for the analysis
of periodic movement. From a methodological perspective, a valuable aspect of
this research is that it combines the theoretical investigation with the
application and external validation of the segmentation framework. This paves
the way to an effective deployment of the solution in broad and challenging
fields such as e-science.
| [
{
"created": "Sat, 5 May 2018 18:46:46 GMT",
"version": "v1"
}
] | 2018-06-19 | [
[
"Damiani",
"Maria Luisa",
""
],
[
"Hachem",
"Fatima",
""
],
[
"Hamza",
"Issa",
""
],
[
"Ranc",
"Nathan",
""
],
[
"Moorcroft",
"Paul",
""
],
[
"Cagnacci",
"Francesca",
""
]
] | We present a framework for the partitioning of a spatial trajectory into a sequence of segments based on spatial density and temporal criteria. The result is a set of temporally separated clusters interleaved by sub-sequences of unclustered points. A major novelty is the proposal of an outlier or noise model based on the distinction between intra-cluster (local noise) and inter-cluster noise (transition): the local noise models the temporary absence from a residence, while the transition models the definitive departure towards the next residence. We analyze in detail the properties of the model and present a comprehensive solution for the extraction of temporally ordered clusters. The effectiveness of the solution is evaluated first qualitatively and next quantitatively by contrasting the segmentation with ground truth. The ground truth consists of a set of trajectories of labeled points simulating animal movement. Moreover, we show that the approach can streamline the discovery of additional derived patterns, by presenting a novel technique for the analysis of periodic movement. From a methodological perspective, a valuable aspect of this research is that it combines the theoretical investigation with the application and external validation of the segmentation framework. This paves the way to an effective deployment of the solution in broad and challenging fields such as e-science. |
1903.00029 | Jugal Garg | Jugal Garg and Setareh Taki | An Improved Approximation Algorithm for Maximin Shares | Fixed typos and added more details. A two-page abstract appeared in
ACM EC 2020 | null | null | null | cs.GT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Fair division is a fundamental problem in various multi-agent settings, where
the goal is to divide a set of resources among agents in a fair manner. We
study the case where m indivisible items need to be divided among n agents with
additive valuations using the popular fairness notion of maximin share (MMS).
An MMS allocation provides each agent a bundle worth at least her maximin
share. While it is known that such an allocation need not exist, a series of
work provided approximation algorithms for a 2/3-MMS allocation in which each
agent receives a bundle worth at least 2/3 times her maximin share. More
recently, Ghodsi et al. [EC'2018] showed the existence of a 3/4-MMS allocation
and a PTAS to find a (3/4-\epsilon)-MMS allocation for an \epsilon > 0. Most of
the previous works utilize intricate algorithms and require agents' approximate
MMS values, which are computationally expensive to obtain.
In this paper, we develop a new approach that gives a simple algorithm for
showing the existence of a 3/4-MMS allocation. Furthermore, our approach is
powerful enough to be easily extended in two directions: First, we get a
strongly polynomial-time algorithm to find a 3/4-MMS allocation, where we do
not need to approximate the MMS values at all. Second, we show that there
always exists a (3/4 + 1/(12n))-MMS allocation, improving the best previous
factor. This improves the approximation guarantee, most notably for small n. We
note that 3/4 was the best factor known for n > 4.
| [
{
"created": "Thu, 28 Feb 2019 19:08:21 GMT",
"version": "v1"
},
{
"created": "Sat, 15 Feb 2020 17:24:38 GMT",
"version": "v2"
},
{
"created": "Mon, 5 Apr 2021 20:32:03 GMT",
"version": "v3"
}
] | 2021-04-07 | [
[
"Garg",
"Jugal",
""
],
[
"Taki",
"Setareh",
""
]
] | Fair division is a fundamental problem in various multi-agent settings, where the goal is to divide a set of resources among agents in a fair manner. We study the case where m indivisible items need to be divided among n agents with additive valuations using the popular fairness notion of maximin share (MMS). An MMS allocation provides each agent a bundle worth at least her maximin share. While it is known that such an allocation need not exist, a series of work provided approximation algorithms for a 2/3-MMS allocation in which each agent receives a bundle worth at least 2/3 times her maximin share. More recently, Ghodsi et al. [EC'2018] showed the existence of a 3/4-MMS allocation and a PTAS to find a (3/4-\epsilon)-MMS allocation for an \epsilon > 0. Most of the previous works utilize intricate algorithms and require agents' approximate MMS values, which are computationally expensive to obtain. In this paper, we develop a new approach that gives a simple algorithm for showing the existence of a 3/4-MMS allocation. Furthermore, our approach is powerful enough to be easily extended in two directions: First, we get a strongly polynomial-time algorithm to find a 3/4-MMS allocation, where we do not need to approximate the MMS values at all. Second, we show that there always exists a (3/4 + 1/(12n))-MMS allocation, improving the best previous factor. This improves the approximation guarantee, most notably for small n. We note that 3/4 was the best factor known for n> 4. |
2004.09759 | Akhila Sri Manasa Venigalla | Akhila Sri Manasa Venigalla, Dheeraj Vagavolu and Sridhar Chimalakonda | SurviveCovid-19 -- An Educational Game to Facilitate Habituation of
Social Distancing and Other Health Measures for Covid-19 Pandemic | 17 pages, 9 figures and 2 tables | null | null | null | cs.HC cs.CY | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Covid-19 has been causing severe loss to the human race. Considering the mode
of spread and severity, it is essential to make it a habit to follow various
safety precautions such as using sanitizers and masks and maintaining social
distancing to prevent the spread of Covid-19. Individuals are widely educated
about the safety measures against the disease through various modes such as
announcements through online or physical awareness campaigns, advertisements in
the media and so on. The younger generations today spend considerably more time
on mobile phones and games. However, there are very few applications or games
aimed at helping people practice safety measures against a pandemic, and fewer
still for Covid-19 specifically. Hence, we propose a 2D survival-based game,
SurviveCovid-19, aimed to educate people about safety precautions to be taken
for Covid-19 outside their homes by incorporating social distancing and usage
of masks and sanitizers in the game. SurviveCovid-19 has been designed as an
Android-based mobile game, along with a desktop (browser) version, and has been
evaluated through a remote quantitative user survey, with 30 volunteers using
the questionnaire based on the MEEGA+ model. The survey results are promising,
with all the survey questions having a mean value greater than 3.5. The game's
quality factor was 69.3, indicating that the game could be classified as
excellent quality, according to the MEEGA+ model.
| [
{
"created": "Tue, 21 Apr 2020 05:24:17 GMT",
"version": "v1"
},
{
"created": "Mon, 3 May 2021 17:47:52 GMT",
"version": "v2"
}
] | 2021-05-04 | [
[
"Venigalla",
"Akhila Sri Manasa",
""
],
[
"Vagavolu",
"Dheeraj",
""
],
[
"Chimalakonda",
"Sridhar",
""
]
] | Covid-19 has been causing severe loss to the human race. Considering the mode of spread and severity, it is essential to make it a habit to follow various safety precautions such as using sanitizers and masks and maintaining social distancing to prevent the spread of Covid-19. Individuals are widely educated about the safety measures against the disease through various modes such as announcements through online or physical awareness campaigns, advertisements in the media and so on. The younger generations today spend considerably more time on mobile phones and games. However, there are very few applications or games aimed to help in practicing safety measures against a pandemic, which is much lesser in the case of Covid-19. Hence, we propose a 2D survival-based game, SurviveCovid-19, aimed to educate people about safety precautions to be taken for Covid-19 outside their homes by incorporating social distancing and usage of masks and sanitizers in the game. SurviveCovid-19 has been designed as an Android-based mobile game, along with a desktop (browser) version, and has been evaluated through a remote quantitative user survey, with 30 volunteers using the questionnaire based on the MEEGA+ model. The survey results are promising, with all the survey questions having a mean value greater than 3.5. The game's quality factor was 69.3, indicating that the game could be classified as excellent quality, according to the MEEGA+ model. |
2111.14556 | Xuran Pan | Xuran Pan, Chunjiang Ge, Rui Lu, Shiji Song, Guanfu Chen, Zeyi Huang,
Gao Huang | On the Integration of Self-Attention and Convolution | Accepted to CVPR2022 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Convolution and self-attention are two powerful techniques for representation
learning, and they are usually considered as two peer approaches that are
distinct from each other. In this paper, we show that there exists a strong
underlying relation between them, in the sense that the bulk of computations of
these two paradigms are in fact done with the same operation. Specifically, we
first show that a traditional convolution with kernel size k x k can be
decomposed into k^2 individual 1x1 convolutions, followed by shift and
summation operations. Then, we interpret the projections of queries, keys, and
values in self-attention module as multiple 1x1 convolutions, followed by the
computation of attention weights and aggregation of the values. Therefore, the
first stage of both modules comprises the same operation. More
importantly, the first stage accounts for the dominant computational complexity
(quadratic in the channel size) compared to the second stage. This observation
naturally leads to an elegant integration of these two seemingly distinct
paradigms, i.e., a mixed model that enjoys the benefit of both self-Attention
and Convolution (ACmix), while having minimum computational overhead compared
to the pure convolution or self-attention counterpart. Extensive experiments
show that our model achieves consistently improved results over competitive
baselines on image recognition and downstream tasks. Code and pre-trained
models will be released at https://github.com/LeapLabTHU/ACmix and
https://gitee.com/mindspore/models.
| [
{
"created": "Mon, 29 Nov 2021 14:37:05 GMT",
"version": "v1"
},
{
"created": "Mon, 14 Mar 2022 07:01:14 GMT",
"version": "v2"
}
] | 2022-03-15 | [
[
"Pan",
"Xuran",
""
],
[
"Ge",
"Chunjiang",
""
],
[
"Lu",
"Rui",
""
],
[
"Song",
"Shiji",
""
],
[
"Chen",
"Guanfu",
""
],
[
"Huang",
"Zeyi",
""
],
[
"Huang",
"Gao",
""
]
] | Convolution and self-attention are two powerful techniques for representation learning, and they are usually considered as two peer approaches that are distinct from each other. In this paper, we show that there exists a strong underlying relation between them, in the sense that the bulk of computations of these two paradigms are in fact done with the same operation. Specifically, we first show that a traditional convolution with kernel size k x k can be decomposed into k^2 individual 1x1 convolutions, followed by shift and summation operations. Then, we interpret the projections of queries, keys, and values in self-attention module as multiple 1x1 convolutions, followed by the computation of attention weights and aggregation of the values. Therefore, the first stage of both two modules comprises the similar operation. More importantly, the first stage contributes a dominant computation complexity (square of the channel size) comparing to the second stage. This observation naturally leads to an elegant integration of these two seemingly distinct paradigms, i.e., a mixed model that enjoys the benefit of both self-Attention and Convolution (ACmix), while having minimum computational overhead compared to the pure convolution or self-attention counterpart. Extensive experiments show that our model achieves consistently improved results over competitive baselines on image recognition and downstream tasks. Code and pre-trained models will be released at https://github.com/LeapLabTHU/ACmix and https://gitee.com/mindspore/models. |
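The decomposition this abstract describes (a k x k convolution as k^2 individual 1x1 convolutions followed by shift and summation) is easy to verify numerically. The sketch below is our own NumPy illustration of that identity, not the authors' ACmix code:

```python
import numpy as np

def conv_as_shifted_1x1(x, w):
    """k x k convolution (stride 1, zero padding) computed as k^2
    1x1 convolutions of shifted feature maps, then summed.
    x: (C, H, W) input, w: (O, C, k, k) kernel."""
    O, C, k, _ = w.shape
    pad = k // 2
    xp = np.pad(x, ((0, 0), (pad, pad), (pad, pad)))
    H, W = x.shape[1:]
    out = np.zeros((O, H, W))
    for i in range(k):
        for j in range(k):
            # w[:, :, i, j] is a 1x1 conv (a C -> O channel mixing);
            # apply it to the input shifted by (i - pad, j - pad).
            out += np.einsum('oc,chw->ohw', w[:, :, i, j],
                             xp[:, i:i + H, j:j + W])
    return out
```

Each (i, j) slice of the kernel acts as an independent 1x1 convolution, which is exactly the stage the abstract says convolution shares with the query/key/value projections of self-attention.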
2012.09486 | Kevin Denamganai | Kevin Denamgana\"i and James Alfred Walker | ReferentialGym: A Nomenclature and Framework for Language Emergence &
Grounding in (Visual) Referential Games | Accepted at 4th NeurIPS Workshop on Emergent Communication (EmeCom @
NeurIPS 2020) | null | null | null | cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Natural languages are powerful tools wielded by human beings to communicate
information and co-operate towards common goals. Their value lies in key
properties such as compositionality, hierarchy and recurrent syntax, whose
emergence computational linguists have been studying in artificial
languages induced by language games. Only relatively recently, the AI community
has started to investigate language emergence and grounding, working towards
better human-machine interfaces, for instance interactive/conversational AI
assistants that are able to relate their vision to the ongoing conversation.
This paper provides two contributions to this research field. Firstly, a
nomenclature is proposed to understand the main initiatives in studying
language emergence and grounding, accounting for the variations in assumptions
and constraints. Secondly, a PyTorch based deep learning framework is
introduced, entitled ReferentialGym, which is dedicated to furthering the
exploration of language emergence and grounding. By providing baseline
implementations of major algorithms and metrics, in addition to many different
features and approaches, ReferentialGym attempts to ease the entry barrier to
the field and provide the community with common implementations.
| [
{
"created": "Thu, 17 Dec 2020 10:22:15 GMT",
"version": "v1"
}
] | 2020-12-18 | [
[
"Denamganaï",
"Kevin",
""
],
[
"Walker",
"James Alfred",
""
]
] | Natural languages are powerful tools wielded by human beings to communicate information and co-operate towards common goals. Their values lie in some main properties like compositionality, hierarchy and recurrent syntax, which computational linguists have been researching the emergence of in artificial languages induced by language games. Only relatively recently, the AI community has started to investigate language emergence and grounding working towards better human-machine interfaces. For instance, interactive/conversational AI assistants that are able to relate their vision to the ongoing conversation. This paper provides two contributions to this research field. Firstly, a nomenclature is proposed to understand the main initiatives in studying language emergence and grounding, accounting for the variations in assumptions and constraints. Secondly, a PyTorch based deep learning framework is introduced, entitled ReferentialGym, which is dedicated to furthering the exploration of language emergence and grounding. By providing baseline implementations of major algorithms and metrics, in addition to many different features and approaches, ReferentialGym attempts to ease the entry barrier to the field and provide the community with common implementations. |
2108.05145 | Zain Alabedeen Ali | Zain Alabedeen Ali and Konstantin Yakovlev | Prioritized SIPP for Multi-Agent Path Finding With Kinematic Constraints | 13 pages, 3 figures, ICR 2021 | null | null | null | cs.RO cs.AI cs.MA | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multi-Agent Path Finding (MAPF) is a long-standing problem in Robotics and
Artificial Intelligence in which one needs to find a set of collision-free
paths for a group of mobile agents (robots) operating in the shared workspace.
Due to its importance, the problem is well-studied and multiple optimal and
approximate algorithms are known. However, many of them abstract away from the
kinematic constraints and assume that the agents can accelerate/decelerate
instantaneously. This complicates the application of the algorithms on the real
robots. In this paper, we present a method that mitigates this issue to a
certain extent. The suggested solver is essentially a prioritized planner
based on the well-known Safe Interval Path Planning (SIPP) algorithm. Within
SIPP, we explicitly reason about speed and acceleration; thus, the constructed
plans directly take the kinematic constraints of the agents into account.
We suggest a range of heuristic functions for that setting and conduct a
thorough empirical evaluation of the suggested algorithm.
| [
{
"created": "Wed, 11 Aug 2021 10:42:11 GMT",
"version": "v1"
}
] | 2021-08-12 | [
[
"Ali",
"Zain Alabedeen",
""
],
[
"Yakovlev",
"Konstantin",
""
]
] | Multi-Agent Path Finding (MAPF) is a long-standing problem in Robotics and Artificial Intelligence in which one needs to find a set of collision-free paths for a group of mobile agents (robots) operating in the shared workspace. Due to its importance, the problem is well-studied and multiple optimal and approximate algorithms are known. However, many of them abstract away from the kinematic constraints and assume that the agents can accelerate/decelerate instantaneously. This complicates the application of the algorithms on the real robots. In this paper, we present a method that mitigates this issue to a certain extent. The suggested solver is essentially, a prioritized planner based on the well-known Safe Interval Path Planning (SIPP) algorithm. Within SIPP we explicitly reason about the speed and the acceleration thus the constructed plans directly take kinematic constraints of agents into account. We suggest a range of heuristic functions for that setting and conduct a thorough empirical evaluation of the suggested algorithm. |
1710.04731 | David Fridovich-Keil | David Fridovich-Keil, Sylvia L. Herbert, Jaime F. Fisac, Sampada
Deglurkar, Claire J. Tomlin | Planning, Fast and Slow: A Framework for Adaptive Real-Time Safe
Trajectory Planning | ICRA, International Conference on Robotics and Automation, ICRA 2018,
8 pages, 9 figures | null | null | null | cs.SY cs.GT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Motion planning is an extremely well-studied problem in the robotics
community, yet existing work largely falls into one of two categories:
computationally efficient but with few if any safety guarantees, or able to
give stronger guarantees but at high computational cost. This work builds on a
recent development called FaSTrack in which a slow offline computation provides
a modular safety guarantee for a faster online planner. We introduce the notion
of "meta-planning" in which a refined offline computation enables safe
switching between different online planners. This provides autonomous systems
with the ability to adapt motion plans to a priori unknown environments in
real-time as sensor measurements detect new obstacles, and the flexibility to
maneuver differently in the presence of obstacles than they would in free
space, all while maintaining a strict safety guarantee. We demonstrate the
meta-planning algorithm both in simulation and in hardware using a small
Crazyflie 2.0 quadrotor.
| [
{
"created": "Thu, 12 Oct 2017 21:45:24 GMT",
"version": "v1"
},
{
"created": "Tue, 6 Mar 2018 22:05:06 GMT",
"version": "v2"
}
] | 2018-03-08 | [
[
"Fridovich-Keil",
"David",
""
],
[
"Herbert",
"Sylvia L.",
""
],
[
"Fisac",
"Jaime F.",
""
],
[
"Deglurkar",
"Sampada",
""
],
[
"Tomlin",
"Claire J.",
""
]
] | Motion planning is an extremely well-studied problem in the robotics community, yet existing work largely falls into one of two categories: computationally efficient but with few if any safety guarantees, or able to give stronger guarantees but at high computational cost. This work builds on a recent development called FaSTrack in which a slow offline computation provides a modular safety guarantee for a faster online planner. We introduce the notion of "meta-planning" in which a refined offline computation enables safe switching between different online planners. This provides autonomous systems with the ability to adapt motion plans to a priori unknown environments in real-time as sensor measurements detect new obstacles, and the flexibility to maneuver differently in the presence of obstacles than they would in free space, all while maintaining a strict safety guarantee. We demonstrate the meta-planning algorithm both in simulation and in hardware using a small Crazyflie 2.0 quadrotor. |
2304.05615 | Qiang Liu | Qiang Liu, Zhaocheng Liu, Zhenxi Zhu, Shu Wu, Liang Wang | Deep Stable Multi-Interest Learning for Out-of-distribution Sequential
Recommendation | null | null | null | null | cs.IR cs.AI | http://creativecommons.org/licenses/by/4.0/ | Recently, multi-interest models, which extract interests of a user as
multiple representation vectors, have shown promising performances for
sequential recommendation. However, none of the existing multi-interest
recommendation models consider the Out-Of-Distribution (OOD) generalization
problem, in which the interest distribution may change. Since the multiple
interests of a user are usually highly correlated, the model has a chance to
learn spurious correlations between noisy interests and target items. Once the
data distribution changes, the correlations among interests may also change,
and the spurious correlations will mislead the model into making wrong
predictions. To tackle the above OOD generalization problem, we propose a novel
multi-interest network, named DEep Stable Multi-Interest Learning (DESMIL),
which attempts to de-correlate the extracted interests in the model, and thus
spurious correlations can be eliminated. DESMIL applies an attentive module to
extract multiple interests, and then selects the most important one for making
final predictions. Meanwhile, DESMIL incorporates a weighted correlation
estimation loss based on Hilbert-Schmidt Independence Criterion (HSIC), with
which training samples are weighted, to minimize the correlations among
extracted interests. Extensive experiments have been conducted under both OOD
and random settings, and up to 36.8% and 21.7% relative improvements are
achieved respectively.
| [
{
"created": "Wed, 12 Apr 2023 05:13:54 GMT",
"version": "v1"
}
] | 2023-04-13 | [
[
"Liu",
"Qiang",
""
],
[
"Liu",
"Zhaocheng",
""
],
[
"Zhu",
"Zhenxi",
""
],
[
"Wu",
"Shu",
""
],
[
"Wang",
"Liang",
""
]
] | Recently, multi-interest models, which extract interests of a user as multiple representation vectors, have shown promising performances for sequential recommendation. However, none of existing multi-interest recommendation models consider the Out-Of-Distribution (OOD) generalization problem, in which interest distribution may change. Considering multiple interests of a user are usually highly correlated, the model has chance to learn spurious correlations between noisy interests and target items. Once the data distribution changes, the correlations among interests may also change, and the spurious correlations will mislead the model to make wrong predictions. To tackle with above OOD generalization problem, we propose a novel multi-interest network, named DEep Stable Multi-Interest Learning (DESMIL), which attempts to de-correlate the extracted interests in the model, and thus spurious correlations can be eliminated. DESMIL applies an attentive module to extract multiple interests, and then selects the most important one for making final predictions. Meanwhile, DESMIL incorporates a weighted correlation estimation loss based on Hilbert-Schmidt Independence Criterion (HSIC), with which training samples are weighted, to minimize the correlations among extracted interests. Extensive experiments have been conducted under both OOD and random settings, and up to 36.8% and 21.7% relative improvements are achieved respectively. |
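The HSIC criterion this abstract relies on has a simple biased empirical estimator, tr(KHLH)/(n-1)^2 with centred Gram matrices. A minimal sketch with Gaussian kernels (our own illustration; DESMIL's sample weighting and hyperparameters are not specified here):

```python
import numpy as np

def hsic(x, y, sigma=1.0):
    """Biased empirical HSIC between 1-D samples x and y, using Gaussian
    kernels. Larger values indicate stronger statistical dependence."""
    n = len(x)
    def gram(z):
        diff = z[:, None] - z[None, :]
        return np.exp(-diff ** 2 / (2 * sigma ** 2))
    K, L = gram(x), gram(y)
    H = np.eye(n) - np.ones((n, n)) / n   # centring matrix
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2
```

Driving such a dependence measure toward zero between extracted interest representations is one way the de-correlation objective the abstract describes can be realized.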
1909.12992 | Swapnil Mhaske | Swapnil Mhaske, Predrag Spasojevic, Ahsan Aziz | A Blockage Model for the Open Area Mm-wave Device-to-Device Environment | null | null | null | null | cs.IT eess.SP math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A significant portion of the 5th generation of wireless networks will operate
in the mm-wave bands. One of the several challenges associated with mm-wave
propagation is to overcome shadowing due to signal blockage caused by
environmental objects. Particularly susceptible are nodes in a device-to-device
network that typically operate at low power and in a blockage prone environment
such as crowded open areas. In this work, we provide insight into the effect
of blockages on the signal quality for an open area device-to-device scenario.
We propose a blockage model based on the homogeneous Poisson Point Process. The
model provides the average signal attenuation as a soft metric that quantifies
the extent of blockage. This not only indicates whether the signal is blocked
but also measures how much the signal is attenuated due to one or more
blockers. The analytical results are confirmed with the help of Monte Carlo
simulations for real-world blocker placement in the environment.
| [
{
"created": "Sat, 28 Sep 2019 00:14:46 GMT",
"version": "v1"
}
] | 2019-10-01 | [
[
"Mhaske",
"Swapnil",
""
],
[
"Spasojevic",
"Predrag",
""
],
[
"Aziz",
"Ahsan",
""
]
] | A significant portion of the 5th generation of wireless networks will operate in the mm-wave bands. One of the several challenges associated with mm-wave propagation is to overcome shadowing due to signal blockage caused by environmental objects. Particularly susceptible are nodes in a device-to-device network that typically operate at low power and in a blockage prone environment such as crowded open areas. In this work, we provide an insight into the effect of blockages on the signal quality for an open area device-to-device scenario. We propose a blockage model based on the homogeneous Poisson Point Process. The model provides the average signal attenuation as a soft metric that quantifies the extent of blockage. This not only indicates whether the signal is blocked but also measures how much the signal is attenuated due to one or more blockers. The analytical results are confirmed with the help of Monte Carlo simulations for real-world blocker placement in the environment. |
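As a hedged illustration of the kind of model this abstract describes, the sketch below draws blockers from a homogeneous Poisson Point Process and accumulates a fixed per-blocker loss for every blocker within a given radius of the line-of-sight segment; the geometry and loss values are our own assumptions, not the paper's parameters:

```python
import numpy as np

def mean_blockage_attenuation(lam, link_len, radius, loss_db,
                              n_trials=2000, region=20.0):
    """Monte Carlo mean attenuation (dB) on a horizontal link of length
    `link_len` centred in a `region` x `region` area. Blockers follow a
    homogeneous PPP with density `lam`; each blocker closer than
    `radius` to the link adds `loss_db` of attenuation."""
    rng = np.random.default_rng(0)
    x0 = (region - link_len) / 2
    total = 0.0
    for _ in range(n_trials):
        n = rng.poisson(lam * region * region)
        pts = rng.uniform(0.0, region, size=(n, 2))
        blocking = ((np.abs(pts[:, 1] - region / 2) < radius)
                    & (pts[:, 0] > x0) & (pts[:, 0] < x0 + link_len))
        total += loss_db * blocking.sum()
    return total / n_trials
```

For this geometry the analytic mean is loss_db * lam * 2 * radius * link_len (the expected blocker count in the blocking strip), which the simulation should reproduce, mirroring the analytic-versus-Monte-Carlo validation the abstract reports.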
2102.03689 | Lei Yan | Lei Yan, Theodoros Stouraitis, and Sethu Vijayakumar | Decentralized Ability-Aware Adaptive Control for Multi-robot
Collaborative Manipulation | The article has been submitted to IEEE Robotics and Automation
Letters (RA-L) with ICRA 2021 conference option; the article has been
accepted for publication in RA-L | null | null | null | cs.RO cs.MA | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Multi-robot teams can achieve more dexterous, complex and heavier payload
tasks than a single robot, yet effective collaboration is required. Multi-robot
collaboration is extremely challenging due to the different kinematic and
dynamic capabilities of the robots, the limited communication between them,
and the uncertainty of the system parameters. In this paper, a Decentralized
Ability-Aware Adaptive Control is proposed to address these challenges based on
two key features. Firstly, the common manipulation task is represented by the
proposed nominal task ellipsoid, which is used to maximize each robot's force
capability online by optimizing its configuration. Secondly, a decentralized
adaptive controller is designed to be Lyapunov stable in spite of heterogeneous
actuation constraints of the robots and uncertain physical parameters of the
object and environment. In the proposed framework, decentralized coordination
and load distribution between the robots is achieved without communication,
while only the control deficiency is broadcast if any of the robots reaches its
force limits. In this case, the object reference trajectory is modified in a
decentralized manner to guarantee stable interaction. Finally, we perform
several numerical and physical simulations to analyse and verify the proposed
method with heterogeneous multi-robot teams in collaborative manipulation
tasks.
| [
{
"created": "Sun, 7 Feb 2021 00:04:39 GMT",
"version": "v1"
}
] | 2021-02-09 | [
[
"Yan",
"Lei",
""
],
[
"Stouraitis",
"Theodoros",
""
],
[
"Vijayakumar",
"Sethu",
""
]
] | Multi-robot teams can achieve more dexterous, complex and heavier payload tasks than a single robot, yet effective collaboration is required. Multi-robot collaboration is extremely challenging due to the different kinematic and dynamics capabilities of the robots, the limited communication between them, and the uncertainty of the system parameters. In this paper, a Decentralized Ability-Aware Adaptive Control is proposed to address these challenges based on two key features. Firstly, the common manipulation task is represented by the proposed nominal task ellipsoid, which is used to maximize each robot force capability online via optimizing its configuration. Secondly, a decentralized adaptive controller is designed to be Lyapunov stable in spite of heterogeneous actuation constraints of the robots and uncertain physical parameters of the object and environment. In the proposed framework, decentralized coordination and load distribution between the robots is achieved without communication, while only the control deficiency is broadcast if any of the robots reaches its force limits. In this case, the object reference trajectory is modified in a decentralized manner to guarantee stable interaction. Finally, we perform several numerical and physical simulations to analyse and verify the proposed method with heterogeneous multi-robot teams in collaborative manipulation tasks. |
2101.08248 | Sam Wiseman | Sam Wiseman, Arturs Backurs, Karl Stratos | Data-to-text Generation by Splicing Together Nearest Neighbors | EMNLP 2021; figures updated/improved | null | null | null | cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose to tackle data-to-text generation tasks by directly splicing
together retrieved segments of text from "neighbor" source-target pairs. Unlike
recent work that conditions on retrieved neighbors but generates text
token-by-token, left-to-right, we learn a policy that directly manipulates
segments of neighbor text, by inserting or replacing them in partially
constructed generations. Standard techniques for training such a policy require
an oracle derivation for each generation, and we prove that finding the
shortest such derivation can be reduced to parsing under a particular weighted
context-free grammar. We find that policies learned in this way perform on par
with strong baselines in terms of automatic and human evaluation, but allow for
more interpretable and controllable generation.
| [
{
"created": "Wed, 20 Jan 2021 18:43:11 GMT",
"version": "v1"
},
{
"created": "Fri, 29 Jan 2021 18:44:33 GMT",
"version": "v2"
},
{
"created": "Wed, 15 Sep 2021 15:46:16 GMT",
"version": "v3"
},
{
"created": "Thu, 28 Oct 2021 20:19:35 GMT",
"version": "v4"
}
] | 2021-11-01 | [
[
"Wiseman",
"Sam",
""
],
[
"Backurs",
"Arturs",
""
],
[
"Stratos",
"Karl",
""
]
] | We propose to tackle data-to-text generation tasks by directly splicing together retrieved segments of text from "neighbor" source-target pairs. Unlike recent work that conditions on retrieved neighbors but generates text token-by-token, left-to-right, we learn a policy that directly manipulates segments of neighbor text, by inserting or replacing them in partially constructed generations. Standard techniques for training such a policy require an oracle derivation for each generation, and we prove that finding the shortest such derivation can be reduced to parsing under a particular weighted context-free grammar. We find that policies learned in this way perform on par with strong baselines in terms of automatic and human evaluation, but allow for more interpretable and controllable generation. |
2407.16994 | Jake Watts | Jake R. Watts, Joel Sokol | A Voter-Based Stochastic Rejection-Method Framework for Asymptotically
Safe Language Model Outputs | 7 pages, 2 figures | null | null | null | cs.AI cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper proposes a new method for preventing unsafe or otherwise low
quality large language model (LLM) outputs, by leveraging the stochasticity of
LLMs. We propose a system whereby LLM checkers vote on the acceptability of a
generated output, regenerating it if a threshold of disapproval is reached,
until sufficient checkers approve. We further propose estimators for cost and
failure rate, and based on those estimators and experimental data tailored to
the application, we propose an algorithm that achieves a desired failure rate
at the least possible cost. We demonstrate that, under these models, failure
rate decreases exponentially as a function of cost when voter count and
threshold are chosen according to the algorithm, and that the models reasonably
estimate the actual performance of such a system in action, even with limited
data.
| [
{
"created": "Wed, 24 Jul 2024 04:27:55 GMT",
"version": "v1"
}
] | 2024-07-25 | [
[
"Watts",
"Jake R.",
""
],
[
"Sokol",
"Joel",
""
]
] | This paper proposes a new method for preventing unsafe or otherwise low quality large language model (LLM) outputs, by leveraging the stochasticity of LLMs. We propose a system whereby LLM checkers vote on the acceptability of a generated output, regenerating it if a threshold of disapproval is reached, until sufficient checkers approve. We further propose estimators for cost and failure rate, and based on those estimators and experimental data tailored to the application, we propose an algorithm that achieves a desired failure rate at the least possible cost. We demonstrate that, under these models, failure rate decreases exponentially as a function of cost when voter count and threshold are chosen according to the algorithm, and that the models reasonably estimate the actual performance of such a system in action, even with limited data. |
2311.18525 | Asaf Shabtai | Yizhak Vaisman, Gilad Katz, Yuval Elovici, Asaf Shabtai | Detecting Anomalous Network Communication Patterns Using Graph
Convolutional Networks | null | null | null | null | cs.CR cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | To protect an organization's endpoints from sophisticated cyberattacks,
advanced detection methods are required. In this research, we present
GCNetOmaly: a graph convolutional network (GCN)-based variational autoencoder
(VAE) anomaly detector trained on data that include connection events among
internal and external machines. As input, the proposed GCN-based VAE model
receives two matrices: (i) the normalized adjacency matrix, which represents
the connections among the machines, and (ii) the feature matrix, which includes
various features (demographic, statistical, process-related, and Node2vec
structural features) that are used to profile the individual nodes/machines.
After training the model on data collected for a predefined time window, the
model is applied on the same data; the reconstruction score obtained by the
model for a given machine then serves as the machine's anomaly score.
GCNetOmaly was evaluated on real, large-scale data logged by Carbon Black EDR
from a large financial organization's automated teller machines (ATMs) as well
as communication with Active Directory (AD) servers in two setups: unsupervised
and supervised. The results of our evaluation demonstrate GCNetOmaly's
effectiveness in detecting anomalous behavior of machines on unsupervised data.
| [
{
"created": "Thu, 30 Nov 2023 13:03:49 GMT",
"version": "v1"
}
] | 2023-12-01 | [
[
"Vaisman",
"Yizhak",
""
],
[
"Katz",
"Gilad",
""
],
[
"Elovici",
"Yuval",
""
],
[
"Shabtai",
"Asaf",
""
]
] | To protect an organization's endpoints from sophisticated cyberattacks, advanced detection methods are required. In this research, we present GCNetOmaly: a graph convolutional network (GCN)-based variational autoencoder (VAE) anomaly detector trained on data that include connection events among internal and external machines. As input, the proposed GCN-based VAE model receives two matrices: (i) the normalized adjacency matrix, which represents the connections among the machines, and (ii) the feature matrix, which includes various features (demographic, statistical, process-related, and Node2vec structural features) that are used to profile the individual nodes/machines. After training the model on data collected for a predefined time window, the model is applied on the same data; the reconstruction score obtained by the model for a given machine then serves as the machine's anomaly score. GCNetOmaly was evaluated on real, large-scale data logged by Carbon Black EDR from a large financial organization's automated teller machines (ATMs) as well as communication with Active Directory (AD) servers in two setups: unsupervised and supervised. The results of our evaluation demonstrate GCNetOmaly's effectiveness in detecting anomalous behavior of machines on unsupervised data. |
2111.02881 | Michael Winter | Michael Winter, Heiko Neumann, R\"udiger Pryss, Thomas Probst, and
Manfred Reichert | Defining Gaze Patterns for Process Model Literacy -- Exploring Visual
Routines in Process Models with Diverse Mappings | null | null | null | null | cs.HC cs.CY | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Process models depict crucial artifacts for organizations regarding
documentation, communication, and collaboration. The proper comprehension of
such models is essential for their effective application. An important question
in process model literacy is how the information presented
in process models is extracted and processed by the human visual system. For
such visuospatial tasks, the visual system deploys a set of elemental
operations, from whose compositions different visual routines are produced.
This paper provides insights from an exploratory eye tracking study, in which
visual routines during process model comprehension were contemplated. More
specifically, n = 29 participants were asked to comprehend n = 18 process
models expressed in the Business Process Model and Notation 2.0 reflecting
diverse mappings (i.e., straight, upward, downward) and complexity levels. The
performance measures indicated that even less complex process models pose a
challenge regarding their comprehension. The upward mapping confronted
participants' attention with more challenges, whereas the downward mapping was
comprehended more effectively. Based on recorded eye movements, three gaze
patterns applied during model comprehension were derived. Thereupon, we defined
a general model which identifies visual routines and corresponding elemental
operations during process model comprehension. Finally, implications for
practice as well as research and directions for future work are discussed in
this paper.
| [
{
"created": "Thu, 4 Nov 2021 14:13:48 GMT",
"version": "v1"
},
{
"created": "Tue, 30 Nov 2021 10:58:24 GMT",
"version": "v2"
}
] | 2021-12-01 | [
[
"Winter",
"Michael",
""
],
[
"Neumann",
"Heiko",
""
],
[
"Pryss",
"Rüdiger",
""
],
[
"Probst",
"Thomas",
""
],
[
"Reichert",
"Manfred",
""
]
] | Process models depict crucial artifacts for organizations regarding documentation, communication, and collaboration. The proper comprehension of such models is essential for their effective application. An important question in process model literacy is how the information presented in process models is extracted and processed by the human visual system. For such visuospatial tasks, the visual system deploys a set of elemental operations, from whose compositions different visual routines are produced. This paper provides insights from an exploratory eye tracking study, in which visual routines during process model comprehension were contemplated. More specifically, n = 29 participants were asked to comprehend n = 18 process models expressed in the Business Process Model and Notation 2.0 reflecting diverse mappings (i.e., straight, upward, downward) and complexity levels. The performance measures indicated that even less complex process models pose a challenge regarding their comprehension. The upward mapping confronted participants' attention with more challenges, whereas the downward mapping was comprehended more effectively. Based on recorded eye movements, three gaze patterns applied during model comprehension were derived. Thereupon, we defined a general model which identifies visual routines and corresponding elemental operations during process model comprehension. Finally, implications for practice as well as research and directions for future work are discussed in this paper. |
1311.4336 | Junming Huang | Junming Huang, Chao Li, Wen-Qiang Wang, Hua-Wei Shen, Guojie Li,
Xue-Qi Cheng | Temporal scaling in information propagation | 13 pages, 2 figures. published on Scientific Reports | Scientific Reports 4, 5334, (2014) | 10.1038/srep05334 | null | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | For the study of information propagation, one fundamental problem is
uncovering universal laws governing the dynamics of information propagation.
This problem, from the microscopic perspective, is formulated as estimating the
propagation probability that a piece of information propagates from one
individual to another. Such a propagation probability generally depends on two
major classes of factors: the intrinsic attractiveness of information and the
interactions between individuals. Despite the fact that the temporal effect of
attractiveness is widely studied, temporal laws underlying individual
interactions remain unclear, causing inaccurate prediction of information
propagation on evolving social networks. In this report, we empirically study
the dynamics of information propagation, using the dataset from a
population-scale social media website. We discover a temporal scaling in
information propagation: the probability a message propagates between two
individuals decays with the time elapsed since their latest
interaction, obeying a power-law rule. Leveraging the scaling law, we further
propose a temporal model to estimate future propagation probabilities between
individuals, reducing the error rate of information propagation prediction from
6.7% to 2.6% and improving viral marketing with 9.7% incremental customers.
| [
{
"created": "Mon, 18 Nov 2013 11:15:26 GMT",
"version": "v1"
},
{
"created": "Fri, 22 Nov 2013 02:15:14 GMT",
"version": "v2"
},
{
"created": "Wed, 18 Jun 2014 09:55:29 GMT",
"version": "v3"
}
] | 2014-06-19 | [
[
"Huang",
"Junming",
""
],
[
"Li",
"Chao",
""
],
[
"Wang",
"Wen-Qiang",
""
],
[
"Shen",
"Hua-Wei",
""
],
[
"Li",
"Guojie",
""
],
[
"Cheng",
"Xue-Qi",
""
]
] | For the study of information propagation, one fundamental problem is uncovering universal laws governing the dynamics of information propagation. This problem, from the microscopic perspective, is formulated as estimating the propagation probability that a piece of information propagates from one individual to another. Such a propagation probability generally depends on two major classes of factors: the intrinsic attractiveness of information and the interactions between individuals. Despite the fact that the temporal effect of attractiveness is widely studied, temporal laws underlying individual interactions remain unclear, causing inaccurate prediction of information propagation on evolving social networks. In this report, we empirically study the dynamics of information propagation, using the dataset from a population-scale social media website. We discover a temporal scaling in information propagation: the probability a message propagates between two individuals decays with the time elapsed since their latest interaction, obeying a power-law rule. Leveraging the scaling law, we further propose a temporal model to estimate future propagation probabilities between individuals, reducing the error rate of information propagation prediction from 6.7% to 2.6% and improving viral marketing with 9.7% incremental customers. |
1601.01770 | Albert Haque | Albert Haque | A MapReduce Approach to NoSQL RDF Databases | Undergraduate Honors Thesis, December 2013, The University of Texas
at Austin, Department of Computer Science. Report# HR-13-13 (honors theses) | null | null | HR-13-13 | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In recent years, the increased need to house and process large volumes of
data has prompted the development of distributed storage and querying systems. The
growth of machine-readable RDF triples has prompted both industry and academia
to develop new database systems, called NoSQL, with characteristics that differ
from classical databases. Many of these systems compromise ACID properties for
increased horizontal scalability and data availability. This thesis concerns
the development and evaluation of a NoSQL triplestore. Triplestores are
database management systems central to emerging technologies such as the
Semantic Web and linked data. The evaluation spans several benchmarks,
including the two most commonly used in triplestore evaluation, the Berlin
SPARQL Benchmark, and the DBpedia benchmark, a query workload that operates on an
RDF representation of Wikipedia. Results reveal that the join algorithm used by
the system plays a critical role in dictating query runtimes. Distributed graph
databases must carefully optimize queries before generating MapReduce query
plans as network traffic for large datasets can become prohibitive if the query
is executed naively.
| [
{
"created": "Fri, 8 Jan 2016 05:04:26 GMT",
"version": "v1"
}
] | 2016-01-11 | [
[
"Haque",
"Albert",
""
]
] | In recent years, the increased need to house and process large volumes of data has prompted the development of distributed storage and querying systems. The growth of machine-readable RDF triples has prompted both industry and academia to develop new database systems, called NoSQL, with characteristics that differ from classical databases. Many of these systems compromise ACID properties for increased horizontal scalability and data availability. This thesis concerns the development and evaluation of a NoSQL triplestore. Triplestores are database management systems central to emerging technologies such as the Semantic Web and linked data. The evaluation spans several benchmarks, including the two most commonly used in triplestore evaluation, the Berlin SPARQL Benchmark, and the DBpedia benchmark, a query workload that operates on an RDF representation of Wikipedia. Results reveal that the join algorithm used by the system plays a critical role in dictating query runtimes. Distributed graph databases must carefully optimize queries before generating MapReduce query plans as network traffic for large datasets can become prohibitive if the query is executed naively. |
2109.06601 | Yannic Maus | Keren Censor-Hillel, Yannic Maus, Shahar Romem-Peled, Tigran Tonoyan | Distributed Vertex Cover Reconfiguration | null | null | null | null | cs.DS cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Reconfiguration schedules, i.e., sequences that gradually transform one
solution of a problem to another while always maintaining feasibility, have
been extensively studied. Most research has dealt with the decision problem of
whether a reconfiguration schedule exists, and the complexity of finding one. A
prime example is the reconfiguration of vertex covers. We initiate the study of
batched vertex cover reconfiguration, which allows reconfiguring multiple
vertices concurrently while requiring that any adversarial reconfiguration
order within a batch maintains feasibility. The latter provides robustness,
e.g., if the simultaneous reconfiguration of a batch cannot be guaranteed. The
quality of a schedule is measured by the number of batches until all nodes are
reconfigured, and its cost, i.e., the maximum size of an intermediate vertex
cover.
To set a baseline for batch reconfiguration, we show that for graphs
belonging to one of the classes $\{\mathsf{cycles, trees, forests, chordal,
cactus, even\text{-}hole\text{-}free, claw\text{-}free}\}$, there are schedules
that use $O(\varepsilon^{-1})$ batches and incur only a $1+\varepsilon$
multiplicative increase in cost over the best sequential schedules. Our main
contribution is to compute such batch schedules in $O(\varepsilon^{-1}\log^*
n)$ distributed time, which we also show to be tight. Further, we show that
once we step out of these graph classes we face a very different situation.
There are graph classes on which no efficient distributed algorithm can obtain
the best (or almost best) existing schedule. Moreover, there are classes of
bounded degree graphs which do not admit any reconfiguration schedules without
incurring a large multiplicative increase in the cost at all.
| [
{
"created": "Tue, 14 Sep 2021 11:45:34 GMT",
"version": "v1"
}
] | 2021-09-15 | [
[
"Censor-Hillel",
"Keren",
""
],
[
"Maus",
"Yannic",
""
],
[
"Romem-Peled",
"Shahar",
""
],
[
"Tonoyan",
"Tigran",
""
]
] | Reconfiguration schedules, i.e., sequences that gradually transform one solution of a problem to another while always maintaining feasibility, have been extensively studied. Most research has dealt with the decision problem of whether a reconfiguration schedule exists, and the complexity of finding one. A prime example is the reconfiguration of vertex covers. We initiate the study of batched vertex cover reconfiguration, which allows reconfiguring multiple vertices concurrently while requiring that any adversarial reconfiguration order within a batch maintains feasibility. The latter provides robustness, e.g., if the simultaneous reconfiguration of a batch cannot be guaranteed. The quality of a schedule is measured by the number of batches until all nodes are reconfigured, and its cost, i.e., the maximum size of an intermediate vertex cover. To set a baseline for batch reconfiguration, we show that for graphs belonging to one of the classes $\{\mathsf{cycles, trees, forests, chordal, cactus, even\text{-}hole\text{-}free, claw\text{-}free}\}$, there are schedules that use $O(\varepsilon^{-1})$ batches and incur only a $1+\varepsilon$ multiplicative increase in cost over the best sequential schedules. Our main contribution is to compute such batch schedules in $O(\varepsilon^{-1}\log^* n)$ distributed time, which we also show to be tight. Further, we show that once we step out of these graph classes we face a very different situation. There are graph classes on which no efficient distributed algorithm can obtain the best (or almost best) existing schedule. Moreover, there are classes of bounded degree graphs which do not admit any reconfiguration schedules without incurring a large multiplicative increase in the cost at all. |
1810.00685 | Arnaud Martin | Kuang Zhou (NPU), Arnaud Martin (DRUID), Quan Pan (NPU) | A belief combination rule for a large number of sources | arXiv admin note: substantial text overlap with arXiv:1707.07999 | Journal of Advances in Information Fusion, 2018, 13 (2) | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The theory of belief functions is widely used for data from multiple sources.
Different evidence combination rules have been proposed in this framework
according to the properties of the sources to combine. However, most of these
combination rules are not efficient when there are a large number of sources.
This is due to either the complexity or the existence of an absorbing element
such as the total conflict mass function for the conjunctive-based rules when
applied on unreliable evidence. In this paper, based on the assumption that the
majority of sources are reliable, a combination rule for a large number of
sources is proposed using a simple idea: the more common ideas the sources
share, the more reliable these sources are supposed to be. This rule is
adaptable for aggregating a large number of sources which may not all be
reliable. It will keep the spirit of the conjunctive rule to reinforce the
belief on the focal elements with which the sources are in agreement. The mass
on the empty set will be kept as an indicator of the conflict. The proposed
rule, called LNS-CR (Conjunctive combination Rule for a Large Number of
Sources), is evaluated on synthetic mass functions. The experimental results
verify that the rule can be effectively used to combine a large number of mass
functions and to elicit the major opinion.
| [
{
"created": "Fri, 28 Sep 2018 08:24:26 GMT",
"version": "v1"
}
] | 2018-10-02 | [
[
"Zhou",
"Kuang",
"",
"NPU"
],
[
"Martin",
"Arnaud",
"",
"DRUID"
],
[
"Pan",
"Quan",
"",
"NPU"
]
] | The theory of belief functions is widely used for data from multiple sources. Different evidence combination rules have been proposed in this framework according to the properties of the sources to combine. However, most of these combination rules are not efficient when there are a large number of sources. This is due to either the complexity or the existence of an absorbing element such as the total conflict mass function for the conjunctive-based rules when applied on unreliable evidence. In this paper, based on the assumption that the majority of sources are reliable, a combination rule for a large number of sources is proposed using a simple idea: the more common ideas the sources share, the more reliable these sources are supposed to be. This rule is adaptable for aggregating a large number of sources which may not all be reliable. It will keep the spirit of the conjunctive rule to reinforce the belief on the focal elements with which the sources are in agreement. The mass on the empty set will be kept as an indicator of the conflict. The proposed rule, called LNS-CR (Conjunctive combination Rule for a Large Number of Sources), is evaluated on synthetic mass functions. The experimental results verify that the rule can be effectively used to combine a large number of mass functions and to elicit the major opinion. |
1902.00219 | Kai Jin | Siu-Wing Cheng, Man-Kwun Chiu, Kai Jin | A note on self-improving sorting with hidden partitions | 4pages | null | null | null | cs.CG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study self-improving sorting with hidden partitions. Our result is an
optimal algorithm which runs in expected time O(H(\pi(I)) + n), where I is the
given input which contains n elements to be sorted, \pi(I) is the output, which
consists of the ranks of all elements in I, and H(\pi(I)) denotes the entropy of the
output.
| [
{
"created": "Fri, 1 Feb 2019 08:26:43 GMT",
"version": "v1"
}
] | 2019-02-04 | [
[
"Cheng",
"Siu-Wing",
""
],
[
"Chiu",
"Man-Kwun",
""
],
[
"Jin",
"Kai",
""
]
] | We study self-improving sorting with hidden partitions. Our result is an optimal algorithm which runs in expected time O(H(\pi(I)) + n), where I is the given input which contains n elements to be sorted, \pi(I) is the output, which consists of the ranks of all elements in I, and H(\pi(I)) denotes the entropy of the output. |
2208.04227 | Davide Dalle Pezze | Davide Dalle Pezze, Denis Deronjic, Chiara Masiero, Diego Tosato,
Alessandro Beghi, Gian Antonio Susto | A Multi-label Continual Learning Framework to Scale Deep Learning
Approaches for Packaging Equipment Monitoring | null | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Continual Learning aims to learn from a stream of tasks, being able to
remember at the same time both new and old tasks. While many approaches were
proposed for single-class classification, multi-label classification in the
continual scenario remains a challenging problem. For the first time, we study
multi-label classification in the Domain Incremental Learning scenario.
Moreover, we propose an efficient approach that has a logarithmic complexity
with regard to the number of tasks, and can be applied also in the Class
Incremental Learning scenario. We validate our approach on a real-world
multi-label Alarm Forecasting problem from the packaging industry. For the sake
of reproducibility, the dataset and the code used for the experiments are
publicly available.
| [
{
"created": "Mon, 8 Aug 2022 15:58:39 GMT",
"version": "v1"
}
] | 2022-08-09 | [
[
"Pezze",
"Davide Dalle",
""
],
[
"Deronjic",
"Denis",
""
],
[
"Masiero",
"Chiara",
""
],
[
"Tosato",
"Diego",
""
],
[
"Beghi",
"Alessandro",
""
],
[
"Susto",
"Gian Antonio",
""
]
] | Continual Learning aims to learn from a stream of tasks, being able to remember at the same time both new and old tasks. While many approaches were proposed for single-class classification, multi-label classification in the continual scenario remains a challenging problem. For the first time, we study multi-label classification in the Domain Incremental Learning scenario. Moreover, we propose an efficient approach that has a logarithmic complexity with regard to the number of tasks, and can be applied also in the Class Incremental Learning scenario. We validate our approach on a real-world multi-label Alarm Forecasting problem from the packaging industry. For the sake of reproducibility, the dataset and the code used for the experiments are publicly available. |
2104.09426 | Takaaki Hori | Takaaki Hori, Niko Moritz, Chiori Hori, Jonathan Le Roux | Advanced Long-context End-to-end Speech Recognition Using
Context-expanded Transformers | Submitted to INTERSPEECH 2021 | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper addresses end-to-end automatic speech recognition (ASR) for long
audio recordings such as lecture and conversational speeches. Most end-to-end
ASR models are designed to recognize independent utterances, but contextual
information (e.g., speaker or topic) over multiple utterances is known to be
useful for ASR. In our prior work, we proposed a context-expanded Transformer
that accepts multiple consecutive utterances at the same time and predicts an
output sequence for the last utterance, achieving 5-15% relative error
reduction from utterance-based baselines in lecture and conversational ASR
benchmarks. Although the results have shown remarkable performance gain, there
is still potential to further improve the model architecture and the decoding
process. In this paper, we extend our prior work by (1) introducing the
Conformer architecture to further improve the accuracy, (2) accelerating the
decoding process with a novel activation recycling technique, and (3) enabling
streaming decoding with triggered attention. We demonstrate that the extended
Transformer provides state-of-the-art end-to-end ASR performance, obtaining a
17.3% character error rate for the HKUST dataset and 12.0%/6.3% word error
rates for the Switchboard-300 Eval2000 CallHome/Switchboard test sets. The new
decoding method reduces decoding time by more than 50% and further enables
streaming ASR with limited accuracy degradation.
| [
{
"created": "Mon, 19 Apr 2021 16:18:00 GMT",
"version": "v1"
}
] | 2021-04-20 | [
[
"Hori",
"Takaaki",
""
],
[
"Moritz",
"Niko",
""
],
[
"Hori",
"Chiori",
""
],
[
"Roux",
"Jonathan Le",
""
]
] | This paper addresses end-to-end automatic speech recognition (ASR) for long audio recordings such as lecture and conversational speeches. Most end-to-end ASR models are designed to recognize independent utterances, but contextual information (e.g., speaker or topic) over multiple utterances is known to be useful for ASR. In our prior work, we proposed a context-expanded Transformer that accepts multiple consecutive utterances at the same time and predicts an output sequence for the last utterance, achieving 5-15% relative error reduction from utterance-based baselines in lecture and conversational ASR benchmarks. Although the results have shown remarkable performance gain, there is still potential to further improve the model architecture and the decoding process. In this paper, we extend our prior work by (1) introducing the Conformer architecture to further improve the accuracy, (2) accelerating the decoding process with a novel activation recycling technique, and (3) enabling streaming decoding with triggered attention. We demonstrate that the extended Transformer provides state-of-the-art end-to-end ASR performance, obtaining a 17.3% character error rate for the HKUST dataset and 12.0%/6.3% word error rates for the Switchboard-300 Eval2000 CallHome/Switchboard test sets. The new decoding method reduces decoding time by more than 50% and further enables streaming ASR with limited accuracy degradation. |
2208.00331 | Muhammad Abdullah Hanif | Muhammad Abdullah Hanif, Giuseppe Maria Sarda, Alberto Marchisio,
Guido Masera, Maurizio Martina, Muhammad Shafique | CoNLoCNN: Exploiting Correlation and Non-Uniform Quantization for
Energy-Efficient Low-precision Deep Convolutional Neural Networks | 8 pages, 15 figures, 2 tables | null | null | null | cs.AR cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In today's era of smart cyber-physical systems, Deep Neural Networks (DNNs)
have become ubiquitous due to their state-of-the-art performance in complex
real-world applications. The high computational complexity of these networks,
which translates to increased energy consumption, is the foremost obstacle
towards deploying large DNNs in resource-constrained systems. Fixed-Point (FP)
implementations achieved through post-training quantization are commonly used
to curtail the energy consumption of these networks. However, the uniform
quantization intervals in FP restrict the bit-width of data structures to large
values due to the need to represent most of the numbers with sufficient
resolution and avoid high quantization errors. In this paper, we leverage the
key insight that (in most of the scenarios) DNN weights and activations are
mostly concentrated near zero and only a few of them have large magnitudes. We
propose CoNLoCNN, a framework to enable energy-efficient low-precision deep
convolutional neural network inference by exploiting: (1) non-uniform
quantization of weights enabling simplification of complex multiplication
operations; and (2) correlation between activation values enabling partial
compensation of quantization errors at low cost without any run-time overheads.
To significantly benefit from non-uniform quantization, we also propose a novel
data representation format, Encoded Low-Precision Binary Signed Digit, to
compress the bit-width of weights while ensuring direct use of the encoded
weight for processing using a novel multiply-and-accumulate (MAC) unit design.
| [
{
"created": "Sun, 31 Jul 2022 01:34:56 GMT",
"version": "v1"
}
] | 2022-08-02 | [
[
"Hanif",
"Muhammad Abdullah",
""
],
[
"Sarda",
"Giuseppe Maria",
""
],
[
"Marchisio",
"Alberto",
""
],
[
"Masera",
"Guido",
""
],
[
"Martina",
"Maurizio",
""
],
[
"Shafique",
"Muhammad",
""
]
] | In today's era of smart cyber-physical systems, Deep Neural Networks (DNNs) have become ubiquitous due to their state-of-the-art performance in complex real-world applications. The high computational complexity of these networks, which translates to increased energy consumption, is the foremost obstacle towards deploying large DNNs in resource-constrained systems. Fixed-Point (FP) implementations achieved through post-training quantization are commonly used to curtail the energy consumption of these networks. However, the uniform quantization intervals in FP restrict the bit-width of data structures to large values due to the need to represent most of the numbers with sufficient resolution and avoid high quantization errors. In this paper, we leverage the key insight that (in most of the scenarios) DNN weights and activations are mostly concentrated near zero and only a few of them have large magnitudes. We propose CoNLoCNN, a framework to enable energy-efficient low-precision deep convolutional neural network inference by exploiting: (1) non-uniform quantization of weights enabling simplification of complex multiplication operations; and (2) correlation between activation values enabling partial compensation of quantization errors at low cost without any run-time overheads. To significantly benefit from non-uniform quantization, we also propose a novel data representation format, Encoded Low-Precision Binary Signed Digit, to compress the bit-width of weights while ensuring direct use of the encoded weight for processing using a novel multiply-and-accumulate (MAC) unit design. |
1210.0408 | Arpit Sharma | Arpit Sharma | A Two Step Perspective for Kripke Structure Reduction | Accepted for Student Research Forum, 39th International Conference on
Current Trends in Theory and Practice of Computer Science (SOFSEM 2013) | null | null | null | cs.FL cs.LO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents a novel theoretical framework for the state space
reduction of Kripke structures. We define two equivalence relations, Kripke
minimization equivalence (KME) and weak Kripke minimization equivalence (WKME).
We define the quotient system under these relations and show that these
relations are strictly coarser than strong (bi)simulation and
divergence-sensitive stutter (bi)simulation, respectively. We prove that the
quotient system obtained under KME and WKME preserves linear-time and
stutter-insensitive linear-time properties. Finally, we show that KME is
compositional w.r.t. synchronous parallel composition.
| [
{
"created": "Mon, 1 Oct 2012 14:06:21 GMT",
"version": "v1"
}
] | 2012-10-02 | [
[
"Sharma",
"Arpit",
""
]
] | This paper presents a novel theoretical framework for the state space reduction of Kripke structures. We define two equivalence relations, Kripke minimization equivalence (KME) and weak Kripke minimization equivalence (WKME). We define the quotient system under these relations and show that these relations are strictly coarser than strong (bi)simulation and divergence-sensitive stutter (bi)simulation, respectively. We prove that the quotient system obtained under KME and WKME preserves linear-time and stutter-insensitive linear-time properties. Finally, we show that KME is compositional w.r.t. synchronous parallel composition. |
1301.2884 | Anh Cat Le Ngo | Anh Cat Le Ngo, Kenneth Li-Minn Ang, Jasmine Kah-Phooi Seng, Guoping
Qiu | Wavelet-based Scale Saliency | Partly published in ACIIDS 2013 - Kuala Lumpur Malaysia | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Both pixel-based scale saliency (PSS) and basis projection methods focus on
multiscale analysis of data content and structure. Their theoretical relations
and practical combination have previously been discussed. However, no models
have since been proposed for calculating scale saliency on basis-projected
descriptors. This paper extends those ideas into mathematical models
and implements them in the wavelet-based scale saliency (WSS). While PSS uses
pixel-value descriptors, WSS treats wavelet sub-bands as basis descriptors. The
paper discusses different wavelet descriptors: discrete wavelet transform
(DWT), wavelet packet transform (DWPT), quaternion wavelet transform (QWT) and
best basis quaternion wavelet packet transform (QWPTBB). WSS saliency maps of
different descriptors are generated and compared against other saliency methods
by both quantitative and qualitative methods. Quantitative results, ROC
curves, AUC values and NSS values are collected from simulations on the Bruce
and Kootstra image databases with human eye-tracking data as ground truth.
Furthermore, qualitative visual results of saliency maps are analyzed and
compared against each other as well as against the eye-tracking data included
in the databases.
| [
{
"created": "Mon, 14 Jan 2013 08:36:00 GMT",
"version": "v1"
}
] | 2013-01-15 | [
[
"Ngo",
"Anh Cat Le",
""
],
[
"Ang",
"Kenneth Li-Minn",
""
],
[
"Seng",
"Jasmine Kah-Phooi",
""
],
[
"Qiu",
"Guoping",
""
]
] | Both pixel-based scale saliency (PSS) and basis projection methods focus on multiscale analysis of data content and structure. Their theoretical relations and practical combination have previously been discussed. However, no models have since been proposed for calculating scale saliency on basis-projected descriptors. This paper extends those ideas into mathematical models and implements them in the wavelet-based scale saliency (WSS). While PSS uses pixel-value descriptors, WSS treats wavelet sub-bands as basis descriptors. The paper discusses different wavelet descriptors: discrete wavelet transform (DWT), wavelet packet transform (DWPT), quaternion wavelet transform (QWT) and best basis quaternion wavelet packet transform (QWPTBB). WSS saliency maps of different descriptors are generated and compared against other saliency methods by both quantitative and qualitative methods. Quantitative results, ROC curves, AUC values and NSS values are collected from simulations on the Bruce and Kootstra image databases with human eye-tracking data as ground truth. Furthermore, qualitative visual results of saliency maps are analyzed and compared against each other as well as against the eye-tracking data included in the databases. |
1201.3011 | Stephen G. Kobourov | Stephen G. Kobourov | Spring Embedders and Force Directed Graph Drawing Algorithms | 23 pages, 8 figures | null | null | null | cs.CG cs.DM cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Force-directed algorithms are among the most flexible methods for calculating
layouts of simple undirected graphs. Also known as spring embedders, such
algorithms calculate the layout of a graph using only information contained
within the structure of the graph itself, rather than relying on
domain-specific knowledge. Graphs drawn with these algorithms tend to be
aesthetically pleasing, exhibit symmetries, and tend to produce crossing-free
layouts for planar graphs. In this survey we consider several classical
algorithms, starting from Tutte's 1963 barycentric method, and including recent
scalable multiscale methods for large and dynamic graphs.
| [
{
"created": "Sat, 14 Jan 2012 12:49:31 GMT",
"version": "v1"
}
] | 2012-01-17 | [
[
"Kobourov",
"Stephen G.",
""
]
] | Force-directed algorithms are among the most flexible methods for calculating layouts of simple undirected graphs. Also known as spring embedders, such algorithms calculate the layout of a graph using only information contained within the structure of the graph itself, rather than relying on domain-specific knowledge. Graphs drawn with these algorithms tend to be aesthetically pleasing, exhibit symmetries, and tend to produce crossing-free layouts for planar graphs. In this survey we consider several classical algorithms, starting from Tutte's 1963 barycentric method, and including recent scalable multiscale methods for large and dynamic graphs. |
2203.04450 | Yifei Ming | Yifei Ming, Yiyou Sun, Ousmane Dia, Yixuan Li | How to Exploit Hyperspherical Embeddings for Out-of-Distribution
Detection? | Published at ICLR 2023 | The Eleventh International Conference on Learning Representations,
2023 | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Out-of-distribution (OOD) detection is a critical task for reliable machine
learning. Recent advances in representation learning give rise to
distance-based OOD detection, where testing samples are detected as OOD if they
are relatively far away from the centroids or prototypes of in-distribution
(ID) classes. However, prior methods directly take off-the-shelf contrastive
losses that suffice for classifying ID samples, but are not optimally designed
when test inputs contain OOD samples. In this work, we propose CIDER, a novel
representation learning framework that exploits hyperspherical embeddings for
OOD detection. CIDER jointly optimizes two losses to promote strong ID-OOD
separability: a dispersion loss that promotes large angular distances among
different class prototypes, and a compactness loss that encourages samples to
be close to their class prototypes. We analyze and establish the unexplored
relationship between OOD detection performance and the embedding properties in
the hyperspherical space, and demonstrate the importance of dispersion and
compactness. CIDER establishes superior performance, outperforming the latest
rival by 19.36% in FPR95. Code is available at
https://github.com/deeplearning-wisc/cider.
| [
{
"created": "Tue, 8 Mar 2022 23:44:01 GMT",
"version": "v1"
},
{
"created": "Tue, 28 Feb 2023 20:21:57 GMT",
"version": "v2"
},
{
"created": "Sat, 15 Apr 2023 07:25:57 GMT",
"version": "v3"
}
] | 2023-04-18 | [
[
"Ming",
"Yifei",
""
],
[
"Sun",
"Yiyou",
""
],
[
"Dia",
"Ousmane",
""
],
[
"Li",
"Yixuan",
""
]
] | Out-of-distribution (OOD) detection is a critical task for reliable machine learning. Recent advances in representation learning give rise to distance-based OOD detection, where testing samples are detected as OOD if they are relatively far away from the centroids or prototypes of in-distribution (ID) classes. However, prior methods directly take off-the-shelf contrastive losses that suffice for classifying ID samples, but are not optimally designed when test inputs contain OOD samples. In this work, we propose CIDER, a novel representation learning framework that exploits hyperspherical embeddings for OOD detection. CIDER jointly optimizes two losses to promote strong ID-OOD separability: a dispersion loss that promotes large angular distances among different class prototypes, and a compactness loss that encourages samples to be close to their class prototypes. We analyze and establish the unexplored relationship between OOD detection performance and the embedding properties in the hyperspherical space, and demonstrate the importance of dispersion and compactness. CIDER establishes superior performance, outperforming the latest rival by 19.36% in FPR95. Code is available at https://github.com/deeplearning-wisc/cider. |
2212.02911 | Mika H\"am\"al\"ainen | Mika H\"am\"al\"ainen, Khalid Alnajjar, Thierry Poibeau | Modern French Poetry Generation with RoBERTa and GPT-2 | ICCC 2022 | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | We present a novel neural model for modern poetry generation in French. The
model consists of two pretrained neural models that are fine-tuned for the poem
generation task. The encoder of the model is a RoBERTa-based one while the
decoder is based on GPT-2. This way the model can benefit from the superior
natural language understanding performance of RoBERTa and the good natural
language generation performance of GPT-2. Our evaluation shows that the model
can create French poetry successfully. On a 5 point scale, the lowest score of
3.57 was given by human judges to typicality and emotionality of the output
poetry while the best score of 3.79 was given to understandability.
| [
{
"created": "Tue, 6 Dec 2022 12:10:14 GMT",
"version": "v1"
}
] | 2022-12-07 | [
[
"Hämäläinen",
"Mika",
""
],
[
"Alnajjar",
"Khalid",
""
],
[
"Poibeau",
"Thierry",
""
]
] | We present a novel neural model for modern poetry generation in French. The model consists of two pretrained neural models that are fine-tuned for the poem generation task. The encoder of the model is a RoBERTa-based one while the decoder is based on GPT-2. This way the model can benefit from the superior natural language understanding performance of RoBERTa and the good natural language generation performance of GPT-2. Our evaluation shows that the model can create French poetry successfully. On a 5 point scale, the lowest score of 3.57 was given by human judges to typicality and emotionality of the output poetry while the best score of 3.79 was given to understandability. |
2107.04091 | Grzegorz Dudek | Grzegorz Dudek and Pawe{\l} Pe{\l}ka | Ensembles of Randomized NNs for Pattern-based Time Series Forecasting | arXiv admin note: text overlap with arXiv:2107.01705 | null | null | null | cs.LG stat.ML | http://creativecommons.org/licenses/by/4.0/ | In this work, we propose an ensemble forecasting approach based on randomized
neural networks. Improved randomized learning streamlines the fitting abilities
of individual learners by generating network parameters in accordance with the
data and target function features. A pattern-based representation of time
series makes the proposed approach suitable for forecasting time series with
multiple seasonality. We propose six strategies for controlling the diversity
of ensemble members. Case studies conducted on four real-world forecasting
problems verified the effectiveness and superior performance of the proposed
ensemble forecasting approach. It outperformed statistical models as well as
state-of-the-art machine learning models in terms of forecasting accuracy. The
proposed approach has several advantages: fast and easy training, simple
architecture, ease of implementation, high accuracy and the ability to deal
with nonstationarity and multiple seasonality in time series.
| [
{
"created": "Thu, 8 Jul 2021 20:13:50 GMT",
"version": "v1"
}
] | 2021-07-12 | [
[
"Dudek",
"Grzegorz",
""
],
[
"Pełka",
"Paweł",
""
]
] | In this work, we propose an ensemble forecasting approach based on randomized neural networks. Improved randomized learning streamlines the fitting abilities of individual learners by generating network parameters in accordance with the data and target function features. A pattern-based representation of time series makes the proposed approach suitable for forecasting time series with multiple seasonality. We propose six strategies for controlling the diversity of ensemble members. Case studies conducted on four real-world forecasting problems verified the effectiveness and superior performance of the proposed ensemble forecasting approach. It outperformed statistical models as well as state-of-the-art machine learning models in terms of forecasting accuracy. The proposed approach has several advantages: fast and easy training, simple architecture, ease of implementation, high accuracy and the ability to deal with nonstationarity and multiple seasonality in time series. |
2406.01323 | Aurora Zhang | Aurora Zhang, Annette Hosoi | Structural Interventions and the Dynamics of Inequality | null | null | null | null | cs.CY | http://creativecommons.org/licenses/by/4.0/ | Recent conversations in the algorithmic fairness literature have raised
several concerns with standard conceptions of fairness. First, constraining
predictive algorithms to satisfy fairness benchmarks may lead to non-optimal
outcomes for disadvantaged groups. Second, technical interventions are often
ineffective by themselves, especially when divorced from an understanding of
structural processes that generate social inequality. Inspired by both these
critiques, we construct a common decision-making model, using mortgage loans as
a running example. We show that under some conditions, any choice of decision
threshold will inevitably perpetuate existing disparities in financial
stability unless one deviates from the Pareto optimal policy. Then, we model
the effects of three different types of interventions. We show how different
interventions are recommended depending upon the difficulty of enacting
structural change upon external parameters and depending upon the policymaker's
preferences for equity or efficiency. Counterintuitively, we demonstrate that
preferences for efficiency over equity may lead to recommendations for
interventions that target the under-resourced group. Finally, we simulate the
effects of interventions on a dataset that combines HMDA and Fannie Mae loan
data. This research highlights the ways that structural inequality can be
perpetuated by seemingly unbiased decision mechanisms, and it shows that in
many situations, technical solutions must be paired with external,
context-aware interventions to enact social change.
| [
{
"created": "Mon, 3 Jun 2024 13:44:38 GMT",
"version": "v1"
}
] | 2024-06-04 | [
[
"Zhang",
"Aurora",
""
],
[
"Hosoi",
"Annette",
""
]
] | Recent conversations in the algorithmic fairness literature have raised several concerns with standard conceptions of fairness. First, constraining predictive algorithms to satisfy fairness benchmarks may lead to non-optimal outcomes for disadvantaged groups. Second, technical interventions are often ineffective by themselves, especially when divorced from an understanding of structural processes that generate social inequality. Inspired by both these critiques, we construct a common decision-making model, using mortgage loans as a running example. We show that under some conditions, any choice of decision threshold will inevitably perpetuate existing disparities in financial stability unless one deviates from the Pareto optimal policy. Then, we model the effects of three different types of interventions. We show how different interventions are recommended depending upon the difficulty of enacting structural change upon external parameters and depending upon the policymaker's preferences for equity or efficiency. Counterintuitively, we demonstrate that preferences for efficiency over equity may lead to recommendations for interventions that target the under-resourced group. Finally, we simulate the effects of interventions on a dataset that combines HMDA and Fannie Mae loan data. This research highlights the ways that structural inequality can be perpetuated by seemingly unbiased decision mechanisms, and it shows that in many situations, technical solutions must be paired with external, context-aware interventions to enact social change. |
1804.06426 | Philipp Mayr | Zeljko Carevic, Sascha Sch\"uller, Philipp Mayr, Norbert Fuhr | Contextualised Browsing in a Digital Library's Living Lab | 10 pages, 2 figures, paper accepted at JCDL 2018 | null | 10.1145/3197026.3197054 | null | cs.IR cs.DL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Contextualisation has proven to be effective in tailoring search
results towards the users' information need. While this is true for a basic
query search, the usage of contextual session information during exploratory
search especially on the level of browsing has so far been underexposed in
research. In this paper, we present two approaches that contextualise browsing
on the level of structured metadata in a Digital Library (DL): (1) one variant
is based on document similarity, and (2) one variant utilises implicit session
information, such as queries and different document metadata encountered during
a user's session. We evaluate our approaches in a living lab environment
using a DL in the social sciences and compare our contextualisation approaches
against a non-contextualised approach. For a period of more than three months
we analysed 47,444 unique retrieval sessions that contain search activities on
the level of browsing. Our results show that a contextualisation of browsing
significantly outperforms our baseline in terms of the position of the first
clicked item in the result set. The mean rank of the first clicked document
(measured as mean first relevant - MFR) was 4.52 using a non-contextualised
ranking compared to 3.04 when re-ranking the result lists based on similarity
to the previously viewed document. Furthermore, we observed that both
contextual approaches show a noticeably higher click-through rate. A
contextualisation based on document similarity leads to almost twice as many
document views compared to the non-contextualised ranking.
| [
{
"created": "Tue, 17 Apr 2018 18:30:29 GMT",
"version": "v1"
}
] | 2018-12-10 | [
[
"Carevic",
"Zeljko",
""
],
[
"Schüller",
"Sascha",
""
],
[
"Mayr",
"Philipp",
""
],
[
"Fuhr",
"Norbert",
""
]
] | Contextualisation has proven to be effective in tailoring search results towards the users' information need. While this is true for a basic query search, the usage of contextual session information during exploratory search especially on the level of browsing has so far been underexposed in research. In this paper, we present two approaches that contextualise browsing on the level of structured metadata in a Digital Library (DL): (1) one variant is based on document similarity, and (2) one variant utilises implicit session information, such as queries and different document metadata encountered during a user's session. We evaluate our approaches in a living lab environment using a DL in the social sciences and compare our contextualisation approaches against a non-contextualised approach. For a period of more than three months we analysed 47,444 unique retrieval sessions that contain search activities on the level of browsing. Our results show that a contextualisation of browsing significantly outperforms our baseline in terms of the position of the first clicked item in the result set. The mean rank of the first clicked document (measured as mean first relevant - MFR) was 4.52 using a non-contextualised ranking compared to 3.04 when re-ranking the result lists based on similarity to the previously viewed document. Furthermore, we observed that both contextual approaches show a noticeably higher click-through rate. A contextualisation based on document similarity leads to almost twice as many document views compared to the non-contextualised ranking. |
2307.03723 | Alex Milne | Alex Milne, Xianghua Xie | Steel Surface Roughness Parameter Calculations Using Lasers and Machine
Learning Models | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Control of surface texture in strip steel is essential to meet customer
requirements during galvanizing and temper rolling processes. Traditional
methods rely on post-production stylus measurements, while on-line techniques
offer non-contact and real-time measurements of the entire strip. However,
ensuring accurate measurement is imperative for their effective utilization in
the manufacturing pipeline. Moreover, accurate on-line measurements enable
real-time adjustments of manufacturing processing parameters during production,
ensuring consistent quality and the possibility of closed-loop control of the
temper mill. In this study, we leverage state-of-the-art machine learning
models to enhance the transformation of on-line measurements into a
significantly more accurate Ra surface roughness metric. By comparing a selection of
data-driven approaches, including both deep learning and non-deep learning
methods, to the closed-form transformation, we evaluate their potential for
improving surface texture control in temper strip steel manufacturing.
| [
{
"created": "Thu, 6 Jul 2023 16:44:03 GMT",
"version": "v1"
},
{
"created": "Sun, 1 Oct 2023 11:21:37 GMT",
"version": "v2"
}
] | 2023-10-03 | [
[
"Milne",
"Alex",
""
],
[
"Xie",
"Xianghua",
""
]
] | Control of surface texture in strip steel is essential to meet customer requirements during galvanizing and temper rolling processes. Traditional methods rely on post-production stylus measurements, while on-line techniques offer non-contact and real-time measurements of the entire strip. However, ensuring accurate measurement is imperative for their effective utilization in the manufacturing pipeline. Moreover, accurate on-line measurements enable real-time adjustments of manufacturing processing parameters during production, ensuring consistent quality and the possibility of closed-loop control of the temper mill. In this study, we leverage state-of-the-art machine learning models to enhance the transformation of on-line measurements into a significantly more accurate Ra surface roughness metric. By comparing a selection of data-driven approaches, including both deep learning and non-deep learning methods, to the closed-form transformation, we evaluate their potential for improving surface texture control in temper strip steel manufacturing. |
0809.3942 | Philippe Hoogvorst | Philippe Hoogvorst, Sylvain Guilley, Sumanta Chaudhuri, Jean-Luc
Danger, Taha Beyrouthy and Laurent Fesquet | A Reconfigurable Programmable Logic Block for a Multi-Style Asynchronous
FPGA resistant to Side-Channel Attacks | 29 pages | null | null | null | cs.CR cs.OH | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Side-channel attacks are efficient attacks against cryptographic devices.
They use only quantities observable from outside, such as the duration and the
power consumption. Attacks against synchronous devices using electric
observations are facilitated by the fact that all transitions occur
simultaneously with some global clock signal. Asynchronous control removes this
synchronization and therefore makes it more difficult for the attacker to
isolate \emph{interesting intervals}. In addition, the coding of data in an
asynchronous circuit is inherently more difficult to attack. This article
describes the Programmable Logic Block of an asynchronous FPGA resistant
against \emph{side-channel attacks}. Additionally it can implement different
styles of asynchronous control and of data representation.
| [
{
"created": "Tue, 23 Sep 2008 15:27:06 GMT",
"version": "v1"
}
] | 2008-09-24 | [
[
"Hoogvorst",
"Philippe",
""
],
[
"Guilley",
"Sylvain",
""
],
[
"Chaudhuri",
"Sumanta",
""
],
[
"Danger",
"Jean-Luc",
""
],
[
"Beyrouthy",
"Taha",
""
],
[
"Fesquet",
"Laurent",
""
]
] | Side-channel attacks are efficient attacks against cryptographic devices. They use only quantities observable from outside, such as the duration and the power consumption. Attacks against synchronous devices using electric observations are facilitated by the fact that all transitions occur simultaneously with some global clock signal. Asynchronous control removes this synchronization and therefore makes it more difficult for the attacker to isolate \emph{interesting intervals}. In addition, the coding of data in an asynchronous circuit is inherently more difficult to attack. This article describes the Programmable Logic Block of an asynchronous FPGA resistant against \emph{side-channel attacks}. Additionally it can implement different styles of asynchronous control and of data representation. |