id
stringlengths
9
10
submitter
stringlengths
1
64
authors
stringlengths
4
20.7k
title
stringlengths
4
246
comments
stringlengths
1
523
journal-ref
stringlengths
4
404
doi
stringlengths
11
153
report-no
stringlengths
2
254
categories
stringlengths
5
98
license
stringclasses
9 values
orig_abstract
stringlengths
14
3.35k
versions
listlengths
1
60
update_date
stringlengths
10
10
authors_parsed
listlengths
1
1.35k
abstract
stringlengths
11
3.34k
1907.04194
Yuhang Ding
Yuhang Ding, Hehe Fan, Mingliang Xu and Yi Yang
Adaptive Exploration for Unsupervised Person Re-Identification
ACM Transactions on Multimedia Computing, Communications and Application (TOMCCAP)
null
10.1145/3369393
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Due to domain bias, directly deploying a deep person re-identification (re-ID) model trained on one dataset often achieves considerably poor accuracy on another dataset. In this paper, we propose an Adaptive Exploration (AE) method to address the domain-shift problem for re-ID in an unsupervised manner. Specifically, in the target domain, the re-ID model is inducted to 1) maximize distances between all person images and 2) minimize distances between similar person images. In the first case, by treating each person image as an individual class, a non-parametric classifier with a feature memory is exploited to encourage person images to move far away from each other. In the second case, according to a similarity threshold, our method adaptively selects neighborhoods for each person image in the feature space. By treating these similar person images as the same class, the non-parametric classifier forces them to stay closer. However, a problem of the adaptive selection is that, when an image has too many neighborhoods, it is more likely to attract other images as its neighborhoods. As a result, a minority of images may select a large number of neighborhoods while a majority of images have only a few neighborhoods. To address this issue, we additionally integrate a balance strategy into the adaptive selection. We evaluate our methods with two protocols. The first one is called "target-only re-ID", in which only the unlabeled target data is used for training. The second one is called "domain adaptive re-ID", in which both the source data and the target data are used during training. Experimental results on large-scale re-ID datasets demonstrate the effectiveness of our method. Our code has been released at https://github.com/dyh127/Adaptive-Exploration-for-Unsupervised-Person-Re-Identification.
[ { "created": "Tue, 9 Jul 2019 14:36:08 GMT", "version": "v1" }, { "created": "Fri, 10 Apr 2020 05:25:45 GMT", "version": "v2" } ]
2020-04-13
[ [ "Ding", "Yuhang", "" ], [ "Fan", "Hehe", "" ], [ "Xu", "Mingliang", "" ], [ "Yang", "Yi", "" ] ]
Due to domain bias, directly deploying a deep person re-identification (re-ID) model trained on one dataset often achieves considerably poor accuracy on another dataset. In this paper, we propose an Adaptive Exploration (AE) method to address the domain-shift problem for re-ID in an unsupervised manner. Specifically, in the target domain, the re-ID model is inducted to 1) maximize distances between all person images and 2) minimize distances between similar person images. In the first case, by treating each person image as an individual class, a non-parametric classifier with a feature memory is exploited to encourage person images to move far away from each other. In the second case, according to a similarity threshold, our method adaptively selects neighborhoods for each person image in the feature space. By treating these similar person images as the same class, the non-parametric classifier forces them to stay closer. However, a problem of the adaptive selection is that, when an image has too many neighborhoods, it is more likely to attract other images as its neighborhoods. As a result, a minority of images may select a large number of neighborhoods while a majority of images have only a few neighborhoods. To address this issue, we additionally integrate a balance strategy into the adaptive selection. We evaluate our methods with two protocols. The first one is called "target-only re-ID", in which only the unlabeled target data is used for training. The second one is called "domain adaptive re-ID", in which both the source data and the target data are used during training. Experimental results on large-scale re-ID datasets demonstrate the effectiveness of our method. Our code has been released at https://github.com/dyh127/Adaptive-Exploration-for-Unsupervised-Person-Re-Identification.
1805.02457
Parter Merav
Merav Parter
$(\Delta+1)$ Coloring in the Congested Clique Model
Appeared in ICALP'18 (the update version adds a missing part in the deterministic coloring procedure)
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we present improved algorithms for the $(\Delta+1)$ (vertex) coloring problem in the Congested-Clique model of distributed computing. In this model, the input is a graph on $n$ nodes, initially each node knows only its incident edges, and per round each two nodes can exchange $O(\log n)$ bits of information. Our key result is a randomized $(\Delta+1)$ vertex coloring algorithm that works in $O(\log\log \Delta \cdot \log^* \Delta)$-rounds. This is achieved by combining the recent breakthrough result of [Chang-Li-Pettie, STOC'18] in the \local\ model and a degree reduction technique. We also get the following results with high probability: (1) $(\Delta+1)$-coloring for $\Delta=O((n/\log n)^{1-\epsilon})$ for any $\epsilon \in (0,1)$, within $O(\log(1/\epsilon)\log^* \Delta)$ rounds, and (2) $(\Delta+\Delta^{1/2+o(1)})$-coloring within $O(\log^* \Delta)$ rounds. Turning to deterministic algorithms, we show a $(\Delta+1)$-coloring algorithm that works in $O(\log \Delta)$ rounds.
[ { "created": "Mon, 7 May 2018 11:57:15 GMT", "version": "v1" }, { "created": "Sun, 12 Jan 2020 12:24:21 GMT", "version": "v2" } ]
2020-01-14
[ [ "Parter", "Merav", "" ] ]
In this paper, we present improved algorithms for the $(\Delta+1)$ (vertex) coloring problem in the Congested-Clique model of distributed computing. In this model, the input is a graph on $n$ nodes, initially each node knows only its incident edges, and per round each two nodes can exchange $O(\log n)$ bits of information. Our key result is a randomized $(\Delta+1)$ vertex coloring algorithm that works in $O(\log\log \Delta \cdot \log^* \Delta)$-rounds. This is achieved by combining the recent breakthrough result of [Chang-Li-Pettie, STOC'18] in the \local\ model and a degree reduction technique. We also get the following results with high probability: (1) $(\Delta+1)$-coloring for $\Delta=O((n/\log n)^{1-\epsilon})$ for any $\epsilon \in (0,1)$, within $O(\log(1/\epsilon)\log^* \Delta)$ rounds, and (2) $(\Delta+\Delta^{1/2+o(1)})$-coloring within $O(\log^* \Delta)$ rounds. Turning to deterministic algorithms, we show a $(\Delta+1)$-coloring algorithm that works in $O(\log \Delta)$ rounds.
2403.11607
Junming Wang
Junming Wang, Zekai Sun, Xiuxian Guan, Tianxiang Shen, Zongyuan Zhang, Tianyang Duan, Dong Huang, Shixiong Zhao, Heming Cui
AGRNav: Efficient and Energy-Saving Autonomous Navigation for Air-Ground Robots in Occlusion-Prone Environments
Accepted to ICRA 2024
null
null
null
cs.RO
http://creativecommons.org/licenses/by-nc-nd/4.0/
The exceptional mobility and long endurance of air-ground robots are raising interest in their usage to navigate complex environments (e.g., forests and large buildings). However, such environments often contain occluded and unknown regions, and without accurate prediction of unobserved obstacles, the movement of the air-ground robot often suffers a suboptimal trajectory under existing mapping-based and learning-based navigation methods. In this work, we present AGRNav, a novel framework designed to search for safe and energy-saving air-ground hybrid paths. AGRNav contains a lightweight semantic scene completion network (SCONet) with self-attention to enable accurate obstacle predictions by capturing contextual information and occlusion area features. The framework subsequently employs a query-based method for low-latency updates of prediction results to the grid map. Finally, based on the updated map, the hierarchical path planner efficiently searches for energy-saving paths for navigation. We validate AGRNav's performance through benchmarks in both simulated and real-world environments, demonstrating its superiority over classical and state-of-the-art methods. The open-source code is available at https://github.com/jmwang0117/AGRNav.
[ { "created": "Mon, 18 Mar 2024 09:31:59 GMT", "version": "v1" } ]
2024-03-19
[ [ "Wang", "Junming", "" ], [ "Sun", "Zekai", "" ], [ "Guan", "Xiuxian", "" ], [ "Shen", "Tianxiang", "" ], [ "Zhang", "Zongyuan", "" ], [ "Duan", "Tianyang", "" ], [ "Huang", "Dong", "" ], [ "Zhao", "Shixiong", "" ], [ "Cui", "Heming", "" ] ]
The exceptional mobility and long endurance of air-ground robots are raising interest in their usage to navigate complex environments (e.g., forests and large buildings). However, such environments often contain occluded and unknown regions, and without accurate prediction of unobserved obstacles, the movement of the air-ground robot often suffers a suboptimal trajectory under existing mapping-based and learning-based navigation methods. In this work, we present AGRNav, a novel framework designed to search for safe and energy-saving air-ground hybrid paths. AGRNav contains a lightweight semantic scene completion network (SCONet) with self-attention to enable accurate obstacle predictions by capturing contextual information and occlusion area features. The framework subsequently employs a query-based method for low-latency updates of prediction results to the grid map. Finally, based on the updated map, the hierarchical path planner efficiently searches for energy-saving paths for navigation. We validate AGRNav's performance through benchmarks in both simulated and real-world environments, demonstrating its superiority over classical and state-of-the-art methods. The open-source code is available at https://github.com/jmwang0117/AGRNav.
1304.5774
Mikhail Berlinkov
Mikhail V. Berlinkov
On the probability of being synchronizable
Major revision after the review
null
null
null
cs.FL cs.DM math.CO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We prove that a random automaton with $n$ states and any fixed non-singleton alphabet is synchronizing with high probability (modulo an unpublished result about unique highest trees of random graphs). Moreover, we also prove that the convergence rate is exactly $1-\Theta(\frac{1}{n})$ as conjectured by [Cameron, 2011] for the most interesting binary alphabet case. Finally, we present a deterministic algorithm which decides whether a given random automaton is synchronizing in linear in $n$ expected time and prove that it is optimal.
[ { "created": "Sun, 21 Apr 2013 18:16:26 GMT", "version": "v1" }, { "created": "Sun, 27 Oct 2013 05:05:25 GMT", "version": "v10" }, { "created": "Tue, 19 Nov 2013 09:03:27 GMT", "version": "v11" }, { "created": "Wed, 27 Nov 2013 05:58:11 GMT", "version": "v12" }, { "created": "Wed, 8 Jan 2014 07:15:45 GMT", "version": "v13" }, { "created": "Mon, 13 Jan 2014 15:56:16 GMT", "version": "v14" }, { "created": "Mon, 24 Mar 2014 16:09:06 GMT", "version": "v15" }, { "created": "Mon, 16 Feb 2015 17:01:30 GMT", "version": "v16" }, { "created": "Sat, 4 Jul 2015 04:49:56 GMT", "version": "v17" }, { "created": "Tue, 29 Sep 2015 07:47:47 GMT", "version": "v18" }, { "created": "Tue, 17 Nov 2015 13:52:46 GMT", "version": "v19" }, { "created": "Wed, 24 Apr 2013 15:38:07 GMT", "version": "v2" }, { "created": "Wed, 1 Jun 2016 06:27:16 GMT", "version": "v20" }, { "created": "Sat, 6 Jul 2019 00:09:09 GMT", "version": "v21" }, { "created": "Sat, 28 Mar 2020 21:17:54 GMT", "version": "v22" }, { "created": "Mon, 8 Jul 2024 23:51:39 GMT", "version": "v23" }, { "created": "Sun, 28 Apr 2013 06:13:48 GMT", "version": "v3" }, { "created": "Mon, 20 May 2013 10:52:52 GMT", "version": "v4" }, { "created": "Tue, 21 May 2013 17:21:51 GMT", "version": "v5" }, { "created": "Sun, 7 Jul 2013 10:51:19 GMT", "version": "v6" }, { "created": "Sat, 10 Aug 2013 04:07:49 GMT", "version": "v7" }, { "created": "Thu, 15 Aug 2013 18:27:30 GMT", "version": "v8" }, { "created": "Mon, 19 Aug 2013 16:58:36 GMT", "version": "v9" } ]
2024-07-10
[ [ "Berlinkov", "Mikhail V.", "" ] ]
We prove that a random automaton with $n$ states and any fixed non-singleton alphabet is synchronizing with high probability (modulo an unpublished result about unique highest trees of random graphs). Moreover, we also prove that the convergence rate is exactly $1-\Theta(\frac{1}{n})$ as conjectured by [Cameron, 2011] for the most interesting binary alphabet case. Finally, we present a deterministic algorithm which decides whether a given random automaton is synchronizing in linear in $n$ expected time and prove that it is optimal.
1210.4838
Hau Chan
Hau Chan, Michael Ceyko, Luis E. Ortiz
Interdependent Defense Games: Modeling Interdependent Security under Deliberate Attacks
Appears in Proceedings of the Twenty-Eighth Conference on Uncertainty in Artificial Intelligence (UAI2012)
null
null
UAI-P-2012-PG-152-162
cs.GT cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose interdependent defense (IDD) games, a computational game-theoretic framework to study aspects of the interdependence of risk and security in multi-agent systems under deliberate external attacks. Our model builds upon interdependent security (IDS) games, a model due to Heal and Kunreuther that considers the source of the risk to be the result of a fixed randomizedstrategy. We adapt IDS games to model the attacker's deliberate behavior. We define the attacker's pure-strategy space and utility function and derive appropriate cost functions for the defenders. We provide a complete characterization of mixed-strategy Nash equilibria (MSNE), and design a simple polynomial-time algorithm for computing all of them, for an important subclass of IDD games. In addition, we propose a randominstance generator of (general) IDD games based on a version of the real-world Internet-derived Autonomous Systems (AS) graph (with around 27K nodes and 100K edges), and present promising empirical results using a simple learning heuristics to compute (approximate) MSNE in such games.
[ { "created": "Tue, 16 Oct 2012 17:31:58 GMT", "version": "v1" } ]
2012-10-19
[ [ "Chan", "Hau", "" ], [ "Ceyko", "Michael", "" ], [ "Ortiz", "Luis E.", "" ] ]
We propose interdependent defense (IDD) games, a computational game-theoretic framework to study aspects of the interdependence of risk and security in multi-agent systems under deliberate external attacks. Our model builds upon interdependent security (IDS) games, a model due to Heal and Kunreuther that considers the source of the risk to be the result of a fixed randomizedstrategy. We adapt IDS games to model the attacker's deliberate behavior. We define the attacker's pure-strategy space and utility function and derive appropriate cost functions for the defenders. We provide a complete characterization of mixed-strategy Nash equilibria (MSNE), and design a simple polynomial-time algorithm for computing all of them, for an important subclass of IDD games. In addition, we propose a randominstance generator of (general) IDD games based on a version of the real-world Internet-derived Autonomous Systems (AS) graph (with around 27K nodes and 100K edges), and present promising empirical results using a simple learning heuristics to compute (approximate) MSNE in such games.
1508.05411
Youssef Gahi Mr
Youssef Gahi, Mouhcine Guennoun, Zouhair Guennoun, Khalil El-khatib
On the use of homomorphic encryption to secure cloud computing, services, and routing protocols
Youssef Gahi, PhD dissertation, 2013
null
null
null
cs.CR cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The trend towards delegating data processing to a remote party raises major concerns related to privacy violations for both end-users and service providers. These concerns have attracted the attention of the research community, and several techniques have been proposed to protect against malicious parties by providing secure communication protocols. Most of the proposed techniques, however, require the involvement of a third party, and this by itself can be viewed as another security concern. These security breaches can be avoided by following a new approach that depends on data sorted, managed, and stored in encrypted form at the remote servers. To realize such an approach, the encryption cryptosystem must support algebraic operations over encrypted data. This cryptosystem can be effective in protecting data and supporting the construction of programs that can process encrypted input and produce encrypted output. In fact, the latter programs do not decrypt the input, and therefore, they can be run by an un-trusted party without revealing their data and internal states. Furthermore, such programs prove to be practical in situations where we need to outsource private computations, especially in the context of cloud computing. Homomorphic cryptosystems are perfectly aligned with these objectives as they are a strong foundation for schemes that allow a blind processing of encrypted data without the need to decrypt them. In this dissertation we rely on homomorphic encryption schemes to secure cloud computing, services and routing protocols. We design several circuits that allow for the blind processing and management of data such that malicious parties are denied access to sensitive information. We select five areas to apply our models to. These models are easily customized for many other areas. We also provide prototypes that we use to study the performance and robustness of our models.
[ { "created": "Fri, 21 Aug 2015 21:08:10 GMT", "version": "v1" } ]
2015-12-15
[ [ "Gahi", "Youssef", "" ], [ "Guennoun", "Mouhcine", "" ], [ "Guennoun", "Zouhair", "" ], [ "El-khatib", "Khalil", "" ] ]
The trend towards delegating data processing to a remote party raises major concerns related to privacy violations for both end-users and service providers. These concerns have attracted the attention of the research community, and several techniques have been proposed to protect against malicious parties by providing secure communication protocols. Most of the proposed techniques, however, require the involvement of a third party, and this by itself can be viewed as another security concern. These security breaches can be avoided by following a new approach that depends on data sorted, managed, and stored in encrypted form at the remote servers. To realize such an approach, the encryption cryptosystem must support algebraic operations over encrypted data. This cryptosystem can be effective in protecting data and supporting the construction of programs that can process encrypted input and produce encrypted output. In fact, the latter programs do not decrypt the input, and therefore, they can be run by an un-trusted party without revealing their data and internal states. Furthermore, such programs prove to be practical in situations where we need to outsource private computations, especially in the context of cloud computing. Homomorphic cryptosystems are perfectly aligned with these objectives as they are a strong foundation for schemes that allow a blind processing of encrypted data without the need to decrypt them. In this dissertation we rely on homomorphic encryption schemes to secure cloud computing, services and routing protocols. We design several circuits that allow for the blind processing and management of data such that malicious parties are denied access to sensitive information. We select five areas to apply our models to. These models are easily customized for many other areas. We also provide prototypes that we use to study the performance and robustness of our models.
2307.05986
Alarith Uhde
Alarith Uhde and Tim zum Hoff and Marc Hassenzahl
Beyond Hiding and Revealing: Exploring Effects of Visibility and Form of Interaction on the Witness Experience
23 pages, 4 figures
null
10.1145/3604247
null
cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Our interactions with technology do not just shape our individual experiences. They also affect people around us. Although previous research has addressed such "witness" experiences, the actual effect of interaction design on the witness experience remains largely unknown. In an online study (n = 407), we explored how witnesses perceive mid-air gesture-based interactions with a hearing aid, using four video vignettes. We studied witnesses' subjective visibility of manipulations and effects (following Reeves and colleagues' taxonomy), perceived form of interaction, subjective experience, and relationships between these measures. Although visibility patterns matched the intended form, they did not lead to the supposed experience (i.e., "suspenseful" gestures did not lead to suspenseful experiences). The paper illustrates gaps in current research about witness experiences, demonstrates the need to overcome basic hiding/revealing profiles, and indicates a path forward by focusing on aesthetic forms and experiences.
[ { "created": "Wed, 12 Jul 2023 08:01:48 GMT", "version": "v1" } ]
2023-07-13
[ [ "Uhde", "Alarith", "" ], [ "Hoff", "Tim zum", "" ], [ "Hassenzahl", "Marc", "" ] ]
Our interactions with technology do not just shape our individual experiences. They also affect people around us. Although previous research has addressed such "witness" experiences, the actual effect of interaction design on the witness experience remains largely unknown. In an online study (n = 407), we explored how witnesses perceive mid-air gesture-based interactions with a hearing aid, using four video vignettes. We studied witnesses' subjective visibility of manipulations and effects (following Reeves and colleagues' taxonomy), perceived form of interaction, subjective experience, and relationships between these measures. Although visibility patterns matched the intended form, they did not lead to the supposed experience (i.e., "suspenseful" gestures did not lead to suspenseful experiences). The paper illustrates gaps in current research about witness experiences, demonstrates the need to overcome basic hiding/revealing profiles, and indicates a path forward by focusing on aesthetic forms and experiences.
2404.15146
Avi Schwarzschild
Avi Schwarzschild and Zhili Feng and Pratyush Maini and Zachary C. Lipton and J. Zico Kolter
Rethinking LLM Memorization through the Lens of Adversarial Compression
https://locuslab.github.io/acr-memorization
null
null
null
cs.LG cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Large language models (LLMs) trained on web-scale datasets raise substantial concerns regarding permissible data usage. One major question is whether these models "memorize" all their training data or they integrate many data sources in some way more akin to how a human would learn and synthesize information. The answer hinges, to a large degree, on how we define memorization. In this work, we propose the Adversarial Compression Ratio (ACR) as a metric for assessing memorization in LLMs. A given string from the training data is considered memorized if it can be elicited by a prompt (much) shorter than the string itself -- in other words, if these strings can be "compressed" with the model by computing adversarial prompts of fewer tokens. The ACR overcomes the limitations of existing notions of memorization by (i) offering an adversarial view of measuring memorization, especially for monitoring unlearning and compliance; and (ii) allowing for the flexibility to measure memorization for arbitrary strings at a reasonably low compute. Our definition serves as a practical tool for determining when model owners may be violating terms around data usage, providing a potential legal tool and a critical lens through which to address such scenarios.
[ { "created": "Tue, 23 Apr 2024 15:49:37 GMT", "version": "v1" }, { "created": "Mon, 1 Jul 2024 14:43:11 GMT", "version": "v2" } ]
2024-07-02
[ [ "Schwarzschild", "Avi", "" ], [ "Feng", "Zhili", "" ], [ "Maini", "Pratyush", "" ], [ "Lipton", "Zachary C.", "" ], [ "Kolter", "J. Zico", "" ] ]
Large language models (LLMs) trained on web-scale datasets raise substantial concerns regarding permissible data usage. One major question is whether these models "memorize" all their training data or they integrate many data sources in some way more akin to how a human would learn and synthesize information. The answer hinges, to a large degree, on how we define memorization. In this work, we propose the Adversarial Compression Ratio (ACR) as a metric for assessing memorization in LLMs. A given string from the training data is considered memorized if it can be elicited by a prompt (much) shorter than the string itself -- in other words, if these strings can be "compressed" with the model by computing adversarial prompts of fewer tokens. The ACR overcomes the limitations of existing notions of memorization by (i) offering an adversarial view of measuring memorization, especially for monitoring unlearning and compliance; and (ii) allowing for the flexibility to measure memorization for arbitrary strings at a reasonably low compute. Our definition serves as a practical tool for determining when model owners may be violating terms around data usage, providing a potential legal tool and a critical lens through which to address such scenarios.
2402.06954
Rui Ye
Rui Ye, Wenhao Wang, Jingyi Chai, Dihan Li, Zexi Li, Yinda Xu, Yaxin Du, Yanfeng Wang, Siheng Chen
OpenFedLLM: Training Large Language Models on Decentralized Private Data via Federated Learning
28 pages, 3 figures, 16 tables
null
null
null
cs.LG cs.CL cs.DC cs.MA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Trained on massive publicly available data, large language models (LLMs) have demonstrated tremendous success across various fields. While more data contributes to better performance, a disconcerting reality is that high-quality public data will be exhausted in a few years. In this paper, we offer a potential next step for contemporary LLMs: collaborative and privacy-preserving LLM training on the underutilized distributed private data via federated learning (FL), where multiple data owners collaboratively train a shared model without transmitting raw data. To achieve this, we build a concise, integrated, and research-friendly framework/codebase, named OpenFedLLM. It covers federated instruction tuning for enhancing instruction-following capability, federated value alignment for aligning with human values, and 7 representative FL algorithms. Besides, OpenFedLLM supports training on diverse domains, where we cover 8 training datasets; and provides comprehensive evaluations, where we cover 30+ evaluation metrics. Through extensive experiments, we observe that all FL algorithms outperform local training on training LLMs, demonstrating a clear performance improvement across a variety of settings. Notably, in a financial benchmark, Llama2-7B fine-tuned by applying any FL algorithm can outperform GPT-4 by a significant margin while the model obtained through individual training cannot, demonstrating strong motivation for clients to participate in FL. The code is available at https://github.com/rui-ye/OpenFedLLM.
[ { "created": "Sat, 10 Feb 2024 13:50:11 GMT", "version": "v1" } ]
2024-02-13
[ [ "Ye", "Rui", "" ], [ "Wang", "Wenhao", "" ], [ "Chai", "Jingyi", "" ], [ "Li", "Dihan", "" ], [ "Li", "Zexi", "" ], [ "Xu", "Yinda", "" ], [ "Du", "Yaxin", "" ], [ "Wang", "Yanfeng", "" ], [ "Chen", "Siheng", "" ] ]
Trained on massive publicly available data, large language models (LLMs) have demonstrated tremendous success across various fields. While more data contributes to better performance, a disconcerting reality is that high-quality public data will be exhausted in a few years. In this paper, we offer a potential next step for contemporary LLMs: collaborative and privacy-preserving LLM training on the underutilized distributed private data via federated learning (FL), where multiple data owners collaboratively train a shared model without transmitting raw data. To achieve this, we build a concise, integrated, and research-friendly framework/codebase, named OpenFedLLM. It covers federated instruction tuning for enhancing instruction-following capability, federated value alignment for aligning with human values, and 7 representative FL algorithms. Besides, OpenFedLLM supports training on diverse domains, where we cover 8 training datasets; and provides comprehensive evaluations, where we cover 30+ evaluation metrics. Through extensive experiments, we observe that all FL algorithms outperform local training on training LLMs, demonstrating a clear performance improvement across a variety of settings. Notably, in a financial benchmark, Llama2-7B fine-tuned by applying any FL algorithm can outperform GPT-4 by a significant margin while the model obtained through individual training cannot, demonstrating strong motivation for clients to participate in FL. The code is available at https://github.com/rui-ye/OpenFedLLM.
2301.10460
Yiming Qian
Fenggen Yu, Yiming Qian, Francisca Gil-Ureta, Brian Jackson, Eric Bennett, Hao Zhang
HAL3D: Hierarchical Active Learning for Fine-Grained 3D Part Labeling
Accepted to ICCV 2023
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
We present the first active learning tool for fine-grained 3D part labeling, a problem which challenges even the most advanced deep learning (DL) methods due to the significant structural variations among the small and intricate parts. For the same reason, the necessary data annotation effort is tremendous, motivating approaches to minimize human involvement. Our labeling tool iteratively verifies or modifies part labels predicted by a deep neural network, with human feedback continually improving the network prediction. To effectively reduce human efforts, we develop two novel features in our tool, hierarchical and symmetry-aware active labeling. Our human-in-the-loop approach, coined HAL3D, achieves 100% accuracy (barring human errors) on any test set with pre-defined hierarchical part labels, with 80% time-saving over manual effort.
[ { "created": "Wed, 25 Jan 2023 08:40:34 GMT", "version": "v1" }, { "created": "Mon, 1 Apr 2024 17:04:01 GMT", "version": "v2" } ]
2024-04-02
[ [ "Yu", "Fenggen", "" ], [ "Qian", "Yiming", "" ], [ "Gil-Ureta", "Francisca", "" ], [ "Jackson", "Brian", "" ], [ "Bennett", "Eric", "" ], [ "Zhang", "Hao", "" ] ]
We present the first active learning tool for fine-grained 3D part labeling, a problem which challenges even the most advanced deep learning (DL) methods due to the significant structural variations among the small and intricate parts. For the same reason, the necessary data annotation effort is tremendous, motivating approaches to minimize human involvement. Our labeling tool iteratively verifies or modifies part labels predicted by a deep neural network, with human feedback continually improving the network prediction. To effectively reduce human efforts, we develop two novel features in our tool, hierarchical and symmetry-aware active labeling. Our human-in-the-loop approach, coined HAL3D, achieves 100% accuracy (barring human errors) on any test set with pre-defined hierarchical part labels, with 80% time-saving over manual effort.
2308.10316
Michael Dinitz
Michael Dinitz and Satyen Kale and Silvio Lattanzi and Sergei Vassilvitskii
Almost Tight Bounds for Differentially Private Densest Subgraph
Revised presentation, added value bound
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the Densest Subgraph (DSG) problem under the additional constraint of differential privacy. DSG is a fundamental theoretical question which plays a central role in graph analytics, and so privacy is a natural requirement. All known private algorithms for Densest Subgraph lose constant multiplicative factors, despite the existence of non-private exact algorithms. We show that, perhaps surprisingly, this loss is not necessary: in both the classic differential privacy model and the LEDP model (local edge differential privacy, introduced recently by Dhulipala et al. [FOCS 2022]), we give $(\epsilon, \delta)$-differentially private algorithms with no multiplicative loss whatsoever. In other words, the loss is \emph{purely additive}. Moreover, our additive losses match or improve the best-known previous additive loss (in any version of differential privacy) when $1/\delta$ is polynomial in $n$, and are almost tight: in the centralized setting, our additive loss is $O(\log n /\epsilon)$ while there is a known lower bound of $\Omega(\sqrt{\log n / \epsilon})$. We also give a number of extensions. First, we show how to extend our techniques to both the node-weighted and the directed versions of the problem. Second, we give a separate algorithm with pure differential privacy (as opposed to approximate DP) but with worse approximation bounds. And third, we give a new algorithm for privately computing the optimal density which implies a separation between the structural problem of privately computing the densest subgraph and the numeric problem of privately computing the density of the densest subgraph.
[ { "created": "Sun, 20 Aug 2023 16:34:18 GMT", "version": "v1" }, { "created": "Mon, 4 Mar 2024 17:39:26 GMT", "version": "v2" }, { "created": "Mon, 8 Apr 2024 00:21:00 GMT", "version": "v3" } ]
2024-04-09
[ [ "Dinitz", "Michael", "" ], [ "Kale", "Satyen", "" ], [ "Lattanzi", "Silvio", "" ], [ "Vassilvitskii", "Sergei", "" ] ]
We study the Densest Subgraph (DSG) problem under the additional constraint of differential privacy. DSG is a fundamental theoretical question which plays a central role in graph analytics, and so privacy is a natural requirement. All known private algorithms for Densest Subgraph lose constant multiplicative factors, despite the existence of non-private exact algorithms. We show that, perhaps surprisingly, this loss is not necessary: in both the classic differential privacy model and the LEDP model (local edge differential privacy, introduced recently by Dhulipala et al. [FOCS 2022]), we give $(\epsilon, \delta)$-differentially private algorithms with no multiplicative loss whatsoever. In other words, the loss is \emph{purely additive}. Moreover, our additive losses match or improve the best-known previous additive loss (in any version of differential privacy) when $1/\delta$ is polynomial in $n$, and are almost tight: in the centralized setting, our additive loss is $O(\log n /\epsilon)$ while there is a known lower bound of $\Omega(\sqrt{\log n / \epsilon})$. We also give a number of extensions. First, we show how to extend our techniques to both the node-weighted and the directed versions of the problem. Second, we give a separate algorithm with pure differential privacy (as opposed to approximate DP) but with worse approximation bounds. And third, we give a new algorithm for privately computing the optimal density which implies a separation between the structural problem of privately computing the densest subgraph and the numeric problem of privately computing the density of the densest subgraph.
1912.02278
Tillmann Miltzow
Jeff Erickson, Ivor van der Hoog, Tillmann Miltzow
Smoothing the gap between NP and ER
31 pages, 11 figures, FOCS 2020, SICOMP 2022
null
null
null
cs.CG cs.CC cs.DM cs.DS cs.NA math.NA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study algorithmic problems that belong to the complexity class of the existential theory of the reals (ER). A problem is ER-complete if it is as hard as the problem ETR and if it can be written as an ETR formula. Traditionally, these problems are studied in the real RAM, a model of computation that assumes that the storage and comparison of real-valued numbers can be done in constant space and time, with infinite precision. The complexity class ER is often called a real RAM analogue of NP, since the problem ETR can be viewed as the real-valued variant of SAT. In this paper we prove a real RAM analogue to the Cook-Levin theorem which shows that ER membership is equivalent to having a verification algorithm that runs in polynomial-time on a real RAM. This gives an easy proof of ER-membership, as verification algorithms on a real RAM are much more versatile than ETR-formulas. We use this result to construct a framework to study ER-complete problems under smoothed analysis. We show that for a wide class of ER-complete problems, its witness can be represented with logarithmic input-precision by using smoothed analysis on its real RAM verification algorithm. This shows in a formal way that the boundary between NP and ER (formed by inputs whose solution witness needs high input-precision) consists of contrived input. We apply our framework to well-studied ER-complete recognition problems which have the exponential bit phenomenon such as the recognition of realizable order types or the Steinitz problem in fixed dimension.
[ { "created": "Wed, 4 Dec 2019 22:12:17 GMT", "version": "v1" }, { "created": "Fri, 11 Sep 2020 11:28:42 GMT", "version": "v2" }, { "created": "Thu, 18 Nov 2021 16:17:59 GMT", "version": "v3" } ]
2021-11-19
[ [ "Erickson", "Jeff", "" ], [ "van der Hoog", "Ivor", "" ], [ "Miltzow", "Tillmann", "" ] ]
We study algorithmic problems that belong to the complexity class of the existential theory of the reals (ER). A problem is ER-complete if it is as hard as the problem ETR and if it can be written as an ETR formula. Traditionally, these problems are studied in the real RAM, a model of computation that assumes that the storage and comparison of real-valued numbers can be done in constant space and time, with infinite precision. The complexity class ER is often called a real RAM analogue of NP, since the problem ETR can be viewed as the real-valued variant of SAT. In this paper we prove a real RAM analogue to the Cook-Levin theorem which shows that ER membership is equivalent to having a verification algorithm that runs in polynomial-time on a real RAM. This gives an easy proof of ER-membership, as verification algorithms on a real RAM are much more versatile than ETR-formulas. We use this result to construct a framework to study ER-complete problems under smoothed analysis. We show that for a wide class of ER-complete problems, its witness can be represented with logarithmic input-precision by using smoothed analysis on its real RAM verification algorithm. This shows in a formal way that the boundary between NP and ER (formed by inputs whose solution witness needs high input-precision) consists of contrived input. We apply our framework to well-studied ER-complete recognition problems which have the exponential bit phenomenon such as the recognition of realizable order types or the Steinitz problem in fixed dimension.
1503.04067
Zwi Altman
Abdoulaye TALL, Zwi Altman, Eitan Altman (INRIA Sophia Antipolis)
Virtual sectorization: design and self-optimization
VTC2015-Spring, 5th International Workshop on Self-Organizing Networks (IWSON), May 2015, Glasgow, United Kingdom
null
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Virtual Sectorization (ViSn) aims at covering a confined area such as a traffic hot-spot using a narrow beam. The beam is generated by a remote antenna array located at-or close to the Base Station (BS). This paper develops the ViSn model and provides the guidelines for designing the Virtual Sector (ViS) antenna. In order to mitigate interference between the ViS and the traditional macro sector covering the rest of the area, a Dynamic Spectrum Allocation (DSA) algorithm that self-optimizes the frequency bandwidth split between the macro cell and the ViS is also proposed. The Self-Organizing Network (SON) algorithm is constructed to maximize the proportional fair utility of all the users throughputs. Numerical simulations show the interest in deploying ViSn, and the significant capacity gain brought about by the self-optimized bandwidth sharing with respect to a full reuse of the bandwidth by the ViS.
[ { "created": "Fri, 13 Mar 2015 13:50:13 GMT", "version": "v1" } ]
2015-03-16
[ [ "TALL", "Abdoulaye", "", "INRIA Sophia Antipolis" ], [ "Altman", "Zwi", "", "INRIA Sophia Antipolis" ], [ "Altman", "Eitan", "", "INRIA Sophia Antipolis" ] ]
Virtual Sectorization (ViSn) aims at covering a confined area such as a traffic hot-spot using a narrow beam. The beam is generated by a remote antenna array located at-or close to the Base Station (BS). This paper develops the ViSn model and provides the guidelines for designing the Virtual Sector (ViS) antenna. In order to mitigate interference between the ViS and the traditional macro sector covering the rest of the area, a Dynamic Spectrum Allocation (DSA) algorithm that self-optimizes the frequency bandwidth split between the macro cell and the ViS is also proposed. The Self-Organizing Network (SON) algorithm is constructed to maximize the proportional fair utility of all the users throughputs. Numerical simulations show the interest in deploying ViSn, and the significant capacity gain brought about by the self-optimized bandwidth sharing with respect to a full reuse of the bandwidth by the ViS.
2403.13869
Ruoxuan Bai
Ruoxuan Bai, Jingxuan Yang, Weiduo Gong, Yi Zhang, Qiujing Lu and Shuo Feng
Accurately Predicting Probabilities of Safety-Critical Rare Events for Intelligent Systems
null
null
null
null
cs.LG cs.AI
http://creativecommons.org/licenses/by/4.0/
Intelligent systems are increasingly integral to our daily lives, yet rare safety-critical events present significant latent threats to their practical deployment. Addressing this challenge hinges on accurately predicting the probability of safety-critical events occurring within a given time step from the current state, a metric we define as 'criticality'. The complexity of predicting criticality arises from the extreme data imbalance caused by rare events in high dimensional variables associated with the rare events, a challenge we refer to as the curse of rarity. Existing methods tend to be either overly conservative or prone to overlooking safety-critical events, thus struggling to achieve both high precision and recall rates, which severely limits their applicability. This study endeavors to develop a criticality prediction model that excels in both precision and recall rates for evaluating the criticality of safety-critical autonomous systems. We propose a multi-stage learning framework designed to progressively densify the dataset, mitigating the curse of rarity across stages. To validate our approach, we evaluate it in two cases: lunar lander and bipedal walker scenarios. The results demonstrate that our method surpasses traditional approaches, providing a more accurate and dependable assessment of criticality in intelligent systems.
[ { "created": "Wed, 20 Mar 2024 14:00:29 GMT", "version": "v1" }, { "created": "Fri, 22 Mar 2024 10:59:56 GMT", "version": "v2" }, { "created": "Fri, 5 Apr 2024 15:48:03 GMT", "version": "v3" } ]
2024-04-08
[ [ "Bai", "Ruoxuan", "" ], [ "Yang", "Jingxuan", "" ], [ "Gong", "Weiduo", "" ], [ "Zhang", "Yi", "" ], [ "Lu", "Qiujing", "" ], [ "Feng", "Shuo", "" ] ]
Intelligent systems are increasingly integral to our daily lives, yet rare safety-critical events present significant latent threats to their practical deployment. Addressing this challenge hinges on accurately predicting the probability of safety-critical events occurring within a given time step from the current state, a metric we define as 'criticality'. The complexity of predicting criticality arises from the extreme data imbalance caused by rare events in high dimensional variables associated with the rare events, a challenge we refer to as the curse of rarity. Existing methods tend to be either overly conservative or prone to overlooking safety-critical events, thus struggling to achieve both high precision and recall rates, which severely limits their applicability. This study endeavors to develop a criticality prediction model that excels in both precision and recall rates for evaluating the criticality of safety-critical autonomous systems. We propose a multi-stage learning framework designed to progressively densify the dataset, mitigating the curse of rarity across stages. To validate our approach, we evaluate it in two cases: lunar lander and bipedal walker scenarios. The results demonstrate that our method surpasses traditional approaches, providing a more accurate and dependable assessment of criticality in intelligent systems.
2309.06285
Jerrin Bright
Bavesh Balaji, Jerrin Bright, Harish Prakash, Yuhao Chen, David A Clausi and John Zelek
Jersey Number Recognition using Keyframe Identification from Low-Resolution Broadcast Videos
Accepted in the 6th International Workshop on Multimedia Content Analysis in Sports (MMSports'23) @ ACM Multimedia
null
null
null
cs.CV cs.AI
http://creativecommons.org/licenses/by/4.0/
Player identification is a crucial component in vision-driven soccer analytics, enabling various downstream tasks such as player assessment, in-game analysis, and broadcast production. However, automatically detecting jersey numbers from player tracklets in videos presents challenges due to motion blur, low resolution, distortions, and occlusions. Existing methods, utilizing Spatial Transformer Networks, CNNs, and Vision Transformers, have shown success in image data but struggle with real-world video data, where jersey numbers are not visible in most of the frames. Hence, identifying frames that contain the jersey number is a key sub-problem to tackle. To address these issues, we propose a robust keyframe identification module that extracts frames containing essential high-level information about the jersey number. A spatio-temporal network is then employed to model spatial and temporal context and predict the probabilities of jersey numbers in the video. Additionally, we adopt a multi-task loss function to predict the probability distribution of each digit separately. Extensive evaluations on the SoccerNet dataset demonstrate that incorporating our proposed keyframe identification module results in a significant 37.81% and 37.70% increase in the accuracies of 2 different test sets with domain gaps. These results highlight the effectiveness and importance of our approach in tackling the challenges of automatic jersey number detection in sports videos.
[ { "created": "Tue, 12 Sep 2023 14:43:50 GMT", "version": "v1" } ]
2023-09-13
[ [ "Balaji", "Bavesh", "" ], [ "Bright", "Jerrin", "" ], [ "Prakash", "Harish", "" ], [ "Chen", "Yuhao", "" ], [ "Clausi", "David A", "" ], [ "Zelek", "John", "" ] ]
Player identification is a crucial component in vision-driven soccer analytics, enabling various downstream tasks such as player assessment, in-game analysis, and broadcast production. However, automatically detecting jersey numbers from player tracklets in videos presents challenges due to motion blur, low resolution, distortions, and occlusions. Existing methods, utilizing Spatial Transformer Networks, CNNs, and Vision Transformers, have shown success in image data but struggle with real-world video data, where jersey numbers are not visible in most of the frames. Hence, identifying frames that contain the jersey number is a key sub-problem to tackle. To address these issues, we propose a robust keyframe identification module that extracts frames containing essential high-level information about the jersey number. A spatio-temporal network is then employed to model spatial and temporal context and predict the probabilities of jersey numbers in the video. Additionally, we adopt a multi-task loss function to predict the probability distribution of each digit separately. Extensive evaluations on the SoccerNet dataset demonstrate that incorporating our proposed keyframe identification module results in a significant 37.81% and 37.70% increase in the accuracies of 2 different test sets with domain gaps. These results highlight the effectiveness and importance of our approach in tackling the challenges of automatic jersey number detection in sports videos.
1309.1796
Sean Ahern
Sean Ahern and Eric Brugger and Brad Whitlock and Jeremy S. Meredith and Kathleen Biagas and Mark C. Miller and Hank Childs
VisIt: Experiences with Sustainable Software
null
null
null
null
cs.SE
http://creativecommons.org/licenses/by/3.0/
The success of the VisIt visualization system has been wholly dependent upon the culture and practices of software development that have fostered its welcome by users and embrace by developers and researchers. In the following paper, we, the founding developers and designers of VisIt, summarize some of the major efforts, both successful and unsuccessful, that we have undertaken in the last thirteen years to foster community, encourage research, create a sustainable open-source development model, measure impact, and support production software. We also provide commentary about the career paths that our development work has engendered.
[ { "created": "Sat, 7 Sep 2013 00:16:52 GMT", "version": "v1" } ]
2013-09-10
[ [ "Ahern", "Sean", "" ], [ "Brugger", "Eric", "" ], [ "Whitlock", "Brad", "" ], [ "Meredith", "Jeremy S.", "" ], [ "Biagas", "Kathleen", "" ], [ "Miller", "Mark C.", "" ], [ "Childs", "Hank", "" ] ]
The success of the VisIt visualization system has been wholly dependent upon the culture and practices of software development that have fostered its welcome by users and embrace by developers and researchers. In the following paper, we, the founding developers and designers of VisIt, summarize some of the major efforts, both successful and unsuccessful, that we have undertaken in the last thirteen years to foster community, encourage research, create a sustainable open-source development model, measure impact, and support production software. We also provide commentary about the career paths that our development work has engendered.
2211.04250
Nishtha Madaan
Nishtha Madaan, Adithya Manjunatha, Hrithik Nambiar, Aviral Kumar Goel, Harivansh Kumar, Diptikalyan Saha, Srikanta Bedathur
DetAIL : A Tool to Automatically Detect and Analyze Drift In Language
null
null
null
null
cs.LG cs.AI cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Machine learning and deep learning-based decision making has become part of today's software. The goal of this work is to ensure that machine learning and deep learning-based systems are as trusted as traditional software. Traditional software is made dependable by following rigorous practice like static analysis, testing, debugging, verifying, and repairing throughout the development and maintenance life-cycle. Similarly for machine learning systems, we need to keep these models up to date so that their performance is not compromised. For this, current systems rely on scheduled re-training of these models as new data kicks in. In this work, we propose to measure the data drift that takes place when new data kicks in so that one can adaptively re-train the models whenever re-training is actually required irrespective of schedules. In addition to that, we generate various explanations at sentence level and dataset level to capture why a given payload text has drifted.
[ { "created": "Thu, 3 Nov 2022 19:50:12 GMT", "version": "v1" } ]
2022-11-09
[ [ "Madaan", "Nishtha", "" ], [ "Manjunatha", "Adithya", "" ], [ "Nambiar", "Hrithik", "" ], [ "Goel", "Aviral Kumar", "" ], [ "Kumar", "Harivansh", "" ], [ "Saha", "Diptikalyan", "" ], [ "Bedathur", "Srikanta", "" ] ]
Machine learning and deep learning-based decision making has become part of today's software. The goal of this work is to ensure that machine learning and deep learning-based systems are as trusted as traditional software. Traditional software is made dependable by following rigorous practice like static analysis, testing, debugging, verifying, and repairing throughout the development and maintenance life-cycle. Similarly for machine learning systems, we need to keep these models up to date so that their performance is not compromised. For this, current systems rely on scheduled re-training of these models as new data kicks in. In this work, we propose to measure the data drift that takes place when new data kicks in so that one can adaptively re-train the models whenever re-training is actually required irrespective of schedules. In addition to that, we generate various explanations at sentence level and dataset level to capture why a given payload text has drifted.
2102.07457
Florian De Vuyst J
Florian De Vuyst
Efficient solvers for shallow-water Saint-Venant equations and debris transportation-deposition models
arXiv admin note: text overlap with arXiv:1607.08710
null
null
null
cs.CE cs.NA math.NA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This research is aimed at achieving an efficient digital infrastructure for evaluating risks and damages caused by tsunami flooding. It is mainly focused on the suitable modeling of debris dynamics for a simple (but accurate enough) assessment of damages. For different reasons including computational performance and Big Data management issues, we focus our research on Eulerian debris flow modeling. Rather than using complex multiphase debris models, we rather use an empirical transportation and deposition model that takes into account the interaction with the main water flow, friction/contact with the ground but also debris interaction. In particular, for debris interaction, we have used ideas coming from vehicular traffic flow modeling. We introduce a velocity regularization term similar to the so-called ``anticipation term'' in traffic flow modeling that takes into account the local flow between neighboring debris and makes the problem mathematically well-posed. It prevents from the generation of ``Dirac measures of debris'' at shock waves. As a result, the model is able to capture emerging phenomenons like debris aggregation and accumulations, and possibly to react on the main flow by creating hills of debris and make the main stream deviate. We also discuss the way to derive quantities of interest (QoI), especially ``damage functions'' from the debris density and momentum fields. We believe that this original unexplored debris approach can lead to a valuable analysis of tsunami flooding damage assessment with Physics-based damage functions. Numerical experiments show the nice behaviour of the numerical solvers, including the solution of Saint-Venant's shallow water equations and debris dynamics equations.
[ { "created": "Mon, 15 Feb 2021 11:09:35 GMT", "version": "v1" } ]
2021-02-16
[ [ "De Vuyst", "Florian", "" ] ]
This research is aimed at achieving an efficient digital infrastructure for evaluating risks and damages caused by tsunami flooding. It is mainly focused on the suitable modeling of debris dynamics for a simple (but accurate enough) assessment of damages. For different reasons including computational performance and Big Data management issues, we focus our research on Eulerian debris flow modeling. Rather than using complex multiphase debris models, we rather use an empirical transportation and deposition model that takes into account the interaction with the main water flow, friction/contact with the ground but also debris interaction. In particular, for debris interaction, we have used ideas coming from vehicular traffic flow modeling. We introduce a velocity regularization term similar to the so-called ``anticipation term'' in traffic flow modeling that takes into account the local flow between neighboring debris and makes the problem mathematically well-posed. It prevents from the generation of ``Dirac measures of debris'' at shock waves. As a result, the model is able to capture emerging phenomenons like debris aggregation and accumulations, and possibly to react on the main flow by creating hills of debris and make the main stream deviate. We also discuss the way to derive quantities of interest (QoI), especially ``damage functions'' from the debris density and momentum fields. We believe that this original unexplored debris approach can lead to a valuable analysis of tsunami flooding damage assessment with Physics-based damage functions. Numerical experiments show the nice behaviour of the numerical solvers, including the solution of Saint-Venant's shallow water equations and debris dynamics equations.
1905.13686
Federico Baldassarre
Federico Baldassarre, Hossein Azizpour
Explainability Techniques for Graph Convolutional Networks
Accepted at the ICML 2019 Workshop "Learning and Reasoning with Graph-Structured Representations" (poster + spotlight talk)
null
null
null
cs.LG cs.AI stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Graph Networks are used to make decisions in potentially complex scenarios but it is usually not obvious how or why they made them. In this work, we study the explainability of Graph Network decisions using two main classes of techniques, gradient-based and decomposition-based, on a toy dataset and a chemistry task. Our study sets the ground for future development as well as application to real-world problems.
[ { "created": "Fri, 31 May 2019 15:51:29 GMT", "version": "v1" } ]
2019-06-03
[ [ "Baldassarre", "Federico", "" ], [ "Azizpour", "Hossein", "" ] ]
Graph Networks are used to make decisions in potentially complex scenarios but it is usually not obvious how or why they made them. In this work, we study the explainability of Graph Network decisions using two main classes of techniques, gradient-based and decomposition-based, on a toy dataset and a chemistry task. Our study sets the ground for future development as well as application to real-world problems.
2305.14177
Chris Beeler
Chris Beeler, Sriram Ganapathi Subramanian, Kyle Sprague, Nouha Chatti, Colin Bellinger, Mitchell Shahen, Nicholas Paquin, Mark Baula, Amanuel Dawit, Zihan Yang, Xinkai Li, Mark Crowley, Isaac Tamblyn
ChemGymRL: An Interactive Framework for Reinforcement Learning for Digital Chemistry
19 pages, 13 figures, 2 tables
null
null
null
cs.LG physics.chem-ph
http://creativecommons.org/licenses/by/4.0/
This paper provides a simulated laboratory for making use of Reinforcement Learning (RL) for chemical discovery. Since RL is fairly data intensive, training agents `on-the-fly' by taking actions in the real world is infeasible and possibly dangerous. Moreover, chemical processing and discovery involves challenges which are not commonly found in RL benchmarks and therefore offer a rich space to work in. We introduce a set of highly customizable and open-source RL environments, ChemGymRL, based on the standard Open AI Gym template. ChemGymRL supports a series of interconnected virtual chemical benches where RL agents can operate and train. The paper introduces and details each of these benches using well-known chemical reactions as illustrative examples, and trains a set of standard RL algorithms in each of these benches. Finally, discussion and comparison of the performances of several standard RL methods are provided in addition to a list of directions for future work as a vision for the further development and usage of ChemGymRL.
[ { "created": "Tue, 23 May 2023 15:56:17 GMT", "version": "v1" } ]
2023-05-24
[ [ "Beeler", "Chris", "" ], [ "Subramanian", "Sriram Ganapathi", "" ], [ "Sprague", "Kyle", "" ], [ "Chatti", "Nouha", "" ], [ "Bellinger", "Colin", "" ], [ "Shahen", "Mitchell", "" ], [ "Paquin", "Nicholas", "" ], [ "Baula", "Mark", "" ], [ "Dawit", "Amanuel", "" ], [ "Yang", "Zihan", "" ], [ "Li", "Xinkai", "" ], [ "Crowley", "Mark", "" ], [ "Tamblyn", "Isaac", "" ] ]
This paper provides a simulated laboratory for making use of Reinforcement Learning (RL) for chemical discovery. Since RL is fairly data intensive, training agents `on-the-fly' by taking actions in the real world is infeasible and possibly dangerous. Moreover, chemical processing and discovery involves challenges which are not commonly found in RL benchmarks and therefore offer a rich space to work in. We introduce a set of highly customizable and open-source RL environments, ChemGymRL, based on the standard Open AI Gym template. ChemGymRL supports a series of interconnected virtual chemical benches where RL agents can operate and train. The paper introduces and details each of these benches using well-known chemical reactions as illustrative examples, and trains a set of standard RL algorithms in each of these benches. Finally, discussion and comparison of the performances of several standard RL methods are provided in addition to a list of directions for future work as a vision for the further development and usage of ChemGymRL.
1501.04140
Mahdi Salarian mr
H. Miar Naimi, M. Salarian
A Fast Fractal Image Compression Algorithm Using Predefined Values for Contrast Scaling
5 pages
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper a new fractal image compression algorithm is proposed in which the time of encoding process is considerably reduced. The algorithm exploits a domain pool reduction approach, along with using innovative predefined values for contrast scaling factor, S, instead of scanning the parameter space [0,1]. Within this approach only domain blocks with entropies greater than a threshold are considered. As a novel point, it is assumed that in each step of the encoding process, the domain block with small enough distance shall be found only for the range blocks with low activity (equivalently low entropy). This novel point is used to find reasonable estimations of S, and use them in the encoding process as predefined values, mentioned above. The algorithm has been examined for some well-known images. This result shows that our proposed algorithm considerably reduces the encoding time producing images that are approximately the same in quality.
[ { "created": "Sat, 17 Jan 2015 01:10:32 GMT", "version": "v1" } ]
2015-01-20
[ [ "Naimi", "H. Miar", "" ], [ "Salarian", "M.", "" ] ]
In this paper a new fractal image compression algorithm is proposed in which the time of encoding process is considerably reduced. The algorithm exploits a domain pool reduction approach, along with using innovative predefined values for contrast scaling factor, S, instead of scanning the parameter space [0,1]. Within this approach only domain blocks with entropies greater than a threshold are considered. As a novel point, it is assumed that in each step of the encoding process, the domain block with small enough distance shall be found only for the range blocks with low activity (equivalently low entropy). This novel point is used to find reasonable estimations of S, and use them in the encoding process as predefined values, mentioned above. The algorithm has been examined for some well-known images. This result shows that our proposed algorithm considerably reduces the encoding time producing images that are approximately the same in quality.
1406.4757
Anthony Bagnall Dr
Anthony Bagnall and Jason Lines
An Experimental Evaluation of Nearest Neighbour Time Series Classification
null
null
null
CMP-C14-01
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Data mining research into time series classification (TSC) has focussed on alternative distance measures for nearest neighbour classifiers. It is standard practice to use 1-NN with Euclidean or dynamic time warping (DTW) distance as a straw man for comparison. As part of a wider investigation into elastic distance measures for TSC~\cite{lines14elastic}, we perform a series of experiments to test whether this standard practice is valid. Specifically, we compare 1-NN classifiers with Euclidean and DTW distance to standard classifiers, examine whether the performance of 1-NN Euclidean approaches that of 1-NN DTW as the number of cases increases, assess whether there is any benefit of setting $k$ for $k$-NN through cross validation whether it is worth setting the warping path for DTW through cross validation and finally is it better to use a window or weighting for DTW. Based on experiments on 77 problems, we conclude that 1-NN with Euclidean distance is fairly easy to beat but 1-NN with DTW is not, if window size is set through cross validation.
[ { "created": "Wed, 18 Jun 2014 15:09:21 GMT", "version": "v1" } ]
2014-06-19
[ [ "Bagnall", "Anthony", "" ], [ "Lines", "Jason", "" ] ]
Data mining research into time series classification (TSC) has focussed on alternative distance measures for nearest neighbour classifiers. It is standard practice to use 1-NN with Euclidean or dynamic time warping (DTW) distance as a straw man for comparison. As part of a wider investigation into elastic distance measures for TSC~\cite{lines14elastic}, we perform a series of experiments to test whether this standard practice is valid. Specifically, we compare 1-NN classifiers with Euclidean and DTW distance to standard classifiers, examine whether the performance of 1-NN Euclidean approaches that of 1-NN DTW as the number of cases increases, assess whether there is any benefit of setting $k$ for $k$-NN through cross validation whether it is worth setting the warping path for DTW through cross validation and finally is it better to use a window or weighting for DTW. Based on experiments on 77 problems, we conclude that 1-NN with Euclidean distance is fairly easy to beat but 1-NN with DTW is not, if window size is set through cross validation.
1907.07501
Andrew Pitts
Andrew M. Pitts
Typal Heterogeneous Equality Types
13 pages
ACM Trans. Comput. Logic 21, 3, Article 25 (March 2020), 10 pages
10.1145/3379447
null
cs.LO
http://creativecommons.org/licenses/by/4.0/
The usual homogeneous form of equality type in Martin-L\"of Type Theory contains identifications between elements of the same type. By contrast, the heterogeneous form of equality contains identifications between elements of possibly different types. This paper introduces a simple set of axioms for such types. The axioms are equivalent to the combination of systematic elimination rules for both forms of equality, albeit with typal (also known as "propositional") computation properties, together with Streicher's Axiom K, or equivalently, the principle of uniqueness of identity proofs.
[ { "created": "Wed, 17 Jul 2019 13:20:16 GMT", "version": "v1" }, { "created": "Mon, 13 Jan 2020 17:27:29 GMT", "version": "v2" } ]
2022-03-15
[ [ "Pitts", "Andrew M.", "" ] ]
The usual homogeneous form of equality type in Martin-L\"of Type Theory contains identifications between elements of the same type. By contrast, the heterogeneous form of equality contains identifications between elements of possibly different types. This paper introduces a simple set of axioms for such types. The axioms are equivalent to the combination of systematic elimination rules for both forms of equality, albeit with typal (also known as "propositional") computation properties, together with Streicher's Axiom K, or equivalently, the principle of uniqueness of identity proofs.
2111.02655
Zhi-Gang Wang
Zhigang Wang, Ye Deng, Petter Holme, Zengru Di, Linyuan Lv, Jun Wu
Cost-effective Network Disintegration through Targeted Enumeration
9 pages, 5 figures
null
null
null
cs.SI physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Finding an optimal subset of nodes or links to disintegrate harmful networks is a fundamental problem in network science, with potential applications to anti-terrorism, epidemic control, and many other fields of study. The challenge of the network disintegration problem is to balance the effectiveness and efficiency of strategies. In this paper, we propose a cost-effective targeted enumeration method for network disintegration. The proposed approach includes two stages: searching for candidate objects and identifying an optimal solution. In the first stage, we use rank aggregation to generate a comprehensive ranking of node importance, upon which we identify a small-scale candidate set of nodes to remove. In the second stage, we use an enumeration method to find an optimal combination among the candidate nodes. Extensive experimental results on synthetic and real-world networks demonstrate that the proposed method achieves a satisfying trade-off between effectiveness and efficiency. The introduced two-stage targeted enumeration framework can also be applied to other computationally intractable combinational optimization problems, from team assembly via portfolio investment to drug design.
[ { "created": "Thu, 4 Nov 2021 06:47:11 GMT", "version": "v1" }, { "created": "Fri, 26 Aug 2022 12:20:12 GMT", "version": "v2" } ]
2022-08-29
[ [ "Wang", "Zhigang", "" ], [ "Deng", "Ye", "" ], [ "Holme", "Petter", "" ], [ "Di", "Zengru", "" ], [ "Lv", "Linyuan", "" ], [ "Wu", "Jun", "" ] ]
Finding an optimal subset of nodes or links to disintegrate harmful networks is a fundamental problem in network science, with potential applications to anti-terrorism, epidemic control, and many other fields of study. The challenge of the network disintegration problem is to balance the effectiveness and efficiency of strategies. In this paper, we propose a cost-effective targeted enumeration method for network disintegration. The proposed approach includes two stages: searching for candidate objects and identifying an optimal solution. In the first stage, we use rank aggregation to generate a comprehensive ranking of node importance, upon which we identify a small-scale candidate set of nodes to remove. In the second stage, we use an enumeration method to find an optimal combination among the candidate nodes. Extensive experimental results on synthetic and real-world networks demonstrate that the proposed method achieves a satisfying trade-off between effectiveness and efficiency. The introduced two-stage targeted enumeration framework can also be applied to other computationally intractable combinational optimization problems, from team assembly via portfolio investment to drug design.
2405.17901
Erdem Akag\"und\"uz
Irem Ulku, O. Ozgur Tanriover, Erdem Akag\"und\"uz
Near-Infrared and Low-Rank Adaptation of Vision Transformers in Remote Sensing
7 pages, 3 figures, 3 tables
null
null
null
cs.CV cs.AI
http://creativecommons.org/licenses/by/4.0/
Plant health can be monitored dynamically using multispectral sensors that measure Near-Infrared reflectance (NIR). Despite this potential, obtaining and annotating high-resolution NIR images poses a significant challenge for training deep neural networks. Typically, large networks pre-trained on the RGB domain are utilized to fine-tune infrared images. This practice introduces a domain shift issue because of the differing visual traits between RGB and NIR images.As an alternative to fine-tuning, a method called low-rank adaptation (LoRA) enables more efficient training by optimizing rank-decomposition matrices while keeping the original network weights frozen. However, existing parameter-efficient adaptation strategies for remote sensing images focus on RGB images and overlook domain shift issues in the NIR domain. Therefore, this study investigates the potential benefits of using vision transformer (ViT) backbones pre-trained in the RGB domain, with low-rank adaptation for downstream tasks in the NIR domain. Extensive experiments demonstrate that employing LoRA with pre-trained ViT backbones yields the best performance for downstream tasks applied to NIR images.
[ { "created": "Tue, 28 May 2024 07:24:07 GMT", "version": "v1" } ]
2024-05-29
[ [ "Ulku", "Irem", "" ], [ "Tanriover", "O. Ozgur", "" ], [ "Akagündüz", "Erdem", "" ] ]
Plant health can be monitored dynamically using multispectral sensors that measure Near-Infrared reflectance (NIR). Despite this potential, obtaining and annotating high-resolution NIR images poses a significant challenge for training deep neural networks. Typically, large networks pre-trained on the RGB domain are utilized to fine-tune infrared images. This practice introduces a domain shift issue because of the differing visual traits between RGB and NIR images.As an alternative to fine-tuning, a method called low-rank adaptation (LoRA) enables more efficient training by optimizing rank-decomposition matrices while keeping the original network weights frozen. However, existing parameter-efficient adaptation strategies for remote sensing images focus on RGB images and overlook domain shift issues in the NIR domain. Therefore, this study investigates the potential benefits of using vision transformer (ViT) backbones pre-trained in the RGB domain, with low-rank adaptation for downstream tasks in the NIR domain. Extensive experiments demonstrate that employing LoRA with pre-trained ViT backbones yields the best performance for downstream tasks applied to NIR images.
1912.02398
Jie An
Jie An, Haoyi Xiong, Jun Huan and Jiebo Luo
Ultrafast Photorealistic Style Transfer via Neural Architecture Search
null
null
null
null
cs.CV cs.LG eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The key challenge in photorealistic style transfer is that an algorithm should faithfully transfer the style of a reference photo to a content photo while the generated image should look like one captured by a camera. Although several photorealistic style transfer algorithms have been proposed, they need to rely on post- and/or pre-processing to make the generated images look photorealistic. If we disable the additional processing, these algorithms would fail to produce plausible photorealistic stylization in terms of detail preservation and photorealism. In this work, we propose an effective solution to these issues. Our method consists of a construction step (C-step) to build a photorealistic stylization network and a pruning step (P-step) for acceleration. In the C-step, we propose a dense auto-encoder named PhotoNet based on a carefully designed pre-analysis. PhotoNet integrates a feature aggregation module (BFA) and instance normalized skip links (INSL). To generate faithful stylization, we introduce multiple style transfer modules in the decoder and INSLs. PhotoNet significantly outperforms existing algorithms in terms of both efficiency and effectiveness. In the P-step, we adopt a neural architecture search method to accelerate PhotoNet. We propose an automatic network pruning framework in the manner of teacher-student learning for photorealistic stylization. The network architecture named PhotoNAS resulted from the search achieves significant acceleration over PhotoNet while keeping the stylization effects almost intact. We conduct extensive experiments on both image and video transfer. The results show that our method can produce favorable results while achieving 20-30 times acceleration in comparison with the existing state-of-the-art approaches. It is worth noting that the proposed algorithm accomplishes better performance without any pre- or post-processing.
[ { "created": "Thu, 5 Dec 2019 05:51:54 GMT", "version": "v1" }, { "created": "Mon, 22 Jun 2020 12:56:01 GMT", "version": "v2" } ]
2020-06-23
[ [ "An", "Jie", "" ], [ "Xiong", "Haoyi", "" ], [ "Huan", "Jun", "" ], [ "Luo", "Jiebo", "" ] ]
The key challenge in photorealistic style transfer is that an algorithm should faithfully transfer the style of a reference photo to a content photo while the generated image should look like one captured by a camera. Although several photorealistic style transfer algorithms have been proposed, they need to rely on post- and/or pre-processing to make the generated images look photorealistic. If we disable the additional processing, these algorithms would fail to produce plausible photorealistic stylization in terms of detail preservation and photorealism. In this work, we propose an effective solution to these issues. Our method consists of a construction step (C-step) to build a photorealistic stylization network and a pruning step (P-step) for acceleration. In the C-step, we propose a dense auto-encoder named PhotoNet based on a carefully designed pre-analysis. PhotoNet integrates a feature aggregation module (BFA) and instance normalized skip links (INSL). To generate faithful stylization, we introduce multiple style transfer modules in the decoder and INSLs. PhotoNet significantly outperforms existing algorithms in terms of both efficiency and effectiveness. In the P-step, we adopt a neural architecture search method to accelerate PhotoNet. We propose an automatic network pruning framework in the manner of teacher-student learning for photorealistic stylization. The network architecture named PhotoNAS resulted from the search achieves significant acceleration over PhotoNet while keeping the stylization effects almost intact. We conduct extensive experiments on both image and video transfer. The results show that our method can produce favorable results while achieving 20-30 times acceleration in comparison with the existing state-of-the-art approaches. It is worth noting that the proposed algorithm accomplishes better performance without any pre- or post-processing.
2311.02736
Saeed Saadatnejad
Saeed Saadatnejad, Yang Gao, Hamid Rezatofighi, Alexandre Alahi
JRDB-Traj: A Dataset and Benchmark for Trajectory Forecasting in Crowds
null
null
null
null
cs.RO cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Predicting future trajectories is critical in autonomous navigation, especially in preventing accidents involving humans, where a predictive agent's ability to anticipate in advance is of utmost importance. Trajectory forecasting models, employed in fields such as robotics, autonomous vehicles, and navigation, face challenges in real-world scenarios, often due to the isolation of model components. To address this, we introduce a novel dataset for end-to-end trajectory forecasting, facilitating the evaluation of models in scenarios involving less-than-ideal preceding modules such as tracking. This dataset, an extension of the JRDB dataset, provides comprehensive data, including the locations of all agents, scene images, and point clouds, all from the robot's perspective. The objective is to predict the future positions of agents relative to the robot using raw sensory input data. It bridges the gap between isolated models and practical applications, promoting a deeper understanding of navigation dynamics. Additionally, we introduce a novel metric for assessing trajectory forecasting models in real-world scenarios where ground-truth identities are inaccessible, addressing issues related to undetected or over-detected agents. Researchers are encouraged to use our benchmark for model evaluation and benchmarking.
[ { "created": "Sun, 5 Nov 2023 18:59:31 GMT", "version": "v1" } ]
2023-11-07
[ [ "Saadatnejad", "Saeed", "" ], [ "Gao", "Yang", "" ], [ "Rezatofighi", "Hamid", "" ], [ "Alahi", "Alexandre", "" ] ]
Predicting future trajectories is critical in autonomous navigation, especially in preventing accidents involving humans, where a predictive agent's ability to anticipate in advance is of utmost importance. Trajectory forecasting models, employed in fields such as robotics, autonomous vehicles, and navigation, face challenges in real-world scenarios, often due to the isolation of model components. To address this, we introduce a novel dataset for end-to-end trajectory forecasting, facilitating the evaluation of models in scenarios involving less-than-ideal preceding modules such as tracking. This dataset, an extension of the JRDB dataset, provides comprehensive data, including the locations of all agents, scene images, and point clouds, all from the robot's perspective. The objective is to predict the future positions of agents relative to the robot using raw sensory input data. It bridges the gap between isolated models and practical applications, promoting a deeper understanding of navigation dynamics. Additionally, we introduce a novel metric for assessing trajectory forecasting models in real-world scenarios where ground-truth identities are inaccessible, addressing issues related to undetected or over-detected agents. Researchers are encouraged to use our benchmark for model evaluation and benchmarking.
1806.02739
Alban Laflaqui\`ere Dr
Alban Laflaqui\`ere, J.Kevin O'Regan, Bruno Gas, Alexander Terekhov
Discovering space - Grounding spatial topology and metric regularity in a naive agent's sensorimotor experience
59 pages, 16 figures, submitted to Neural Networks
null
null
null
cs.RO cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In line with the sensorimotor contingency theory, we investigate the problem of the perception of space from a fundamental sensorimotor perspective. Despite its pervasive nature in our perception of the world, the origin of the concept of space remains largely mysterious. For example in the context of artificial perception, this issue is usually circumvented by having engineers pre-define the spatial structure of the problem the agent has to face. We here show that the structure of space can be autonomously discovered by a naive agent in the form of sensorimotor regularities, that correspond to so called compensable sensory experiences: these are experiences that can be generated either by the agent or its environment. By detecting such compensable experiences the agent can infer the topological and metric structure of the external space in which its body is moving. We propose a theoretical description of the nature of these regularities and illustrate the approach on a simulated robotic arm equipped with an eye-like sensor, and which interacts with an object. Finally we show how these regularities can be used to build an internal representation of the sensor's external spatial configuration.
[ { "created": "Thu, 7 Jun 2018 15:51:21 GMT", "version": "v1" }, { "created": "Wed, 3 Oct 2018 11:47:51 GMT", "version": "v2" } ]
2018-10-04
[ [ "Laflaquière", "Alban", "" ], [ "O'Regan", "J. Kevin", "" ], [ "Gas", "Bruno", "" ], [ "Terekhov", "Alexander", "" ] ]
In line with the sensorimotor contingency theory, we investigate the problem of the perception of space from a fundamental sensorimotor perspective. Despite its pervasive nature in our perception of the world, the origin of the concept of space remains largely mysterious. For example in the context of artificial perception, this issue is usually circumvented by having engineers pre-define the spatial structure of the problem the agent has to face. We here show that the structure of space can be autonomously discovered by a naive agent in the form of sensorimotor regularities, that correspond to so called compensable sensory experiences: these are experiences that can be generated either by the agent or its environment. By detecting such compensable experiences the agent can infer the topological and metric structure of the external space in which its body is moving. We propose a theoretical description of the nature of these regularities and illustrate the approach on a simulated robotic arm equipped with an eye-like sensor, and which interacts with an object. Finally we show how these regularities can be used to build an internal representation of the sensor's external spatial configuration.
2005.05754
Angrosh Mandya
Angrosh Mandya, James O'Neill, Danushka Bollegala, and Frans Coenen
Do not let the history haunt you -- Mitigating Compounding Errors in Conversational Question Answering
null
null
null
null
cs.IR cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Conversational Question Answering (CoQA) task involves answering a sequence of inter-related conversational questions about a contextual paragraph. Although existing approaches employ human-written ground-truth answers for answering conversational questions at test time, in a realistic scenario, the CoQA model will not have any access to ground-truth answers for the previous questions, compelling the model to rely upon its own previously predicted answers for answering the subsequent questions. In this paper, we find that compounding errors occur when using previously predicted answers at test time, significantly lowering the performance of CoQA systems. To solve this problem, we propose a sampling strategy that dynamically selects between target answers and model predictions during training, thereby closely simulating the situation at test time. Further, we analyse the severity of this phenomena as a function of the question type, conversation length and domain type.
[ { "created": "Tue, 12 May 2020 13:29:38 GMT", "version": "v1" } ]
2020-05-13
[ [ "Mandya", "Angrosh", "" ], [ "O'Neill", "James", "" ], [ "Bollegala", "Danushka", "" ], [ "Coenen", "Frans", "" ] ]
The Conversational Question Answering (CoQA) task involves answering a sequence of inter-related conversational questions about a contextual paragraph. Although existing approaches employ human-written ground-truth answers for answering conversational questions at test time, in a realistic scenario, the CoQA model will not have any access to ground-truth answers for the previous questions, compelling the model to rely upon its own previously predicted answers for answering the subsequent questions. In this paper, we find that compounding errors occur when using previously predicted answers at test time, significantly lowering the performance of CoQA systems. To solve this problem, we propose a sampling strategy that dynamically selects between target answers and model predictions during training, thereby closely simulating the situation at test time. Further, we analyse the severity of this phenomena as a function of the question type, conversation length and domain type.
2407.00935
Qi Zhang
Qi Zhang, Tianqi Du, Haotian Huang, Yifei Wang, Yisen Wang
Look Ahead or Look Around? A Theoretical Comparison Between Autoregressive and Masked Pretraining
null
null
null
null
cs.LG cs.CL
http://creativecommons.org/licenses/by/4.0/
In recent years, the rise of generative self-supervised learning (SSL) paradigms has exhibited impressive performance across visual, language, and multi-modal domains. While the varied designs of generative SSL objectives lead to distinct properties in downstream tasks, a theoretical understanding of these differences remains largely unexplored. In this paper, we establish the first theoretical comparisons between two leading generative SSL paradigms: autoregressive SSL and masked SSL. Through establishing theoretical frameworks, we elucidate the strengths and limitations of autoregressive and masked SSL within the primary evaluation tasks of classification and content generation. Our findings demonstrate that in classification tasks, the flexibility of targeted tokens in masked SSL fosters more inter-sample connections compared to the fixed position of target tokens in autoregressive SSL, which yields superior clustering performance. In content generation tasks, the misalignment between the flexible lengths of test samples and the fixed length of unmasked texts in masked SSL (vs. flexible lengths of conditional texts in autoregressive SSL) hinders its generation performance. To leverage each other's strengths and mitigate weaknesses, we propose diversity-enhanced autoregressive and variable-length masked objectives, which substantially improve the classification performance of autoregressive SSL and the generation performance of masked SSL. Code is available at https://github.com/PKU-ML/LookAheadLookAround.
[ { "created": "Mon, 1 Jul 2024 03:35:59 GMT", "version": "v1" } ]
2024-07-02
[ [ "Zhang", "Qi", "" ], [ "Du", "Tianqi", "" ], [ "Huang", "Haotian", "" ], [ "Wang", "Yifei", "" ], [ "Wang", "Yisen", "" ] ]
In recent years, the rise of generative self-supervised learning (SSL) paradigms has exhibited impressive performance across visual, language, and multi-modal domains. While the varied designs of generative SSL objectives lead to distinct properties in downstream tasks, a theoretical understanding of these differences remains largely unexplored. In this paper, we establish the first theoretical comparisons between two leading generative SSL paradigms: autoregressive SSL and masked SSL. Through establishing theoretical frameworks, we elucidate the strengths and limitations of autoregressive and masked SSL within the primary evaluation tasks of classification and content generation. Our findings demonstrate that in classification tasks, the flexibility of targeted tokens in masked SSL fosters more inter-sample connections compared to the fixed position of target tokens in autoregressive SSL, which yields superior clustering performance. In content generation tasks, the misalignment between the flexible lengths of test samples and the fixed length of unmasked texts in masked SSL (vs. flexible lengths of conditional texts in autoregressive SSL) hinders its generation performance. To leverage each other's strengths and mitigate weaknesses, we propose diversity-enhanced autoregressive and variable-length masked objectives, which substantially improve the classification performance of autoregressive SSL and the generation performance of masked SSL. Code is available at https://github.com/PKU-ML/LookAheadLookAround.
2104.11216
Sumith Kulal
Sumith Kulal, Jiayuan Mao, Alex Aiken, Jiajun Wu
Hierarchical Motion Understanding via Motion Programs
CVPR 2021. First two authors contributed equally. Project page: https://sumith1896.github.io/motion2prog/
null
null
null
cs.CV cs.AI cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Current approaches to video analysis of human motion focus on raw pixels or keypoints as the basic units of reasoning. We posit that adding higher-level motion primitives, which can capture natural coarser units of motion such as backswing or follow-through, can be used to improve downstream analysis tasks. This higher level of abstraction can also capture key features, such as loops of repeated primitives, that are currently inaccessible at lower levels of representation. We therefore introduce Motion Programs, a neuro-symbolic, program-like representation that expresses motions as a composition of high-level primitives. We also present a system for automatically inducing motion programs from videos of human motion and for leveraging motion programs in video synthesis. Experiments show that motion programs can accurately describe a diverse set of human motions and the inferred programs contain semantically meaningful motion primitives, such as arm swings and jumping jacks. Our representation also benefits downstream tasks such as video interpolation and video prediction and outperforms off-the-shelf models. We further demonstrate how these programs can detect diverse kinds of repetitive motion and facilitate interactive video editing.
[ { "created": "Thu, 22 Apr 2021 17:49:59 GMT", "version": "v1" } ]
2021-04-23
[ [ "Kulal", "Sumith", "" ], [ "Mao", "Jiayuan", "" ], [ "Aiken", "Alex", "" ], [ "Wu", "Jiajun", "" ] ]
Current approaches to video analysis of human motion focus on raw pixels or keypoints as the basic units of reasoning. We posit that adding higher-level motion primitives, which can capture natural coarser units of motion such as backswing or follow-through, can be used to improve downstream analysis tasks. This higher level of abstraction can also capture key features, such as loops of repeated primitives, that are currently inaccessible at lower levels of representation. We therefore introduce Motion Programs, a neuro-symbolic, program-like representation that expresses motions as a composition of high-level primitives. We also present a system for automatically inducing motion programs from videos of human motion and for leveraging motion programs in video synthesis. Experiments show that motion programs can accurately describe a diverse set of human motions and the inferred programs contain semantically meaningful motion primitives, such as arm swings and jumping jacks. Our representation also benefits downstream tasks such as video interpolation and video prediction and outperforms off-the-shelf models. We further demonstrate how these programs can detect diverse kinds of repetitive motion and facilitate interactive video editing.
1612.05864
Rahul Singh
Rahul Singh and P.R. Kumar
Dynamic Adaptive Streaming using Index-Based Learning Algorithms
A conference version of this paper appeared in 54th IEEE Conference on Decision and Control (CDC), 2015, pages 7195-7200
null
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We provide a unified framework using which we design scalable dynamic adaptive video streaming algorithms based on index based policies (dubbed DAS-IP) to maximize the Quality of Experience (QoE) provided to clients using video streaming services. Due to the distributed nature of our algorithm, it is easily implementable. We begin by considering the simplest set-up of a one-hop wireless network in which an Access Point (AP) transmits video packets to multiple clients over a shared unreliable channel. The video file meant for each client has been fragmented into several packets, and the server maintains multiple copies (each of different quality) of the same video file. Clients maintain individual packet buffers in order to mitigate the effect of uncertainty on video iterruption. Streaming experience, or the Quality of Experience (QoE) of a client depends on several factors: i) starvation/outage probability, i.e., average time duration for which the client does not play video because the buffer is empty, ii) average video quality, iii) average number of starvation periods, iv) temporal variations in video quality etc. We pose the problem of making dynamic streaming decisions in order to maximize the total QoE as a Constrained Markov Decision Process (CMDP). A consideration of the associated dual MDP suggests us that the problem is vastly simplified if the AP is allowed to charge a price per unit bandwidth usage from the clients. More concretely, a "client-by-client" QoE optimization leads to the networkwide QoE maximization, and thus provides us a decentralized streaming algorithm. This enables the clients to themselves decide the optimal streaming choices in each time-slot, and yields us a much desired client-level adaptation algorithm. The optimal policy has an appealing simple threshold structure.
[ { "created": "Sun, 18 Dec 2016 07:27:46 GMT", "version": "v1" } ]
2016-12-20
[ [ "Singh", "Rahul", "" ], [ "Kumar", "P. R.", "" ] ]
We provide a unified framework using which we design scalable dynamic adaptive video streaming algorithms based on index based policies (dubbed DAS-IP) to maximize the Quality of Experience (QoE) provided to clients using video streaming services. Due to the distributed nature of our algorithm, it is easily implementable. We begin by considering the simplest set-up of a one-hop wireless network in which an Access Point (AP) transmits video packets to multiple clients over a shared unreliable channel. The video file meant for each client has been fragmented into several packets, and the server maintains multiple copies (each of different quality) of the same video file. Clients maintain individual packet buffers in order to mitigate the effect of uncertainty on video iterruption. Streaming experience, or the Quality of Experience (QoE) of a client depends on several factors: i) starvation/outage probability, i.e., average time duration for which the client does not play video because the buffer is empty, ii) average video quality, iii) average number of starvation periods, iv) temporal variations in video quality etc. We pose the problem of making dynamic streaming decisions in order to maximize the total QoE as a Constrained Markov Decision Process (CMDP). A consideration of the associated dual MDP suggests us that the problem is vastly simplified if the AP is allowed to charge a price per unit bandwidth usage from the clients. More concretely, a "client-by-client" QoE optimization leads to the networkwide QoE maximization, and thus provides us a decentralized streaming algorithm. This enables the clients to themselves decide the optimal streaming choices in each time-slot, and yields us a much desired client-level adaptation algorithm. The optimal policy has an appealing simple threshold structure.
0810.3453
Douglas Benjamin
Douglas P. Benjamin
Grid Computing in the Collider Detector at Fermilab (CDF) scientific experiment
ICHEP08
null
null
null
cs.DC hep-ex physics.data-an
http://creativecommons.org/licenses/by-nc-sa/3.0/
The computing model for the Collider Detector at Fermilab (CDF) scientific experiment has evolved since the beginning of the experiment. Initially CDF computing was comprised of dedicated resources located in computer farms around the world. With the wide spread acceptance of grid computing in High Energy Physics, CDF computing has migrated to using grid computing extensively. CDF uses computing grids around the world. Each computing grid has required different solutions. The use of portals as interfaces to the collaboration computing resources has proven to be an extremely useful technique allowing the CDF physicists transparently migrate from using dedicated computer farm to using computing located in grid farms often away from Fermilab. Grid computing at CDF continues to evolve as the grid standards and practices change.
[ { "created": "Mon, 20 Oct 2008 02:19:06 GMT", "version": "v1" } ]
2009-09-29
[ [ "Benjamin", "Douglas P.", "" ] ]
The computing model for the Collider Detector at Fermilab (CDF) scientific experiment has evolved since the beginning of the experiment. Initially CDF computing was comprised of dedicated resources located in computer farms around the world. With the wide spread acceptance of grid computing in High Energy Physics, CDF computing has migrated to using grid computing extensively. CDF uses computing grids around the world. Each computing grid has required different solutions. The use of portals as interfaces to the collaboration computing resources has proven to be an extremely useful technique allowing the CDF physicists transparently migrate from using dedicated computer farm to using computing located in grid farms often away from Fermilab. Grid computing at CDF continues to evolve as the grid standards and practices change.
2308.01497
Nicholas Ichien
Nicholas Ichien, Du\v{s}an Stamenkovi\'c, Keith J. Holyoak
Large Language Model Displays Emergent Ability to Interpret Novel Literary Metaphors
null
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Recent advances in the performance of large language models (LLMs) have sparked debate over whether, given sufficient training, high-level human abilities emerge in such generic forms of artificial intelligence (AI). Despite the exceptional performance of LLMs on a wide range of tasks involving natural language processing and reasoning, there has been sharp disagreement as to whether their abilities extend to more creative human abilities. A core example is the ability to interpret novel metaphors. Given the enormous and non curated text corpora used to train LLMs, a serious obstacle to designing tests is the requirement of finding novel yet high quality metaphors that are unlikely to have been included in the training data. Here we assessed the ability of GPT4, a state of the art large language model, to provide natural-language interpretations of novel literary metaphors drawn from Serbian poetry and translated into English. Despite exhibiting no signs of having been exposed to these metaphors previously, the AI system consistently produced detailed and incisive interpretations. Human judges, blind to the fact that an AI model was involved, rated metaphor interpretations generated by GPT4 as superior to those provided by a group of college students. In interpreting reversed metaphors, GPT4, as well as humans, exhibited signs of sensitivity to the Gricean cooperative principle. In addition, for several novel English poems GPT4 produced interpretations that were rated as excellent or good by a human literary critic. These results indicate that LLMs such as GPT4 have acquired an emergent ability to interpret complex metaphors, including those embedded in novel poems.
[ { "created": "Thu, 3 Aug 2023 01:46:27 GMT", "version": "v1" }, { "created": "Fri, 13 Oct 2023 15:51:46 GMT", "version": "v2" }, { "created": "Tue, 16 Jan 2024 19:00:56 GMT", "version": "v3" } ]
2024-01-18
[ [ "Ichien", "Nicholas", "" ], [ "Stamenković", "Dušan", "" ], [ "Holyoak", "Keith J.", "" ] ]
Recent advances in the performance of large language models (LLMs) have sparked debate over whether, given sufficient training, high-level human abilities emerge in such generic forms of artificial intelligence (AI). Despite the exceptional performance of LLMs on a wide range of tasks involving natural language processing and reasoning, there has been sharp disagreement as to whether their abilities extend to more creative human abilities. A core example is the ability to interpret novel metaphors. Given the enormous and non curated text corpora used to train LLMs, a serious obstacle to designing tests is the requirement of finding novel yet high quality metaphors that are unlikely to have been included in the training data. Here we assessed the ability of GPT4, a state of the art large language model, to provide natural-language interpretations of novel literary metaphors drawn from Serbian poetry and translated into English. Despite exhibiting no signs of having been exposed to these metaphors previously, the AI system consistently produced detailed and incisive interpretations. Human judges, blind to the fact that an AI model was involved, rated metaphor interpretations generated by GPT4 as superior to those provided by a group of college students. In interpreting reversed metaphors, GPT4, as well as humans, exhibited signs of sensitivity to the Gricean cooperative principle. In addition, for several novel English poems GPT4 produced interpretations that were rated as excellent or good by a human literary critic. These results indicate that LLMs such as GPT4 have acquired an emergent ability to interpret complex metaphors, including those embedded in novel poems.
2311.01403
Kota Kondo
Andrea Tagliabue, Kota Kondo, Tong Zhao, Mason Peterson, Claudius T. Tewari, Jonathan P. How
REAL: Resilience and Adaptation using Large Language Models on Autonomous Aerial Robots
13 pages, 5 figures, conference workshop
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Large Language Models (LLMs) pre-trained on internet-scale datasets have shown impressive capabilities in code understanding, synthesis, and general purpose question-and-answering. Key to their performance is the substantial prior knowledge acquired during training and their ability to reason over extended sequences of symbols, often presented in natural language. In this work, we aim to harness the extensive long-term reasoning, natural language comprehension, and the available prior knowledge of LLMs for increased resilience and adaptation in autonomous mobile robots. We introduce REAL, an approach for REsilience and Adaptation using LLMs. REAL provides a strategy to employ LLMs as a part of the mission planning and control framework of an autonomous robot. The LLM employed by REAL provides (i) a source of prior knowledge to increase resilience for challenging scenarios that the system had not been explicitly designed for; (ii) a way to interpret natural-language and other log/diagnostic information available in the autonomy stack, for mission planning; (iii) a way to adapt the control inputs using minimal user-provided prior knowledge about the dynamics/kinematics of the robot. We integrate REAL in the autonomy stack of a real multirotor, querying onboard an offboard LLM at 0.1-1.0 Hz as part the robot's mission planning and control feedback loops. We demonstrate in real-world experiments the ability of the LLM to reduce the position tracking errors of a multirotor under the presence of (i) errors in the parameters of the controller and (ii) unmodeled dynamics. We also show (iii) decision making to avoid potentially dangerous scenarios (e.g., robot oscillates) that had not been explicitly accounted for in the initial prompt design.
[ { "created": "Thu, 2 Nov 2023 17:16:21 GMT", "version": "v1" } ]
2023-11-03
[ [ "Tagliabue", "Andrea", "" ], [ "Kondo", "Kota", "" ], [ "Zhao", "Tong", "" ], [ "Peterson", "Mason", "" ], [ "Tewari", "Claudius T.", "" ], [ "How", "Jonathan P.", "" ] ]
Large Language Models (LLMs) pre-trained on internet-scale datasets have shown impressive capabilities in code understanding, synthesis, and general purpose question-and-answering. Key to their performance is the substantial prior knowledge acquired during training and their ability to reason over extended sequences of symbols, often presented in natural language. In this work, we aim to harness the extensive long-term reasoning, natural language comprehension, and the available prior knowledge of LLMs for increased resilience and adaptation in autonomous mobile robots. We introduce REAL, an approach for REsilience and Adaptation using LLMs. REAL provides a strategy to employ LLMs as a part of the mission planning and control framework of an autonomous robot. The LLM employed by REAL provides (i) a source of prior knowledge to increase resilience for challenging scenarios that the system had not been explicitly designed for; (ii) a way to interpret natural-language and other log/diagnostic information available in the autonomy stack, for mission planning; (iii) a way to adapt the control inputs using minimal user-provided prior knowledge about the dynamics/kinematics of the robot. We integrate REAL in the autonomy stack of a real multirotor, querying onboard an offboard LLM at 0.1-1.0 Hz as part the robot's mission planning and control feedback loops. We demonstrate in real-world experiments the ability of the LLM to reduce the position tracking errors of a multirotor under the presence of (i) errors in the parameters of the controller and (ii) unmodeled dynamics. We also show (iii) decision making to avoid potentially dangerous scenarios (e.g., robot oscillates) that had not been explicitly accounted for in the initial prompt design.
2009.01719
Felix Hill Mr
Felix Hill, Olivier Tieleman, Tamara von Glehn, Nathaniel Wong, Hamza Merzic, Stephen Clark
Grounded Language Learning Fast and Slow
null
null
null
null
cs.CL cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent work has shown that large text-based neural language models, trained with conventional supervised learning objectives, acquire a surprising propensity for few- and one-shot learning. Here, we show that an embodied agent situated in a simulated 3D world, and endowed with a novel dual-coding external memory, can exhibit similar one-shot word learning when trained with conventional reinforcement learning algorithms. After a single introduction to a novel object via continuous visual perception and a language prompt ("This is a dax"), the agent can re-identify the object and manipulate it as instructed ("Put the dax on the bed"). In doing so, it seamlessly integrates short-term, within-episode knowledge of the appropriate referent for the word "dax" with long-term lexical and motor knowledge acquired across episodes (i.e. "bed" and "putting"). We find that, under certain training conditions and with a particular memory writing mechanism, the agent's one-shot word-object binding generalizes to novel exemplars within the same ShapeNet category, and is effective in settings with unfamiliar numbers of objects. We further show how dual-coding memory can be exploited as a signal for intrinsic motivation, stimulating the agent to seek names for objects that may be useful for later executing instructions. Together, the results demonstrate that deep neural networks can exploit meta-learning, episodic memory and an explicitly multi-modal environment to account for 'fast-mapping', a fundamental pillar of human cognitive development and a potentially transformative capacity for agents that interact with human users.
[ { "created": "Thu, 3 Sep 2020 14:52:03 GMT", "version": "v1" }, { "created": "Mon, 7 Sep 2020 13:25:12 GMT", "version": "v2" }, { "created": "Tue, 15 Sep 2020 10:56:08 GMT", "version": "v3" }, { "created": "Wed, 14 Oct 2020 14:38:58 GMT", "version": "v4" } ]
2020-10-15
[ [ "Hill", "Felix", "" ], [ "Tieleman", "Olivier", "" ], [ "von Glehn", "Tamara", "" ], [ "Wong", "Nathaniel", "" ], [ "Merzic", "Hamza", "" ], [ "Clark", "Stephen", "" ] ]
Recent work has shown that large text-based neural language models, trained with conventional supervised learning objectives, acquire a surprising propensity for few- and one-shot learning. Here, we show that an embodied agent situated in a simulated 3D world, and endowed with a novel dual-coding external memory, can exhibit similar one-shot word learning when trained with conventional reinforcement learning algorithms. After a single introduction to a novel object via continuous visual perception and a language prompt ("This is a dax"), the agent can re-identify the object and manipulate it as instructed ("Put the dax on the bed"). In doing so, it seamlessly integrates short-term, within-episode knowledge of the appropriate referent for the word "dax" with long-term lexical and motor knowledge acquired across episodes (i.e. "bed" and "putting"). We find that, under certain training conditions and with a particular memory writing mechanism, the agent's one-shot word-object binding generalizes to novel exemplars within the same ShapeNet category, and is effective in settings with unfamiliar numbers of objects. We further show how dual-coding memory can be exploited as a signal for intrinsic motivation, stimulating the agent to seek names for objects that may be useful for later executing instructions. Together, the results demonstrate that deep neural networks can exploit meta-learning, episodic memory and an explicitly multi-modal environment to account for 'fast-mapping', a fundamental pillar of human cognitive development and a potentially transformative capacity for agents that interact with human users.
2210.07838
Gonzalo Mier
Gonzalo Mier and Jo\~ao Valente and Sytze de Bruin
Fields2Cover: An open-source coverage path planning library for unmanned agricultural vehicles
8 pages, 5 figures, 2 tables
null
10.1109/LRA.2023.3248439
null
cs.RO cs.CG
http://creativecommons.org/licenses/by/4.0/
This paper describes Fields2Cover, a novel open source library for coverage path planning (CPP) for agricultural vehicles. While there are several CPP solutions nowadays, there have been limited efforts to unify them into an open source library and provide benchmarking tools to compare their performance. Fields2Cover provides a framework for planning coverage paths, developing novel techniques, and benchmarking state-of-the-art algorithms. The library features a modular and extensible architecture that supports various vehicles and can be used for a variety of applications, including farms. Its core modules are: a headland generator, a swath generator, a route planner and a path planner. An interface to the Robot Operating System (ROS) is also supplied as an add-on. In this paper, the functionalities of the library for planning a coverage path in agriculture are demonstrated using 8 state-of-the-art methods and 7 objective functions in simulation and field experiments.
[ { "created": "Fri, 14 Oct 2022 14:09:29 GMT", "version": "v1" }, { "created": "Fri, 17 Feb 2023 10:35:46 GMT", "version": "v2" } ]
2023-02-27
[ [ "Mier", "Gonzalo", "" ], [ "Valente", "João", "" ], [ "de Bruin", "Sytze", "" ] ]
This paper describes Fields2Cover, a novel open source library for coverage path planning (CPP) for agricultural vehicles. While there are several CPP solutions nowadays, there have been limited efforts to unify them into an open source library and provide benchmarking tools to compare their performance. Fields2Cover provides a framework for planning coverage paths, developing novel techniques, and benchmarking state-of-the-art algorithms. The library features a modular and extensible architecture that supports various vehicles and can be used for a variety of applications, including farms. Its core modules are: a headland generator, a swath generator, a route planner and a path planner. An interface to the Robot Operating System (ROS) is also supplied as an add-on. In this paper, the functionalities of the library for planning a coverage path in agriculture are demonstrated using 8 state-of-the-art methods and 7 objective functions in simulation and field experiments.
2001.05337
Lukas Holzbaur
Lukas Holzbaur, Stanislav Kruglik, Alexey Frolov, Antonia Wachter-Zeh
Secrecy and Accessibility in Distributed Storage
null
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A distributed storage system (DSS) needs to be efficiently accessible and repairable. Recently, considerable effort has been made towards the latter, while the former is usually not considered, since a trivial solution exists in the form of systematic encoding. However, this is not a viable option when considering storage that has to be secure against eavesdroppers. This work investigates the problem of efficient access to data stored on an DSS under such security constraints. Further, we establish methods to balance the access load, i.e., ensure that each node is accessed equally often. We establish the capacity for the alphabet independent case and give an explicit code construction. For the alphabet-dependent case we give existence results based on a random coding argument.
[ { "created": "Wed, 15 Jan 2020 14:17:58 GMT", "version": "v1" } ]
2020-01-16
[ [ "Holzbaur", "Lukas", "" ], [ "Kruglik", "Stanislav", "" ], [ "Frolov", "Alexey", "" ], [ "Wachter-Zeh", "Antonia", "" ] ]
A distributed storage system (DSS) needs to be efficiently accessible and repairable. Recently, considerable effort has been made towards the latter, while the former is usually not considered, since a trivial solution exists in the form of systematic encoding. However, this is not a viable option when considering storage that has to be secure against eavesdroppers. This work investigates the problem of efficient access to data stored on an DSS under such security constraints. Further, we establish methods to balance the access load, i.e., ensure that each node is accessed equally often. We establish the capacity for the alphabet independent case and give an explicit code construction. For the alphabet-dependent case we give existence results based on a random coding argument.
2108.01123
Teresa Ludermir
Jefferson Souza, Teresa Ludermir
Metodos de Agrupamentos em dois Estagios
20 pages, in Portuguese, 10 figures, 6 tables
null
null
null
cs.LG cs.NE
http://creativecommons.org/licenses/by-nc-nd/4.0/
This work investigates the use of two-stage clustering methods. Four techniques were proposed: SOMK, SOMAK, ASCAK and SOINAK. SOMK is composed of a SOM (Self-Organizing Maps) followed by the K-means algorithm, SOMAK is a combination of SOM followed by the Ant K-means (AK) algorithm, ASCAK is composed by the ASCA (Ant System-based Clustering Algorithm) and AK algorithms, SOINAK is composed by the Self-Organizing Incremental Neural Network (SOINN) and AK. SOINAK presented a better performance among the four proposed techniques when applied to pattern recognition problems.
[ { "created": "Mon, 2 Aug 2021 18:49:39 GMT", "version": "v1" } ]
2021-08-04
[ [ "Souza", "Jefferson", "" ], [ "Ludermir", "Teresa", "" ] ]
This work investigates the use of two-stage clustering methods. Four techniques were proposed: SOMK, SOMAK, ASCAK and SOINAK. SOMK is composed of a SOM (Self-Organizing Maps) followed by the K-means algorithm, SOMAK is a combination of SOM followed by the Ant K-means (AK) algorithm, ASCAK is composed by the ASCA (Ant System-based Clustering Algorithm) and AK algorithms, SOINAK is composed by the Self-Organizing Incremental Neural Network (SOINN) and AK. SOINAK presented a better performance among the four proposed techniques when applied to pattern recognition problems.
1007.0859
Mirco Gelain
M. Gelain and M. S. Pini and F. Rossi and K. B. Venable and T. Walsh
Local search for stable marriage problems
12 pages, Proc. COMSOC 2010 (Third International Workshop on Computational Social Choice)
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The stable marriage (SM) problem has a wide variety of practical applications, ranging from matching resident doctors to hospitals, to matching students to schools, or more generally to any two-sided market. In the classical formulation, n men and n women express their preferences (via a strict total order) over the members of the other sex. Solving a SM problem means finding a stable marriage where stability is an envy-free notion: no man and woman who are not married to each other would both prefer each other to their partners or to being single. We consider both the classical stable marriage problem and one of its useful variations (denoted SMTI) where the men and women express their preferences in the form of an incomplete preference list with ties over a subset of the members of the other sex. Matchings are permitted only with people who appear in these lists, an we try to find a stable matching that marries as many people as possible. Whilst the SM problem is polynomial to solve, the SMTI problem is NP-hard. We propose to tackle both problems via a local search approach, which exploits properties of the problems to reduce the size of the neighborhood and to make local moves efficiently. We evaluate empirically our algorithm for SM problems by measuring its runtime behaviour and its ability to sample the lattice of all possible stable marriages. We evaluate our algorithm for SMTI problems in terms of both its runtime behaviour and its ability to find a maximum cardinality stable marriage.For SM problems, the number of steps of our algorithm grows only as O(nlog(n)), and that it samples very well the set of all stable marriages. It is thus a fair and efficient approach to generate stable marriages.Furthermore, our approach for SMTI problems is able to solve large problems, quickly returning stable matchings of large and often optimal size despite the NP-hardness of this problem.
[ { "created": "Tue, 6 Jul 2010 10:52:44 GMT", "version": "v1" } ]
2010-07-07
[ [ "Gelain", "M.", "" ], [ "Pini", "M. S.", "" ], [ "Rossi", "F.", "" ], [ "Venable", "K. B.", "" ], [ "Walsh", "T.", "" ] ]
The stable marriage (SM) problem has a wide variety of practical applications, ranging from matching resident doctors to hospitals, to matching students to schools, or more generally to any two-sided market. In the classical formulation, n men and n women express their preferences (via a strict total order) over the members of the other sex. Solving a SM problem means finding a stable marriage where stability is an envy-free notion: no man and woman who are not married to each other would both prefer each other to their partners or to being single. We consider both the classical stable marriage problem and one of its useful variations (denoted SMTI) where the men and women express their preferences in the form of an incomplete preference list with ties over a subset of the members of the other sex. Matchings are permitted only with people who appear in these lists, an we try to find a stable matching that marries as many people as possible. Whilst the SM problem is polynomial to solve, the SMTI problem is NP-hard. We propose to tackle both problems via a local search approach, which exploits properties of the problems to reduce the size of the neighborhood and to make local moves efficiently. We evaluate empirically our algorithm for SM problems by measuring its runtime behaviour and its ability to sample the lattice of all possible stable marriages. We evaluate our algorithm for SMTI problems in terms of both its runtime behaviour and its ability to find a maximum cardinality stable marriage.For SM problems, the number of steps of our algorithm grows only as O(nlog(n)), and that it samples very well the set of all stable marriages. It is thus a fair and efficient approach to generate stable marriages.Furthermore, our approach for SMTI problems is able to solve large problems, quickly returning stable matchings of large and often optimal size despite the NP-hardness of this problem.
2408.01774
Dongyang Xu
Dongyang Xu, Yiran Luo, Tianle Lu, Qingfan Wang, Qing Zhou, Bingbing Nie
STDA: Spatio-Temporal Dual-Encoder Network Incorporating Driver Attention to Predict Driver Behaviors Under Safety-Critical Scenarios
null
null
null
null
cs.CV cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Accurate behavior prediction for vehicles is essential but challenging for autonomous driving. Most existing studies show satisfying performance under regular scenarios, but most neglected safety-critical scenarios. In this study, a spatio-temporal dual-encoder network named STDA for safety-critical scenarios was developed. Considering the exceptional capabilities of human drivers in terms of situational awareness and comprehending risks, driver attention was incorporated into STDA to facilitate swift identification of the critical regions, which is expected to improve both performance and interpretability. STDA contains four parts: the driver attention prediction module, which predicts driver attention; the fusion module designed to fuse the features between driver attention and raw images; the temporary encoder module used to enhance the capability to interpret dynamic scenes; and the behavior prediction module to predict the behavior. The experiment data are used to train and validate the model. The results show that STDA improves the G-mean from 0.659 to 0.719 when incorporating driver attention and adopting a temporal encoder module. In addition, extensive experimentation has been conducted to validate that the proposed module exhibits robust generalization capabilities and can be seamlessly integrated into other mainstream models.
[ { "created": "Sat, 3 Aug 2024 13:06:04 GMT", "version": "v1" } ]
2024-08-06
[ [ "Xu", "Dongyang", "" ], [ "Luo", "Yiran", "" ], [ "Lu", "Tianle", "" ], [ "Wang", "Qingfan", "" ], [ "Zhou", "Qing", "" ], [ "Nie", "Bingbing", "" ] ]
Accurate behavior prediction for vehicles is essential but challenging for autonomous driving. Most existing studies show satisfying performance under regular scenarios, but most neglected safety-critical scenarios. In this study, a spatio-temporal dual-encoder network named STDA for safety-critical scenarios was developed. Considering the exceptional capabilities of human drivers in terms of situational awareness and comprehending risks, driver attention was incorporated into STDA to facilitate swift identification of the critical regions, which is expected to improve both performance and interpretability. STDA contains four parts: the driver attention prediction module, which predicts driver attention; the fusion module designed to fuse the features between driver attention and raw images; the temporary encoder module used to enhance the capability to interpret dynamic scenes; and the behavior prediction module to predict the behavior. The experiment data are used to train and validate the model. The results show that STDA improves the G-mean from 0.659 to 0.719 when incorporating driver attention and adopting a temporal encoder module. In addition, extensive experimentation has been conducted to validate that the proposed module exhibits robust generalization capabilities and can be seamlessly integrated into other mainstream models.
2401.14412
Hai Duong
Hai Duong, Dong Xu, ThanhVu Nguyen, Matthew B. Dwyer
Harnessing Neuron Stability to Improve DNN Verification
VeriStable and experimental data are available at: https://github.com/veristable/veristable
null
null
null
cs.LG cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep Neural Networks (DNN) have emerged as an effective approach to tackling real-world problems. However, like human-written software, DNNs are susceptible to bugs and attacks. This has generated significant interests in developing effective and scalable DNN verification techniques and tools. In this paper, we present VeriStable, a novel extension of recently proposed DPLL-based constraint DNN verification approach. VeriStable leverages the insight that while neuron behavior may be non-linear across the entire DNN input space, at intermediate states computed during verification many neurons may be constrained to have linear behavior - these neurons are stable. Efficiently detecting stable neurons reduces combinatorial complexity without compromising the precision of abstractions. Moreover, the structure of clauses arising in DNN verification problems shares important characteristics with industrial SAT benchmarks. We adapt and incorporate multi-threading and restart optimizations targeting those characteristics to further optimize DPLL-based DNN verification. We evaluate the effectiveness of VeriStable across a range of challenging benchmarks including fully-connected feedforward networks (FNNs), convolutional neural networks (CNNs) and residual networks (ResNets) applied to the standard MNIST and CIFAR datasets. Preliminary results show that VeriStable is competitive and outperforms state-of-the-art DNN verification tools, including $\alpha$-$\beta$-CROWN and MN-BaB, the first and second performers of the VNN-COMP, respectively.
[ { "created": "Fri, 19 Jan 2024 23:48:04 GMT", "version": "v1" } ]
2024-01-29
[ [ "Duong", "Hai", "" ], [ "Xu", "Dong", "" ], [ "Nguyen", "ThanhVu", "" ], [ "Dwyer", "Matthew B.", "" ] ]
Deep Neural Networks (DNN) have emerged as an effective approach to tackling real-world problems. However, like human-written software, DNNs are susceptible to bugs and attacks. This has generated significant interests in developing effective and scalable DNN verification techniques and tools. In this paper, we present VeriStable, a novel extension of recently proposed DPLL-based constraint DNN verification approach. VeriStable leverages the insight that while neuron behavior may be non-linear across the entire DNN input space, at intermediate states computed during verification many neurons may be constrained to have linear behavior - these neurons are stable. Efficiently detecting stable neurons reduces combinatorial complexity without compromising the precision of abstractions. Moreover, the structure of clauses arising in DNN verification problems shares important characteristics with industrial SAT benchmarks. We adapt and incorporate multi-threading and restart optimizations targeting those characteristics to further optimize DPLL-based DNN verification. We evaluate the effectiveness of VeriStable across a range of challenging benchmarks including fully-connected feedforward networks (FNNs), convolutional neural networks (CNNs) and residual networks (ResNets) applied to the standard MNIST and CIFAR datasets. Preliminary results show that VeriStable is competitive and outperforms state-of-the-art DNN verification tools, including $\alpha$-$\beta$-CROWN and MN-BaB, the first and second performers of the VNN-COMP, respectively.
1507.00276
Ehsan Ullah Warriach
Ehsan Ullah Warriach, Tanir Ozcelebi, Johan J. Lukkien
Proactive Dependability Framework for Smart Environment Applications
null
null
null
null
cs.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Smart environment applications demand novel solutions for managing quality of services, especially availability and reliability at run-time. The underlying systems are changing dynamically due to addition and removal of system components, changing execution environments, and resources depletion. Therefore, in such dynamic systems, the functionality and the performance of smart environment applications can be hampered by faults. In this paper, we follow a proactive approach to anticipate system state at runtime. We present a proactive dependability framework to prevent faults at runtime based on predictive analysis to increase availability and reliability of smart environment applications, and reduce manual user interventions.
[ { "created": "Wed, 1 Jul 2015 16:12:20 GMT", "version": "v1" } ]
2015-07-02
[ [ "Warriach", "Ehsan Ullah", "" ], [ "Ozcelebi", "Tanir", "" ], [ "Lukkien", "Johan J.", "" ] ]
Smart environment applications demand novel solutions for managing quality of services, especially availability and reliability at run-time. The underlying systems are changing dynamically due to addition and removal of system components, changing execution environments, and resources depletion. Therefore, in such dynamic systems, the functionality and the performance of smart environment applications can be hampered by faults. In this paper, we follow a proactive approach to anticipate system state at runtime. We present a proactive dependability framework to prevent faults at runtime based on predictive analysis to increase availability and reliability of smart environment applications, and reduce manual user interventions.
1704.02956
Weifeng Chen
Weifeng Chen, Donglai Xiang, Jia Deng
Surface Normals in the Wild
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the problem of single-image depth estimation for images in the wild. We collect human annotated surface normals and use them to train a neural network that directly predicts pixel-wise depth. We propose two novel loss functions for training with surface normal annotations. Experiments on NYU Depth and our own dataset demonstrate that our approach can significantly improve the quality of depth estimation in the wild.
[ { "created": "Mon, 10 Apr 2017 17:13:00 GMT", "version": "v1" } ]
2017-04-11
[ [ "Chen", "Weifeng", "" ], [ "Xiang", "Donglai", "" ], [ "Deng", "Jia", "" ] ]
We study the problem of single-image depth estimation for images in the wild. We collect human annotated surface normals and use them to train a neural network that directly predicts pixel-wise depth. We propose two novel loss functions for training with surface normal annotations. Experiments on NYU Depth and our own dataset demonstrate that our approach can significantly improve the quality of depth estimation in the wild.
1705.05787
Luiz Gustavo Hafemann
Luiz G. Hafemann, Robert Sabourin, Luiz S. Oliveira
Learning Features for Offline Handwritten Signature Verification using Deep Convolutional Neural Networks
null
null
10.1016/j.patcog.2017.05.012
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Verifying the identity of a person using handwritten signatures is challenging in the presence of skilled forgeries, where a forger has access to a person's signature and deliberately attempt to imitate it. In offline (static) signature verification, the dynamic information of the signature writing process is lost, and it is difficult to design good feature extractors that can distinguish genuine signatures and skilled forgeries. This reflects in a relatively poor performance, with verification errors around 7% in the best systems in the literature. To address both the difficulty of obtaining good features, as well as improve system performance, we propose learning the representations from signature images, in a Writer-Independent format, using Convolutional Neural Networks. In particular, we propose a novel formulation of the problem that includes knowledge of skilled forgeries from a subset of users in the feature learning process, that aims to capture visual cues that distinguish genuine signatures and forgeries regardless of the user. Extensive experiments were conducted on four datasets: GPDS, MCYT, CEDAR and Brazilian PUC-PR datasets. On GPDS-160, we obtained a large improvement in state-of-the-art performance, achieving 1.72% Equal Error Rate, compared to 6.97% in the literature. We also verified that the features generalize beyond the GPDS dataset, surpassing the state-of-the-art performance in the other datasets, without requiring the representation to be fine-tuned to each particular dataset.
[ { "created": "Tue, 16 May 2017 16:08:09 GMT", "version": "v1" } ]
2017-05-17
[ [ "Hafemann", "Luiz G.", "" ], [ "Sabourin", "Robert", "" ], [ "Oliveira", "Luiz S.", "" ] ]
Verifying the identity of a person using handwritten signatures is challenging in the presence of skilled forgeries, where a forger has access to a person's signature and deliberately attempt to imitate it. In offline (static) signature verification, the dynamic information of the signature writing process is lost, and it is difficult to design good feature extractors that can distinguish genuine signatures and skilled forgeries. This reflects in a relatively poor performance, with verification errors around 7% in the best systems in the literature. To address both the difficulty of obtaining good features, as well as improve system performance, we propose learning the representations from signature images, in a Writer-Independent format, using Convolutional Neural Networks. In particular, we propose a novel formulation of the problem that includes knowledge of skilled forgeries from a subset of users in the feature learning process, that aims to capture visual cues that distinguish genuine signatures and forgeries regardless of the user. Extensive experiments were conducted on four datasets: GPDS, MCYT, CEDAR and Brazilian PUC-PR datasets. On GPDS-160, we obtained a large improvement in state-of-the-art performance, achieving 1.72% Equal Error Rate, compared to 6.97% in the literature. We also verified that the features generalize beyond the GPDS dataset, surpassing the state-of-the-art performance in the other datasets, without requiring the representation to be fine-tuned to each particular dataset.
2204.00925
Peng Xu
Peng Xu, Xinwei Deng and Alejandro Salado
A UCB-based Tree Search Approach to Joint Verification-Correction Strategy for Large Scale Systems
23 pages, 10 figures
null
null
null
cs.SE cs.SY eess.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Verification planning is a sequential decision-making problem that specifies a set of verification activities (VA) and correction activities (CA) at different phases of system development. While VAs are used to identify errors and defects, CAs also play important roles in system verification as they correct the identified errors and defects. However, current planning methods only consider VAs as decision choices. Because VAs and CAs have different activity spaces, planning a joint verification-correction strategy (JVCS) is still challenging, especially for large-size systems. Here we introduce a UCB-based tree search approach to search for near-optimal JVCSs. First, verification planning is simplified as repeatable bandit problems and an upper confidence bound rule for repeatable bandits (UCBRB) is presented with the optimal regret bound. Next, a tree search algorithm is proposed to search for feasible JVCSs. A tree-based ensemble learning model is also used to extend the tree search algorithm to handle local optimality issues. The proposed approach is evaluated on the notional case of a communication system.
[ { "created": "Sat, 2 Apr 2022 19:21:37 GMT", "version": "v1" } ]
2022-04-05
[ [ "Xu", "Peng", "" ], [ "Deng", "Xinwei", "" ], [ "Salado", "Alejandro", "" ] ]
Verification planning is a sequential decision-making problem that specifies a set of verification activities (VA) and correction activities (CA) at different phases of system development. While VAs are used to identify errors and defects, CAs also play important roles in system verification as they correct the identified errors and defects. However, current planning methods only consider VAs as decision choices. Because VAs and CAs have different activity spaces, planning a joint verification-correction strategy (JVCS) is still challenging, especially for large-size systems. Here we introduce a UCB-based tree search approach to search for near-optimal JVCSs. First, verification planning is simplified as repeatable bandit problems and an upper confidence bound rule for repeatable bandits (UCBRB) is presented with the optimal regret bound. Next, a tree search algorithm is proposed to search for feasible JVCSs. A tree-based ensemble learning model is also used to extend the tree search algorithm to handle local optimality issues. The proposed approach is evaluated on the notional case of a communication system.
2103.14580
Mahdi Namazifar
Mahdi Namazifar, John Malik, Li Erran Li, Gokhan Tur, Dilek Hakkani T\"ur
Correcting Automated and Manual Speech Transcription Errors using Warped Language Models
Submitted to INTERSPEECH
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Masked language models have revolutionized natural language processing systems in the past few years. A recently introduced generalization of masked language models called warped language models are trained to be more robust to the types of errors that appear in automatic or manual transcriptions of spoken language by exposing the language model to the same types of errors during training. In this work we propose a novel approach that takes advantage of the robustness of warped language models to transcription noise for correcting transcriptions of spoken language. We show that our proposed approach is able to achieve up to 10% reduction in word error rates of both automatic and manual transcriptions of spoken language.
[ { "created": "Fri, 26 Mar 2021 16:43:23 GMT", "version": "v1" } ]
2021-03-29
[ [ "Namazifar", "Mahdi", "" ], [ "Malik", "John", "" ], [ "Li", "Li Erran", "" ], [ "Tur", "Gokhan", "" ], [ "Tür", "Dilek Hakkani", "" ] ]
Masked language models have revolutionized natural language processing systems in the past few years. A recently introduced generalization of masked language models called warped language models are trained to be more robust to the types of errors that appear in automatic or manual transcriptions of spoken language by exposing the language model to the same types of errors during training. In this work we propose a novel approach that takes advantage of the robustness of warped language models to transcription noise for correcting transcriptions of spoken language. We show that our proposed approach is able to achieve up to 10% reduction in word error rates of both automatic and manual transcriptions of spoken language.
1609.06261
S.M. Riazul Islam PhD
S. M. Riazul Islam, Nurilla Avazov, Octavia A. Dobre, and Kyung Sup Kwak
Power-Domain Non-Orthogonal Multiple Access (NOMA) in 5G Systems: Potentials and Challenges
41 pages, 19 figures, IEEE Communications Surveys and Tutorials, 2016
null
10.1109/COMST.2016.2621116
null
cs.IT cs.NI math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Non-orthogonal multiple access (NOMA) is one of the promising radio access techniques for performance enhancement in next-generation cellular communications. Compared to orthogonal frequency division multiple access (OFDMA), which is a well-known high-capacity orthogonal multiple access (OMA) technique, NOMA offers a set of desirable benefits, including greater spectrum efficiency. There are different types of NOMA techniques, including power-domain and code-domain. This paper primarily focuses on power-domain NOMA that utilizes superposition coding (SC) at the transmitter and successive interference cancellation (SIC) at the receiver. Various researchers have demonstrated that NOMA can be used effectively to meet both network-level and user-experienced data rate requirements of fifth-generation (5G) technologies. From that perspective, this paper comprehensively surveys the recent progress of NOMA in 5G systems, reviewing the state-of-the-art capacity analysis, power allocation strategies, user fairness, and user-pairing schemes in NOMA. In addition, this paper discusses how NOMA performs when it is integrated with various proven wireless communications techniques, such as cooperative communications, multiple input multiple output (MIMO), beamforming, space time coding, and network coding, among others. Furthermore, this paper discusses several important issues on NOMA implementation and provides some avenues for future research.
[ { "created": "Tue, 20 Sep 2016 17:32:35 GMT", "version": "v1" } ]
2016-11-17
[ [ "Islam", "S. M. Riazul", "" ], [ "Avazov", "Nurilla", "" ], [ "Dobre", "Octavia A.", "" ], [ "Kwak", "Kyung Sup", "" ] ]
Non-orthogonal multiple access (NOMA) is one of the promising radio access techniques for performance enhancement in next-generation cellular communications. Compared to orthogonal frequency division multiple access (OFDMA), which is a well-known high-capacity orthogonal multiple access (OMA) technique, NOMA offers a set of desirable benefits, including greater spectrum efficiency. There are different types of NOMA techniques, including power-domain and code-domain. This paper primarily focuses on power-domain NOMA that utilizes superposition coding (SC) at the transmitter and successive interference cancellation (SIC) at the receiver. Various researchers have demonstrated that NOMA can be used effectively to meet both network-level and user-experienced data rate requirements of fifth-generation (5G) technologies. From that perspective, this paper comprehensively surveys the recent progress of NOMA in 5G systems, reviewing the state-of-the-art capacity analysis, power allocation strategies, user fairness, and user-pairing schemes in NOMA. In addition, this paper discusses how NOMA performs when it is integrated with various proven wireless communications techniques, such as cooperative communications, multiple input multiple output (MIMO), beamforming, space time coding, and network coding, among others. Furthermore, this paper discusses several important issues on NOMA implementation and provides some avenues for future research.
2012.03932
Yasaman Jafari
Yasaman Jafari, Nazanin Sabri, Behnam Bahrak
Investigating the effects of Goodreads challenges on individuals reading habits
null
null
null
null
cs.SI
http://creativecommons.org/licenses/by/4.0/
Sharing our goals with others and setting public challenges for ourselves is a topic that has been the center of many discussions. This study examines reading challenges, how participation in them has changed throughout the years, and how they influence users reading productivity. To do so, we analyze Goodreads, a social book cataloging website, with a yearly challenge feature. We further show that gender is a significant factor in how successful individuals are in their challenges. Additionally, we investigate the association between participation in reading challenges and the number of books people read.
[ { "created": "Mon, 7 Dec 2020 18:59:35 GMT", "version": "v1" }, { "created": "Tue, 8 Dec 2020 11:14:22 GMT", "version": "v2" }, { "created": "Mon, 15 Feb 2021 08:51:57 GMT", "version": "v3" } ]
2021-02-16
[ [ "Jafari", "Yasaman", "" ], [ "Sabri", "Nazanin", "" ], [ "Bahrak", "Behnam", "" ] ]
Sharing our goals with others and setting public challenges for ourselves is a topic that has been the center of many discussions. This study examines reading challenges, how participation in them has changed throughout the years, and how they influence users reading productivity. To do so, we analyze Goodreads, a social book cataloging website, with a yearly challenge feature. We further show that gender is a significant factor in how successful individuals are in their challenges. Additionally, we investigate the association between participation in reading challenges and the number of books people read.
2303.14884
Shuangping Huang
Fan Yang, Lei Hu, Xinwu Liu, Shuangping Huang, Zhenghui Gu
A large-scale dataset for end-to-end table recognition in the wild
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Table recognition (TR) is one of the research hotspots in pattern recognition, which aims to extract information from tables in an image. Common table recognition tasks include table detection (TD), table structure recognition (TSR) and table content recognition (TCR). TD is to locate tables in the image, TCR recognizes text content, and TSR recognizes spatial ogical structure. Currently, the end-to-end TR in real scenarios, accomplishing the three sub-tasks simultaneously, is yet an unexplored research area. One major factor that inhibits researchers is the lack of a benchmark dataset. To this end, we propose a new large-scale dataset named Table Recognition Set (TabRecSet) with diverse table forms sourcing from multiple scenarios in the wild, providing complete annotation dedicated to end-to-end TR research. It is the largest and first bi-lingual dataset for end-to-end TR, with 38.1K tables in which 20.4K are in English\, and 17.7K are in Chinese. The samples have diverse forms, such as the border-complete and -incomplete table, regular and irregular table (rotated, distorted, etc.). The scenarios are multiple in the wild, varying from scanned to camera-taken images, documents to Excel tables, educational test papers to financial invoices. The annotations are complete, consisting of the table body spatial annotation, cell spatial logical annotation and text content for TD, TSR and TCR, respectively. The spatial annotation utilizes the polygon instead of the bounding box or quadrilateral adopted by most datasets. The polygon spatial annotation is more suitable for irregular tables that are common in wild scenarios. Additionally, we propose a visualized and interactive annotation tool named TableMe to improve the efficiency and quality of table annotation.
[ { "created": "Mon, 27 Mar 2023 02:48:51 GMT", "version": "v1" } ]
2023-03-28
[ [ "Yang", "Fan", "" ], [ "Hu", "Lei", "" ], [ "Liu", "Xinwu", "" ], [ "Huang", "Shuangping", "" ], [ "Gu", "Zhenghui", "" ] ]
Table recognition (TR) is one of the research hotspots in pattern recognition, which aims to extract information from tables in an image. Common table recognition tasks include table detection (TD), table structure recognition (TSR) and table content recognition (TCR). TD is to locate tables in the image, TCR recognizes text content, and TSR recognizes spatial ogical structure. Currently, the end-to-end TR in real scenarios, accomplishing the three sub-tasks simultaneously, is yet an unexplored research area. One major factor that inhibits researchers is the lack of a benchmark dataset. To this end, we propose a new large-scale dataset named Table Recognition Set (TabRecSet) with diverse table forms sourcing from multiple scenarios in the wild, providing complete annotation dedicated to end-to-end TR research. It is the largest and first bi-lingual dataset for end-to-end TR, with 38.1K tables in which 20.4K are in English\, and 17.7K are in Chinese. The samples have diverse forms, such as the border-complete and -incomplete table, regular and irregular table (rotated, distorted, etc.). The scenarios are multiple in the wild, varying from scanned to camera-taken images, documents to Excel tables, educational test papers to financial invoices. The annotations are complete, consisting of the table body spatial annotation, cell spatial logical annotation and text content for TD, TSR and TCR, respectively. The spatial annotation utilizes the polygon instead of the bounding box or quadrilateral adopted by most datasets. The polygon spatial annotation is more suitable for irregular tables that are common in wild scenarios. Additionally, we propose a visualized and interactive annotation tool named TableMe to improve the efficiency and quality of table annotation.
2104.01037
Perceval Wajsburt
Perceval Wajsburt, Yoann Taill\'e, Xavier Tannier
Effect of depth order on iterative nested named entity recognition models
null
null
null
null
cs.CL cs.LG
http://creativecommons.org/licenses/by/4.0/
This paper studies the effect of the order of depth of mention on nested named entity recognition (NER) models. NER is an essential task in the extraction of biomedical information, and nested entities are common since medical concepts can assemble to form larger entities. Conventional NER systems only predict disjointed entities. Thus, iterative models for nested NER use multiple predictions to enumerate all entities, imposing a predefined order from largest to smallest or smallest to largest. We design an order-agnostic iterative model and a procedure to choose a custom order during training and prediction. To accommodate for this task, we propose a modification of the Transformer architecture to take into account the entities predicted in the previous steps. We provide a set of experiments to study the model's capabilities and the effects of the order on performance. Finally, we show that the smallest to largest order gives the best results.
[ { "created": "Fri, 2 Apr 2021 13:18:52 GMT", "version": "v1" } ]
2021-04-05
[ [ "Wajsburt", "Perceval", "" ], [ "Taillé", "Yoann", "" ], [ "Tannier", "Xavier", "" ] ]
This paper studies the effect of the order of depth of mention on nested named entity recognition (NER) models. NER is an essential task in the extraction of biomedical information, and nested entities are common since medical concepts can assemble to form larger entities. Conventional NER systems only predict disjointed entities. Thus, iterative models for nested NER use multiple predictions to enumerate all entities, imposing a predefined order from largest to smallest or smallest to largest. We design an order-agnostic iterative model and a procedure to choose a custom order during training and prediction. To accommodate for this task, we propose a modification of the Transformer architecture to take into account the entities predicted in the previous steps. We provide a set of experiments to study the model's capabilities and the effects of the order on performance. Finally, we show that the smallest to largest order gives the best results.
2203.07922
Dat Thanh Tran
Dat Thanh Tran, Juho Kanniainen, Alexandros Iosifidis
How informative is the Order Book Beyond the Best Levels? Machine Learning Perspective
NeurIPS 2021 Workshop on Machine Learning meets Econometrics (MLECON2021)
null
null
null
cs.CE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Research on limit order book markets has been rapidly growing and nowadays high-frequency full order book data is widely available for researchers and practitioners. However, it is common that research papers use the best level data only, which motivates us to ask whether the exclusion of the quotes deeper in the book over multiple price levels causes performance degradation. In this paper, we address this question by using modern Machine Learning (ML) techniques to predict mid-price movements without assuming that limit order book markets represent a linear system. We provide a number of results that are robust across ML prediction models, feature selection algorithms, data sets, and prediction horizons. We find that the best bid and ask levels are systematically identified not only as the most informative levels in the order books, but also to carry most of the information needed for good prediction performance. On the other hand, even if the top-of-the-book levels contain most of the relevant information, to maximize models' performance one should use all data across all the levels. Additionally, the informativeness of the order book levels clearly decreases from the first to the fourth level while the rest of the levels are approximately equally important.
[ { "created": "Tue, 15 Mar 2022 14:04:01 GMT", "version": "v1" } ]
2022-03-16
[ [ "Tran", "Dat Thanh", "" ], [ "Kanniainen", "Juho", "" ], [ "Iosifidis", "Alexandros", "" ] ]
Research on limit order book markets has been rapidly growing and nowadays high-frequency full order book data is widely available for researchers and practitioners. However, it is common that research papers use the best level data only, which motivates us to ask whether the exclusion of the quotes deeper in the book over multiple price levels causes performance degradation. In this paper, we address this question by using modern Machine Learning (ML) techniques to predict mid-price movements without assuming that limit order book markets represent a linear system. We provide a number of results that are robust across ML prediction models, feature selection algorithms, data sets, and prediction horizons. We find that the best bid and ask levels are systematically identified not only as the most informative levels in the order books, but also to carry most of the information needed for good prediction performance. On the other hand, even if the top-of-the-book levels contain most of the relevant information, to maximize models' performance one should use all data across all the levels. Additionally, the informativeness of the order book levels clearly decreases from the first to the fourth level while the rest of the levels are approximately equally important.
2006.07550
Peng Xu
Liang Ding, Peng Xu, Haibo Gao, Zhikai Wang, Ruyi Zhou, Zhaopei Gong, and Guangjun Liu
Fault Tolerant Free Gait and Footstep Planning for Hexapod Robot Based on Monte-Carlo Tree
null
IEEE Robotics and Automation Letters ( Volume: 7, Issue: 2, April 2022)
10.1109/LRA.2021.3133610
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Legged robots can pass through complex field environments by selecting gaits and discrete footholds carefully. Traditional methods plan gait and foothold separately and treat them as the single-step optimal process. However, such processing causes its poor passability in a sparse foothold environment. This paper novelly proposes a coordinative planning method for hexapod robots that regards the planning of gait and foothold as a sequence optimization problem with the consideration of dealing with the harshness of the environment as leg fault. The Monte Carlo tree search algorithm(MCTS) is used to optimize the entire sequence. Two methods, FastMCTS, and SlidingMCTS are proposed to solve some defeats of the standard MCTS applicating in the field of legged robot planning. The proposed planning algorithm combines the fault-tolerant gait method to improve the passability of the algorithm. Finally, compared with other planning methods, experiments on terrains with different densities of footholds and artificially-designed challenging terrain are carried out to verify our methods. All results show that the proposed method dramatically improves the hexapod robot's ability to pass through sparse footholds environment.
[ { "created": "Sat, 13 Jun 2020 03:36:55 GMT", "version": "v1" }, { "created": "Tue, 16 Jun 2020 04:20:58 GMT", "version": "v2" } ]
2023-07-06
[ [ "Ding", "Liang", "" ], [ "Xu", "Peng", "" ], [ "Gao", "Haibo", "" ], [ "Wang", "Zhikai", "" ], [ "Zhou", "Ruyi", "" ], [ "Gong", "Zhaopei", "" ], [ "Liu", "Guangjun", "" ] ]
Legged robots can pass through complex field environments by selecting gaits and discrete footholds carefully. Traditional methods plan gait and foothold separately and treat them as the single-step optimal process. However, such processing causes its poor passability in a sparse foothold environment. This paper novelly proposes a coordinative planning method for hexapod robots that regards the planning of gait and foothold as a sequence optimization problem with the consideration of dealing with the harshness of the environment as leg fault. The Monte Carlo tree search algorithm(MCTS) is used to optimize the entire sequence. Two methods, FastMCTS, and SlidingMCTS are proposed to solve some defeats of the standard MCTS applicating in the field of legged robot planning. The proposed planning algorithm combines the fault-tolerant gait method to improve the passability of the algorithm. Finally, compared with other planning methods, experiments on terrains with different densities of footholds and artificially-designed challenging terrain are carried out to verify our methods. All results show that the proposed method dramatically improves the hexapod robot's ability to pass through sparse footholds environment.
2008.03880
Boris Ivanovic
Boris Ivanovic, Karen Leung, Edward Schmerling, Marco Pavone
Multimodal Deep Generative Models for Trajectory Prediction: A Conditional Variational Autoencoder Approach
8 pages, 3 figures, 2 tables. IEEE Robotics and Automation Letters (RA-L), 2020
null
null
null
cs.RO cs.HC cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Human behavior prediction models enable robots to anticipate how humans may react to their actions, and hence are instrumental to devising safe and proactive robot planning algorithms. However, modeling complex interaction dynamics and capturing the possibility of many possible outcomes in such interactive settings is very challenging, which has recently prompted the study of several different approaches. In this work, we provide a self-contained tutorial on a conditional variational autoencoder (CVAE) approach to human behavior prediction which, at its core, can produce a multimodal probability distribution over future human trajectories conditioned on past interactions and candidate robot future actions. Specifically, the goals of this tutorial paper are to review and build a taxonomy of state-of-the-art methods in human behavior prediction, from physics-based to purely data-driven methods, provide a rigorous yet easily accessible description of a data-driven, CVAE-based approach, highlight important design characteristics that make this an attractive model to use in the context of model-based planning for human-robot interactions, and provide important design considerations when using this class of models.
[ { "created": "Mon, 10 Aug 2020 03:18:27 GMT", "version": "v1" }, { "created": "Sat, 21 Nov 2020 00:13:47 GMT", "version": "v2" } ]
2020-11-24
[ [ "Ivanovic", "Boris", "" ], [ "Leung", "Karen", "" ], [ "Schmerling", "Edward", "" ], [ "Pavone", "Marco", "" ] ]
Human behavior prediction models enable robots to anticipate how humans may react to their actions, and hence are instrumental to devising safe and proactive robot planning algorithms. However, modeling complex interaction dynamics and capturing the possibility of many possible outcomes in such interactive settings is very challenging, which has recently prompted the study of several different approaches. In this work, we provide a self-contained tutorial on a conditional variational autoencoder (CVAE) approach to human behavior prediction which, at its core, can produce a multimodal probability distribution over future human trajectories conditioned on past interactions and candidate robot future actions. Specifically, the goals of this tutorial paper are to review and build a taxonomy of state-of-the-art methods in human behavior prediction, from physics-based to purely data-driven methods, provide a rigorous yet easily accessible description of a data-driven, CVAE-based approach, highlight important design characteristics that make this an attractive model to use in the context of model-based planning for human-robot interactions, and provide important design considerations when using this class of models.
2203.15209
Wanyu Lin
Wanyu Lin, Hao Lan, Hao Wang and Baochun Li
OrphicX: A Causality-Inspired Latent Variable Model for Interpreting Graph Neural Networks
Accepted by CVPR 2022, an oral presentation, source code: https://github.com/WanyuGroup/CVPR2022-OrphicX
null
null
null
cs.LG stat.ML
http://creativecommons.org/licenses/by/4.0/
This paper proposes a new eXplanation framework, called OrphicX, for generating causal explanations for any graph neural networks (GNNs) based on learned latent causal factors. Specifically, we construct a distinct generative model and design an objective function that encourages the generative model to produce causal, compact, and faithful explanations. This is achieved by isolating the causal factors in the latent space of graphs by maximizing the information flow measurements. We theoretically analyze the cause-effect relationships in the proposed causal graph, identify node attributes as confounders between graphs and GNN predictions, and circumvent such confounder effect by leveraging the backdoor adjustment formula. Our framework is compatible with any GNNs, and it does not require access to the process by which the target GNN produces its predictions. In addition, it does not rely on the linear-independence assumption of the explained features, nor require prior knowledge on the graph learning tasks. We show a proof-of-concept of OrphicX on canonical classification problems on graph data. In particular, we analyze the explanatory subgraphs obtained from explanations for molecular graphs (i.e., Mutag) and quantitatively evaluate the explanation performance with frequently occurring subgraph patterns. Empirically, we show that OrphicX can effectively identify the causal semantics for generating causal explanations, significantly outperforming its alternatives.
[ { "created": "Tue, 29 Mar 2022 03:08:33 GMT", "version": "v1" } ]
2022-04-12
[ [ "Lin", "Wanyu", "" ], [ "Lan", "Hao", "" ], [ "Wang", "Hao", "" ], [ "Li", "Baochun", "" ] ]
This paper proposes a new eXplanation framework, called OrphicX, for generating causal explanations for any graph neural networks (GNNs) based on learned latent causal factors. Specifically, we construct a distinct generative model and design an objective function that encourages the generative model to produce causal, compact, and faithful explanations. This is achieved by isolating the causal factors in the latent space of graphs by maximizing the information flow measurements. We theoretically analyze the cause-effect relationships in the proposed causal graph, identify node attributes as confounders between graphs and GNN predictions, and circumvent such confounder effect by leveraging the backdoor adjustment formula. Our framework is compatible with any GNNs, and it does not require access to the process by which the target GNN produces its predictions. In addition, it does not rely on the linear-independence assumption of the explained features, nor require prior knowledge on the graph learning tasks. We show a proof-of-concept of OrphicX on canonical classification problems on graph data. In particular, we analyze the explanatory subgraphs obtained from explanations for molecular graphs (i.e., Mutag) and quantitatively evaluate the explanation performance with frequently occurring subgraph patterns. Empirically, we show that OrphicX can effectively identify the causal semantics for generating causal explanations, significantly outperforming its alternatives.
1908.10486
Xueping Wang
Xueping Wang, Rameswar Panda, Min Liu, Yaonan Wang and Amit K Roy-Chowdhury
Exploiting Global Camera Network Constraints for Unsupervised Video Person Re-identification
This paper has been accepted to IEEE Transactions on Circuits and Systems for Video Technology (T-CSVT)
null
10.1109/TCSVT.2020.3043444
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Many unsupervised approaches have been proposed recently for the video-based re-identification problem since annotations of samples across cameras are time-consuming. However, higher-order relationships across the entire camera network are ignored by these methods, leading to contradictory outputs when matching results from different camera pairs are combined. In this paper, we address the problem of unsupervised video-based re-identification by proposing a consistent cross-view matching (CCM) framework, in which global camera network constraints are exploited to guarantee the matched pairs are with consistency. Specifically, we first propose to utilize the first neighbor of each sample to discover relations among samples and find the groups in each camera. Additionally, a cross-view matching strategy followed by global camera network constraints is proposed to explore the matching relationships across the entire camera network. Finally, we learn metric models for camera pairs progressively by alternatively mining consistent cross-view matching pairs and updating metric models using these obtained matches. Rigorous experiments on two widely-used benchmarks for video re-identification demonstrate the superiority of the proposed method over current state-of-the-art unsupervised methods; for example, on the MARS dataset, our method achieves an improvement of 4.2\% over unsupervised methods, and even 2.5\% over one-shot supervision-based methods for rank-1 accuracy.
[ { "created": "Tue, 27 Aug 2019 22:35:43 GMT", "version": "v1" }, { "created": "Fri, 31 Jul 2020 03:00:00 GMT", "version": "v2" }, { "created": "Sat, 12 Dec 2020 04:47:42 GMT", "version": "v3" } ]
2020-12-15
[ [ "Wang", "Xueping", "" ], [ "Panda", "Rameswar", "" ], [ "Liu", "Min", "" ], [ "Wang", "Yaonan", "" ], [ "Roy-Chowdhury", "Amit K", "" ] ]
Many unsupervised approaches have been proposed recently for the video-based re-identification problem since annotations of samples across cameras are time-consuming. However, higher-order relationships across the entire camera network are ignored by these methods, leading to contradictory outputs when matching results from different camera pairs are combined. In this paper, we address the problem of unsupervised video-based re-identification by proposing a consistent cross-view matching (CCM) framework, in which global camera network constraints are exploited to guarantee the matched pairs are with consistency. Specifically, we first propose to utilize the first neighbor of each sample to discover relations among samples and find the groups in each camera. Additionally, a cross-view matching strategy followed by global camera network constraints is proposed to explore the matching relationships across the entire camera network. Finally, we learn metric models for camera pairs progressively by alternatively mining consistent cross-view matching pairs and updating metric models using these obtained matches. Rigorous experiments on two widely-used benchmarks for video re-identification demonstrate the superiority of the proposed method over current state-of-the-art unsupervised methods; for example, on the MARS dataset, our method achieves an improvement of 4.2\% over unsupervised methods, and even 2.5\% over one-shot supervision-based methods for rank-1 accuracy.
1510.01784
Ruining He
Ruining He, Julian McAuley
VBPR: Visual Bayesian Personalized Ranking from Implicit Feedback
AAAI'16
null
null
null
cs.IR cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Modern recommender systems model people and items by discovering or `teasing apart' the underlying dimensions that encode the properties of items and users' preferences toward them. Critically, such dimensions are uncovered based on user feedback, often in implicit form (such as purchase histories, browsing logs, etc.); in addition, some recommender systems make use of side information, such as product attributes, temporal information, or review text. However one important feature that is typically ignored by existing personalized recommendation and ranking methods is the visual appearance of the items being considered. In this paper we propose a scalable factorization model to incorporate visual signals into predictors of people's opinions, which we apply to a selection of large, real-world datasets. We make use of visual features extracted from product images using (pre-trained) deep networks, on top of which we learn an additional layer that uncovers the visual dimensions that best explain the variation in people's feedback. This not only leads to significantly more accurate personalized ranking methods, but also helps to alleviate cold start issues, and qualitatively to analyze the visual dimensions that influence people's opinions.
[ { "created": "Tue, 6 Oct 2015 23:46:15 GMT", "version": "v1" } ]
2016-02-05
[ [ "He", "Ruining", "" ], [ "McAuley", "Julian", "" ] ]
Modern recommender systems model people and items by discovering or `teasing apart' the underlying dimensions that encode the properties of items and users' preferences toward them. Critically, such dimensions are uncovered based on user feedback, often in implicit form (such as purchase histories, browsing logs, etc.); in addition, some recommender systems make use of side information, such as product attributes, temporal information, or review text. However one important feature that is typically ignored by existing personalized recommendation and ranking methods is the visual appearance of the items being considered. In this paper we propose a scalable factorization model to incorporate visual signals into predictors of people's opinions, which we apply to a selection of large, real-world datasets. We make use of visual features extracted from product images using (pre-trained) deep networks, on top of which we learn an additional layer that uncovers the visual dimensions that best explain the variation in people's feedback. This not only leads to significantly more accurate personalized ranking methods, but also helps to alleviate cold start issues, and qualitatively to analyze the visual dimensions that influence people's opinions.
2012.10660
Martin Servin
Waqqas-ur-Rehman Butt and Martin Servin
The importance of silhouette optimization in 3D shape reconstruction system from multiple object scenes
9 pages, 11 figures
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents a multi stage 3D shape reconstruction system of multiple object scenes by considering the silhouette inconsistencies in shape-from silhouette SFS method. These inconsistencies are common in multiple view images due to object occlusions in different views, segmentation and shadows or reflection due to objects or light directions. These factors raise huge challenges when attempting to construct the 3D shape by using existing approaches which reconstruct only that part of the volume which projects consistently in all the silhouettes, leaving the rest unreconstructed. As a result, final shape are not robust due to multi view objects occlusion and shadows. In this regard, we consider the primary factors affecting reconstruction by analyzing the multiple images and perform pre-processing steps to optimize the silhouettes. Finally, the 3D shape is reconstructed by using the volumetric approach SFS. Theory and experimental results show that, the performance of the modified algorithm was efficiently improved, which can improve the accuracy of the reconstructed shape and being robust to errors in the silhouettes, volume and computational inexpensive.
[ { "created": "Sat, 19 Dec 2020 11:16:57 GMT", "version": "v1" } ]
2020-12-22
[ [ "Butt", "Waqqas-ur-Rehman", "" ], [ "Servin", "Martin", "" ] ]
This paper presents a multi stage 3D shape reconstruction system of multiple object scenes by considering the silhouette inconsistencies in shape-from silhouette SFS method. These inconsistencies are common in multiple view images due to object occlusions in different views, segmentation and shadows or reflection due to objects or light directions. These factors raise huge challenges when attempting to construct the 3D shape by using existing approaches which reconstruct only that part of the volume which projects consistently in all the silhouettes, leaving the rest unreconstructed. As a result, final shape are not robust due to multi view objects occlusion and shadows. In this regard, we consider the primary factors affecting reconstruction by analyzing the multiple images and perform pre-processing steps to optimize the silhouettes. Finally, the 3D shape is reconstructed by using the volumetric approach SFS. Theory and experimental results show that, the performance of the modified algorithm was efficiently improved, which can improve the accuracy of the reconstructed shape and being robust to errors in the silhouettes, volume and computational inexpensive.
1902.03229
Johannes Kirschner
Johannes Kirschner, Mojm\'ir Mutn\'y, Nicole Hiller, Rasmus Ischebeck, Andreas Krause
Adaptive and Safe Bayesian Optimization in High Dimensions via One-Dimensional Subspaces
null
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Bayesian optimization is known to be difficult to scale to high dimensions, because the acquisition step requires solving a non-convex optimization problem in the same search space. In order to scale the method and keep its benefits, we propose an algorithm (LineBO) that restricts the problem to a sequence of iteratively chosen one-dimensional sub-problems that can be solved efficiently. We show that our algorithm converges globally and obtains a fast local rate when the function is strongly convex. Further, if the objective has an invariant subspace, our method automatically adapts to the effective dimension without changing the algorithm. When combined with the SafeOpt algorithm to solve the sub-problems, we obtain the first safe Bayesian optimization algorithm with theoretical guarantees applicable in high-dimensional settings. We evaluate our method on multiple synthetic benchmarks, where we obtain competitive performance. Further, we deploy our algorithm to optimize the beam intensity of the Swiss Free Electron Laser with up to 40 parameters while satisfying safe operation constraints.
[ { "created": "Fri, 8 Feb 2019 18:41:24 GMT", "version": "v1" }, { "created": "Tue, 28 May 2019 15:15:51 GMT", "version": "v2" } ]
2019-05-29
[ [ "Kirschner", "Johannes", "" ], [ "Mutný", "Mojmír", "" ], [ "Hiller", "Nicole", "" ], [ "Ischebeck", "Rasmus", "" ], [ "Krause", "Andreas", "" ] ]
Bayesian optimization is known to be difficult to scale to high dimensions, because the acquisition step requires solving a non-convex optimization problem in the same search space. In order to scale the method and keep its benefits, we propose an algorithm (LineBO) that restricts the problem to a sequence of iteratively chosen one-dimensional sub-problems that can be solved efficiently. We show that our algorithm converges globally and obtains a fast local rate when the function is strongly convex. Further, if the objective has an invariant subspace, our method automatically adapts to the effective dimension without changing the algorithm. When combined with the SafeOpt algorithm to solve the sub-problems, we obtain the first safe Bayesian optimization algorithm with theoretical guarantees applicable in high-dimensional settings. We evaluate our method on multiple synthetic benchmarks, where we obtain competitive performance. Further, we deploy our algorithm to optimize the beam intensity of the Swiss Free Electron Laser with up to 40 parameters while satisfying safe operation constraints.
2203.10893
Ignacio Torroba
Ignacio Torroba, Christopher Illife Sprague, John Folkesson
Fully-probabilistic Terrain Modelling with Stochastic Variational Gaussian Process Maps
null
null
null
null
cs.RO
http://creativecommons.org/licenses/by/4.0/
Gaussian processes (GPs) are becoming a standard tool to build terrain representations thanks to their capacity to model map uncertainty. This effectively yields a reliability measure of the areas of the map, which can be directly utilized by Bayes filtering algorithms in robot localization problems. A key insight is that this uncertainty can incorporate the noise intrinsic to the terrain surveying process through the GPs ability to train on uncertain inputs (UIs). However, existing techniques to build GP maps with UIs in a tractable manner are restricted in the form and degree of the input noise. In this letter, we propose a flexible and efficient framework to build large-scale GP maps with UIs based on Stochastic Variational GPs and Monte Carlo sampling of the UIs distributions. We validate our mapping approach on a large bathymetric survey collected with an AUV and analyze its performance against the use of deterministic inputs (DI). Finally, we show how using UI SVGP maps yields more accurate particle filter localization results than DI SVGP on a real AUV mission over an entirely predicted area.
[ { "created": "Mon, 21 Mar 2022 11:37:08 GMT", "version": "v1" } ]
2022-03-22
[ [ "Torroba", "Ignacio", "" ], [ "Sprague", "Christopher Illife", "" ], [ "Folkesson", "John", "" ] ]
Gaussian processes (GPs) are becoming a standard tool to build terrain representations thanks to their capacity to model map uncertainty. This effectively yields a reliability measure of the areas of the map, which can be directly utilized by Bayes filtering algorithms in robot localization problems. A key insight is that this uncertainty can incorporate the noise intrinsic to the terrain surveying process through the GPs ability to train on uncertain inputs (UIs). However, existing techniques to build GP maps with UIs in a tractable manner are restricted in the form and degree of the input noise. In this letter, we propose a flexible and efficient framework to build large-scale GP maps with UIs based on Stochastic Variational GPs and Monte Carlo sampling of the UIs distributions. We validate our mapping approach on a large bathymetric survey collected with an AUV and analyze its performance against the use of deterministic inputs (DI). Finally, we show how using UI SVGP maps yields more accurate particle filter localization results than DI SVGP on a real AUV mission over an entirely predicted area.
2209.08604
Abhiroop Ghosh
Abhiroop Ghosh, Kalyanmoy Deb, Erik Goodman, and Ronald Averill
An Interactive Knowledge-based Multi-objective Evolutionary Algorithm Framework for Practical Optimization Problems
15 pages, 10 figures in main document; 6 pages, 6 figures in supplementary document
null
null
null
cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Experienced users often have useful knowledge and intuition in solving real-world optimization problems. User knowledge can be formulated as inter-variable relationships to assist an optimization algorithm in finding good solutions faster. Such inter-variable interactions can also be automatically learned from high-performing solutions discovered at intermediate iterations in an optimization run - a process called innovization. These relations, if vetted by the users, can be enforced among newly generated solutions to steer the optimization algorithm towards practically promising regions in the search space. Challenges arise for large-scale problems where the number of such variable relationships may be high. This paper proposes an interactive knowledge-based evolutionary multi-objective optimization (IK-EMO) framework that extracts hidden variable-wise relationships as knowledge from evolving high-performing solutions, shares them with users to receive feedback, and applies them back to the optimization process to improve its effectiveness. The knowledge extraction process uses a systematic and elegant graph analysis method which scales well with number of variables. The working of the proposed IK-EMO is demonstrated on three large-scale real-world engineering design problems. The simplicity and elegance of the proposed knowledge extraction process and achievement of high-performing solutions quickly indicate the power of the proposed framework. The results presented should motivate further such interaction-based optimization studies for their routine use in practice.
[ { "created": "Sun, 18 Sep 2022 16:51:01 GMT", "version": "v1" } ]
2022-09-20
[ [ "Ghosh", "Abhiroop", "" ], [ "Deb", "Kalyanmoy", "" ], [ "Goodman", "Erik", "" ], [ "Averill", "Ronald", "" ] ]
Experienced users often have useful knowledge and intuition in solving real-world optimization problems. User knowledge can be formulated as inter-variable relationships to assist an optimization algorithm in finding good solutions faster. Such inter-variable interactions can also be automatically learned from high-performing solutions discovered at intermediate iterations in an optimization run - a process called innovization. These relations, if vetted by the users, can be enforced among newly generated solutions to steer the optimization algorithm towards practically promising regions in the search space. Challenges arise for large-scale problems where the number of such variable relationships may be high. This paper proposes an interactive knowledge-based evolutionary multi-objective optimization (IK-EMO) framework that extracts hidden variable-wise relationships as knowledge from evolving high-performing solutions, shares them with users to receive feedback, and applies them back to the optimization process to improve its effectiveness. The knowledge extraction process uses a systematic and elegant graph analysis method which scales well with number of variables. The working of the proposed IK-EMO is demonstrated on three large-scale real-world engineering design problems. The simplicity and elegance of the proposed knowledge extraction process and achievement of high-performing solutions quickly indicate the power of the proposed framework. The results presented should motivate further such interaction-based optimization studies for their routine use in practice.
2006.06911
Eli Shlizerman
Jingyuan Li, Eli Shlizerman
Iterate & Cluster: Iterative Semi-Supervised Action Recognition
for associated video, see https://www.youtube.com/watch?v=ewuoz2tt73E
null
null
null
cs.CV cs.LG eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a novel system for active semi-supervised feature-based action recognition. Given time sequences of features tracked during movements our system clusters the sequences into actions. Our system is based on encoder-decoder unsupervised methods shown to perform clustering by self-organization of their latent representation through the auto-regression task. These methods were tested on human action recognition benchmarks and outperformed non-feature based unsupervised methods and achieved comparable accuracy to skeleton-based supervised methods. However, such methods rely on K-Nearest Neighbours (KNN) associating sequences to actions, and general features with no annotated data would correspond to approximate clusters which could be further enhanced. Our system proposes an iterative semi-supervised method to address this challenge and to actively learn the association of clusters and actions. The method utilizes latent space embedding and clustering of the unsupervised encoder-decoder to guide the selection of sequences to be annotated in each iteration. Each iteration, the selection aims to enhance action recognition accuracy while choosing a small number of sequences for annotation. We test the approach on human skeleton-based action recognition benchmarks assuming that only annotations chosen by our method are available and on mouse movements videos recorded in lab experiments. We show that our system can boost recognition performance with only a small percentage of annotations. The system can be used as an interactive annotation tool to guide labeling efforts for 'in the wild' videos of various objects and actions to reach robust recognition.
[ { "created": "Fri, 12 Jun 2020 02:19:39 GMT", "version": "v1" } ]
2020-06-15
[ [ "Li", "Jingyuan", "" ], [ "Shlizerman", "Eli", "" ] ]
We propose a novel system for active semi-supervised feature-based action recognition. Given time sequences of features tracked during movements our system clusters the sequences into actions. Our system is based on encoder-decoder unsupervised methods shown to perform clustering by self-organization of their latent representation through the auto-regression task. These methods were tested on human action recognition benchmarks and outperformed non-feature based unsupervised methods and achieved comparable accuracy to skeleton-based supervised methods. However, such methods rely on K-Nearest Neighbours (KNN) associating sequences to actions, and general features with no annotated data would correspond to approximate clusters which could be further enhanced. Our system proposes an iterative semi-supervised method to address this challenge and to actively learn the association of clusters and actions. The method utilizes latent space embedding and clustering of the unsupervised encoder-decoder to guide the selection of sequences to be annotated in each iteration. Each iteration, the selection aims to enhance action recognition accuracy while choosing a small number of sequences for annotation. We test the approach on human skeleton-based action recognition benchmarks assuming that only annotations chosen by our method are available and on mouse movements videos recorded in lab experiments. We show that our system can boost recognition performance with only a small percentage of annotations. The system can be used as an interactive annotation tool to guide labeling efforts for 'in the wild' videos of various objects and actions to reach robust recognition.
1409.2762
Thalia Karydi
Efthalia Karydi and Konstantinos G. Margaritis
Parallel and Distributed Collaborative Filtering: A Survey
46 pages
null
null
null
cs.IR cs.DC
http://creativecommons.org/licenses/by-nc-sa/3.0/
Collaborative filtering is amongst the most preferred techniques when implementing recommender systems. Recently, great interest has turned towards parallel and distributed implementations of collaborative filtering algorithms. This work is a survey of the parallel and distributed collaborative filtering implementations, aiming not only to provide a comprehensive presentation of the field's development, but also to offer future research orientation by highlighting the issues that need to be further developed.
[ { "created": "Tue, 9 Sep 2014 14:54:49 GMT", "version": "v1" } ]
2014-09-10
[ [ "Karydi", "Efthalia", "" ], [ "Margaritis", "Konstantinos G.", "" ] ]
Collaborative filtering is amongst the most preferred techniques when implementing recommender systems. Recently, great interest has turned towards parallel and distributed implementations of collaborative filtering algorithms. This work is a survey of the parallel and distributed collaborative filtering implementations, aiming not only to provide a comprehensive presentation of the field's development, but also to offer future research orientation by highlighting the issues that need to be further developed.
1608.07636
Hossein Hosseini
Hossein Hosseini, Sreeram Kannan, Baosen Zhang and Radha Poovendran
Learning Temporal Dependence from Time-Series Data with Latent Variables
null
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider the setting where a collection of time series, modeled as random processes, evolve in a causal manner, and one is interested in learning the graph governing the relationships of these processes. A special case of wide interest and applicability is the setting where the noise is Gaussian and relationships are Markov and linear. We study this setting with two additional features: firstly, each random process has a hidden (latent) state, which we use to model the internal memory possessed by the variables (similar to hidden Markov models). Secondly, each variable can depend on its latent memory state through a random lag (rather than a fixed lag), thus modeling memory recall with differing lags at distinct times. Under this setting, we develop an estimator and prove that under a genericity assumption, the parameters of the model can be learned consistently. We also propose a practical adaption of this estimator, which demonstrates significant performance gains in both synthetic and real-world datasets.
[ { "created": "Sat, 27 Aug 2016 00:25:54 GMT", "version": "v1" } ]
2016-08-30
[ [ "Hosseini", "Hossein", "" ], [ "Kannan", "Sreeram", "" ], [ "Zhang", "Baosen", "" ], [ "Poovendran", "Radha", "" ] ]
We consider the setting where a collection of time series, modeled as random processes, evolve in a causal manner, and one is interested in learning the graph governing the relationships of these processes. A special case of wide interest and applicability is the setting where the noise is Gaussian and relationships are Markov and linear. We study this setting with two additional features: firstly, each random process has a hidden (latent) state, which we use to model the internal memory possessed by the variables (similar to hidden Markov models). Secondly, each variable can depend on its latent memory state through a random lag (rather than a fixed lag), thus modeling memory recall with differing lags at distinct times. Under this setting, we develop an estimator and prove that under a genericity assumption, the parameters of the model can be learned consistently. We also propose a practical adaption of this estimator, which demonstrates significant performance gains in both synthetic and real-world datasets.
2301.12322
Chad Mello
Chad Mello, Troy Weingart and Ethan M. Rudd
Cross-Subject Deep Transfer Models for Evoked Potentials in Brain-Computer Interface
Postprint of a manuscript accepted to the International Conference on Pattern Recognition (ICPR) 2022
null
10.1109/ICPR56361.2022.9956463
null
cs.LG cs.AI cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Brain Computer Interface (BCI) technologies have the potential to improve the lives of millions of people around the world, whether through assistive technologies or clinical diagnostic tools. Despite advancements in the field, however, at present consumer and clinical viability remains low. A key reason for this is that many of the existing BCI deployments require substantial data collection per end-user, which can be cumbersome, tedious, and error-prone to collect. We address this challenge via a deep learning model, which, when trained across sufficient data from multiple subjects, offers reasonable performance out-of-the-box, and can be customized to novel subjects via a transfer learning process. We demonstrate the fundamental viability of our approach by repurposing an older but well-curated electroencephalography (EEG) dataset and benchmarking against several common approaches/techniques. We then partition this dataset into a transfer learning benchmark and demonstrate that our approach significantly reduces data collection burden per-subject. This suggests that our model and methodology may yield improvements to BCI technologies and enhance their consumer/clinical viability.
[ { "created": "Sun, 29 Jan 2023 02:11:36 GMT", "version": "v1" } ]
2023-01-31
[ [ "Mello", "Chad", "" ], [ "Weingart", "Troy", "" ], [ "Rudd", "Ethan M.", "" ] ]
Brain Computer Interface (BCI) technologies have the potential to improve the lives of millions of people around the world, whether through assistive technologies or clinical diagnostic tools. Despite advancements in the field, however, at present consumer and clinical viability remains low. A key reason for this is that many of the existing BCI deployments require substantial data collection per end-user, which can be cumbersome, tedious, and error-prone to collect. We address this challenge via a deep learning model, which, when trained across sufficient data from multiple subjects, offers reasonable performance out-of-the-box, and can be customized to novel subjects via a transfer learning process. We demonstrate the fundamental viability of our approach by repurposing an older but well-curated electroencephalography (EEG) dataset and benchmarking against several common approaches/techniques. We then partition this dataset into a transfer learning benchmark and demonstrate that our approach significantly reduces data collection burden per-subject. This suggests that our model and methodology may yield improvements to BCI technologies and enhance their consumer/clinical viability.
1107.4652
Yanjun Ma
Yanjun Ma, Jiandong Li, Rui Chen, and Qin Liu
On the Achievability of Interference Alignment for Three-Cell Constant Cellular Interfering Networks
null
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
For a three-cell constant cellular interfering network, a new property of alignment is identified, i.e., interference alignment (IA) solution obtained in an user-cooperation scenario can also be applied in a non-cooperation environment. By using this property, an algorithm is proposed by jointly designing transmit and receive beamforming matrices. Analysis and numerical results show that more degree of freedom (DoF) can be achieved compared with conventional schemes in most cases.
[ { "created": "Sat, 23 Jul 2011 03:45:24 GMT", "version": "v1" }, { "created": "Wed, 26 Oct 2011 02:34:18 GMT", "version": "v2" }, { "created": "Wed, 29 Feb 2012 11:22:57 GMT", "version": "v3" }, { "created": "Thu, 1 Mar 2012 08:19:07 GMT", "version": "v4" } ]
2012-03-02
[ [ "Ma", "Yanjun", "" ], [ "Li", "Jiandong", "" ], [ "Chen", "Rui", "" ], [ "Liu", "Qin", "" ] ]
For a three-cell constant cellular interfering network, a new property of alignment is identified, i.e., interference alignment (IA) solution obtained in an user-cooperation scenario can also be applied in a non-cooperation environment. By using this property, an algorithm is proposed by jointly designing transmit and receive beamforming matrices. Analysis and numerical results show that more degree of freedom (DoF) can be achieved compared with conventional schemes in most cases.
2203.15361
Nenglun Chen
Nenglun Chen, Lei Chu, Hao Pan, Yan Lu and Wenping Wang
Self-Supervised Image Representation Learning with Geometric Set Consistency
Accepted by CVPR 2022
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a method for self-supervised image representation learning under the guidance of 3D geometric consistency. Our intuition is that 3D geometric consistency priors such as smooth regions and surface discontinuities may imply consistent semantics or object boundaries, and can act as strong cues to guide the learning of 2D image representations without semantic labels. Specifically, we introduce 3D geometric consistency into a contrastive learning framework to enforce the feature consistency within image views. We propose to use geometric consistency sets as constraints and adapt the InfoNCE loss accordingly. We show that our learned image representations are general. By fine-tuning our pre-trained representations for various 2D image-based downstream tasks, including semantic segmentation, object detection, and instance segmentation on real-world indoor scene datasets, we achieve superior performance compared with state-of-the-art methods.
[ { "created": "Tue, 29 Mar 2022 08:57:33 GMT", "version": "v1" } ]
2022-03-30
[ [ "Chen", "Nenglun", "" ], [ "Chu", "Lei", "" ], [ "Pan", "Hao", "" ], [ "Lu", "Yan", "" ], [ "Wang", "Wenping", "" ] ]
We propose a method for self-supervised image representation learning under the guidance of 3D geometric consistency. Our intuition is that 3D geometric consistency priors such as smooth regions and surface discontinuities may imply consistent semantics or object boundaries, and can act as strong cues to guide the learning of 2D image representations without semantic labels. Specifically, we introduce 3D geometric consistency into a contrastive learning framework to enforce the feature consistency within image views. We propose to use geometric consistency sets as constraints and adapt the InfoNCE loss accordingly. We show that our learned image representations are general. By fine-tuning our pre-trained representations for various 2D image-based downstream tasks, including semantic segmentation, object detection, and instance segmentation on real-world indoor scene datasets, we achieve superior performance compared with state-of-the-art methods.
1909.09369
Rafael Poyiadzi
Rafael Poyiadzi, Kacper Sokol, Raul Santos-Rodriguez, Tijl De Bie, Peter Flach
FACE: Feasible and Actionable Counterfactual Explanations
Presented at AAAI/ACM Conference on AI, Ethics, and Society 2020
null
10.1145/3375627.3375850
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Work in Counterfactual Explanations tends to focus on the principle of "the closest possible world" that identifies small changes leading to the desired outcome. In this paper we argue that while this approach might initially seem intuitively appealing it exhibits shortcomings not addressed in the current literature. First, a counterfactual example generated by the state-of-the-art systems is not necessarily representative of the underlying data distribution, and may therefore prescribe unachievable goals(e.g., an unsuccessful life insurance applicant with severe disability may be advised to do more sports). Secondly, the counterfactuals may not be based on a "feasible path" between the current state of the subject and the suggested one, making actionable recourse infeasible (e.g., low-skilled unsuccessful mortgage applicants may be told to double their salary, which may be hard without first increasing their skill level). These two shortcomings may render counterfactual explanations impractical and sometimes outright offensive. To address these two major flaws, first of all, we propose a new line of Counterfactual Explanations research aimed at providing actionable and feasible paths to transform a selected instance into one that meets a certain goal. Secondly, we propose FACE: an algorithmically sound way of uncovering these "feasible paths" based on the shortest path distances defined via density-weighted metrics. Our approach generates counterfactuals that are coherent with the underlying data distribution and supported by the "feasible paths" of change, which are achievable and can be tailored to the problem at hand.
[ { "created": "Fri, 20 Sep 2019 08:29:35 GMT", "version": "v1" }, { "created": "Mon, 24 Feb 2020 15:39:07 GMT", "version": "v2" } ]
2020-02-25
[ [ "Poyiadzi", "Rafael", "" ], [ "Sokol", "Kacper", "" ], [ "Santos-Rodriguez", "Raul", "" ], [ "De Bie", "Tijl", "" ], [ "Flach", "Peter", "" ] ]
Work in Counterfactual Explanations tends to focus on the principle of "the closest possible world" that identifies small changes leading to the desired outcome. In this paper we argue that while this approach might initially seem intuitively appealing it exhibits shortcomings not addressed in the current literature. First, a counterfactual example generated by the state-of-the-art systems is not necessarily representative of the underlying data distribution, and may therefore prescribe unachievable goals(e.g., an unsuccessful life insurance applicant with severe disability may be advised to do more sports). Secondly, the counterfactuals may not be based on a "feasible path" between the current state of the subject and the suggested one, making actionable recourse infeasible (e.g., low-skilled unsuccessful mortgage applicants may be told to double their salary, which may be hard without first increasing their skill level). These two shortcomings may render counterfactual explanations impractical and sometimes outright offensive. To address these two major flaws, first of all, we propose a new line of Counterfactual Explanations research aimed at providing actionable and feasible paths to transform a selected instance into one that meets a certain goal. Secondly, we propose FACE: an algorithmically sound way of uncovering these "feasible paths" based on the shortest path distances defined via density-weighted metrics. Our approach generates counterfactuals that are coherent with the underlying data distribution and supported by the "feasible paths" of change, which are achievable and can be tailored to the problem at hand.
2406.11371
Xin Zhang
Feng Huang, Xin Zhang, Yixuan Xu, Xuesong Wang and Xianyu Wu
Video Frame Interpolation for Polarization via Swin-Transformer
18 pages, 10 figures, 7 tables, 73 citations
null
null
null
cs.CV physics.optics
http://creativecommons.org/licenses/by-nc-sa/4.0/
Video Frame Interpolation (VFI) has been extensively explored and demonstrated, yet its application to polarization remains largely unexplored. Due to the selective transmission of light by polarized filters, longer exposure times are typically required to ensure sufficient light intensity, which consequently lower the temporal sample rates. Furthermore, because polarization reflected by objects varies with shooting perspective, focusing solely on estimating pixel displacement is insufficient to accurately reconstruct the intermediate polarization. To tackle these challenges, this study proposes a multi-stage and multi-scale network called Swin-VFI based on the Swin-Transformer and introduces a tailored loss function to facilitate the network's understanding of polarization changes. To ensure the practicality of our proposed method, this study evaluates its interpolated frames in Shape from Polarization (SfP) and Human Shape Reconstruction tasks, comparing them with other state-of-the-art methods such as CAIN, FLAVR, and VFIT. Experimental results demonstrate our approach's superior reconstruction accuracy across all tasks.
[ { "created": "Mon, 17 Jun 2024 09:48:54 GMT", "version": "v1" } ]
2024-06-18
[ [ "Huang", "Feng", "" ], [ "Zhang", "Xin", "" ], [ "Xu", "Yixuan", "" ], [ "Wang", "Xuesong", "" ], [ "Wu", "Xianyu", "" ] ]
Video Frame Interpolation (VFI) has been extensively explored and demonstrated, yet its application to polarization remains largely unexplored. Due to the selective transmission of light by polarized filters, longer exposure times are typically required to ensure sufficient light intensity, which consequently lower the temporal sample rates. Furthermore, because polarization reflected by objects varies with shooting perspective, focusing solely on estimating pixel displacement is insufficient to accurately reconstruct the intermediate polarization. To tackle these challenges, this study proposes a multi-stage and multi-scale network called Swin-VFI based on the Swin-Transformer and introduces a tailored loss function to facilitate the network's understanding of polarization changes. To ensure the practicality of our proposed method, this study evaluates its interpolated frames in Shape from Polarization (SfP) and Human Shape Reconstruction tasks, comparing them with other state-of-the-art methods such as CAIN, FLAVR, and VFIT. Experimental results demonstrate our approach's superior reconstruction accuracy across all tasks.
1508.05077
Gerardo Vega
Gerardo Vega
A characterization of a class of optimal three-weight cyclic codes of dimension 3 over any finite field
Preprint submitted to Finite Fields and Their Applications August 20, 2015
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
It is well known that the problem of determining the weight distributions of families of cyclic codes is, in general, notoriously difficult. An even harder problem is to find characterizations of families of cyclic codes in terms of their weight distributions. On the other hand, it is also well known that cyclic codes with few weights have a great practical importance in coding theory and cryptography. In particular, cyclic codes having three nonzero weights have been studied by several authors, however, most of these efforts focused on cyclic codes over a prime field. In this work we present a characterization of a class of optimal three-weight cyclic codes of dimension 3 over any finite field. The codes under this characterization are, indeed, optimal in the sense that their lengths reach the Griesmer lower bound for linear codes. Consequently, these codes reach, simultaneously, the best possible coding capacity, and also the best possible capabilities of error detection and correction for linear codes. But because they are cyclic in nature, they also possess a rich algebraic structure that can be utilized in a variety of ways, particularly, in the design of very efficient coding and decoding algorithms. What is also worth pointing out, is the simplicity of the necessary and sufficient numerical conditions that characterize our class of optimal three-weight cyclic codes. As we already pointed out, it is a hard problem to find this kind of characterizations. However, for this particular case the fundamental tool that allowed us to find our characterization was the characterization for all two-weight irreducible cyclic codes that was introduced by B. Schmidt and C. White (2002). Lastly, another feature about the codes in this class, is that their duals seem to have always the same parameters as the best known linear codes.
[ { "created": "Thu, 20 Aug 2015 19:29:26 GMT", "version": "v1" } ]
2015-08-21
[ [ "Vega", "Gerardo", "" ] ]
It is well known that the problem of determining the weight distributions of families of cyclic codes is, in general, notoriously difficult. An even harder problem is to find characterizations of families of cyclic codes in terms of their weight distributions. On the other hand, it is also well known that cyclic codes with few weights have a great practical importance in coding theory and cryptography. In particular, cyclic codes having three nonzero weights have been studied by several authors, however, most of these efforts focused on cyclic codes over a prime field. In this work we present a characterization of a class of optimal three-weight cyclic codes of dimension 3 over any finite field. The codes under this characterization are, indeed, optimal in the sense that their lengths reach the Griesmer lower bound for linear codes. Consequently, these codes reach, simultaneously, the best possible coding capacity, and also the best possible capabilities of error detection and correction for linear codes. But because they are cyclic in nature, they also possess a rich algebraic structure that can be utilized in a variety of ways, particularly, in the design of very efficient coding and decoding algorithms. What is also worth pointing out, is the simplicity of the necessary and sufficient numerical conditions that characterize our class of optimal three-weight cyclic codes. As we already pointed out, it is a hard problem to find this kind of characterizations. However, for this particular case the fundamental tool that allowed us to find our characterization was the characterization for all two-weight irreducible cyclic codes that was introduced by B. Schmidt and C. White (2002). Lastly, another feature about the codes in this class, is that their duals seem to have always the same parameters as the best known linear codes.
1903.10412
Chongsheng Zhang
Chongsheng Zhang and Guowen Peng and Yuefeng Tao and Feifei Fu and Wei Jiang and George Almpanidis and Ke Chen
ShopSign: a Diverse Scene Text Dataset of Chinese Shop Signs in Street Views
10 pages, 2 figures, 5 tables
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we introduce the ShopSign dataset, which is a newly developed natural scene text dataset of Chinese shop signs in street views. Although a few scene text datasets are already publicly available (e.g. ICDAR2015, COCO-Text), there are few images in these datasets that contain Chinese texts/characters. Hence, we collect and annotate the ShopSign dataset to advance research in Chinese scene text detection and recognition. The new dataset has three distinctive characteristics: (1) large-scale: it contains 25,362 Chinese shop sign images, with a total number of 196,010 text-lines. (2) diversity: the images in ShopSign were captured in different scenes, from downtown to developing regions, using more than 50 different mobile phones. (3) difficulty: the dataset is very sparse and imbalanced. It also includes five categories of hard images (mirror, wooden, deformed, exposed and obscure). To illustrate the challenges in ShopSign, we run baseline experiments using state-of-the-art scene text detection methods (including CTPN, TextBoxes++ and EAST), and cross-dataset validation to compare their corresponding performance on the related datasets such as CTW, RCTW and ICPR 2018 MTWI challenge dataset. The sample images and detailed descriptions of our ShopSign dataset are publicly available at: https://github.com/chongshengzhang/shopsign.
[ { "created": "Mon, 25 Mar 2019 15:52:32 GMT", "version": "v1" } ]
2019-03-26
[ [ "Zhang", "Chongsheng", "" ], [ "Peng", "Guowen", "" ], [ "Tao", "Yuefeng", "" ], [ "Fu", "Feifei", "" ], [ "Jiang", "Wei", "" ], [ "Almpanidis", "George", "" ], [ "Chen", "Ke", "" ] ]
In this paper, we introduce the ShopSign dataset, which is a newly developed natural scene text dataset of Chinese shop signs in street views. Although a few scene text datasets are already publicly available (e.g. ICDAR2015, COCO-Text), there are few images in these datasets that contain Chinese texts/characters. Hence, we collect and annotate the ShopSign dataset to advance research in Chinese scene text detection and recognition. The new dataset has three distinctive characteristics: (1) large-scale: it contains 25,362 Chinese shop sign images, with a total number of 196,010 text-lines. (2) diversity: the images in ShopSign were captured in different scenes, from downtown to developing regions, using more than 50 different mobile phones. (3) difficulty: the dataset is very sparse and imbalanced. It also includes five categories of hard images (mirror, wooden, deformed, exposed and obscure). To illustrate the challenges in ShopSign, we run baseline experiments using state-of-the-art scene text detection methods (including CTPN, TextBoxes++ and EAST), and cross-dataset validation to compare their corresponding performance on the related datasets such as CTW, RCTW and ICPR 2018 MTWI challenge dataset. The sample images and detailed descriptions of our ShopSign dataset are publicly available at: https://github.com/chongshengzhang/shopsign.
2003.02931
Barbara Plank
Barbara Plank
Neural Cross-Lingual Transfer and Limited Annotated Data for Named Entity Recognition in Danish
Published at NoDaLiDa 2019; updated (system, data and repository details)
NoDaLiDa, 2019
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Named Entity Recognition (NER) has greatly advanced by the introduction of deep neural architectures. However, the success of these methods depends on large amounts of training data. The scarcity of publicly-available human-labeled datasets has resulted in limited evaluation of existing NER systems, as is the case for Danish. This paper studies the effectiveness of cross-lingual transfer for Danish, evaluates its complementarity to limited gold data, and sheds light on performance of Danish NER.
[ { "created": "Thu, 5 Mar 2020 21:25:00 GMT", "version": "v1" } ]
2020-03-09
[ [ "Plank", "Barbara", "" ] ]
Named Entity Recognition (NER) has greatly advanced by the introduction of deep neural architectures. However, the success of these methods depends on large amounts of training data. The scarcity of publicly-available human-labeled datasets has resulted in limited evaluation of existing NER systems, as is the case for Danish. This paper studies the effectiveness of cross-lingual transfer for Danish, evaluates its complementarity to limited gold data, and sheds light on performance of Danish NER.
1903.05382
Lior Rokach
Eran Fainman, Bracha Shapira, Lior Rokach, Yisroel Mirsky
Online Budgeted Learning for Classifier Induction
null
null
null
null
cs.LG cs.AI stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In real-world machine learning applications, there is a cost associated with sampling of different features. Budgeted learning can be used to select which feature-values to acquire from each instance in a dataset, such that the best model is induced under a given constraint. However, this approach is not possible in the domain of online learning since one may not retroactively acquire feature-values from past instances. In online learning, the challenge is to find the optimum set of features to be acquired from each instance upon arrival from a data stream. In this paper we introduce the issue of online budgeted learning and describe a general framework for addressing this challenge. We propose two types of feature value acquisition policies based on the multi-armed bandit problem: random and adaptive. Adaptive policies perform online adjustments according to new information coming from a data stream, while random policies are not sensitive to the information that arrives from the data stream. Our comparative study on five real-world datasets indicates that adaptive policies outperform random policies for most budget limitations and datasets. Furthermore, we found that in some cases adaptive policies achieve near-optimal results.
[ { "created": "Wed, 13 Mar 2019 09:51:33 GMT", "version": "v1" } ]
2019-03-14
[ [ "Fainman", "Eran", "" ], [ "Shapira", "Bracha", "" ], [ "Rokach", "Lior", "" ], [ "Mirsky", "Yisroel", "" ] ]
In real-world machine learning applications, there is a cost associated with sampling of different features. Budgeted learning can be used to select which feature-values to acquire from each instance in a dataset, such that the best model is induced under a given constraint. However, this approach is not possible in the domain of online learning since one may not retroactively acquire feature-values from past instances. In online learning, the challenge is to find the optimum set of features to be acquired from each instance upon arrival from a data stream. In this paper we introduce the issue of online budgeted learning and describe a general framework for addressing this challenge. We propose two types of feature value acquisition policies based on the multi-armed bandit problem: random and adaptive. Adaptive policies perform online adjustments according to new information coming from a data stream, while random policies are not sensitive to the information that arrives from the data stream. Our comparative study on five real-world datasets indicates that adaptive policies outperform random policies for most budget limitations and datasets. Furthermore, we found that in some cases adaptive policies achieve near-optimal results.
1802.02971
Xiaoran Wang
Xiaoran Wang, Benwen Zhang
Comment Generation for Source Code: State of the Art, Challenges and Opportunities
Survey of Automatic Comment Generation for Source Code
null
null
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Researches have shown that most effort of today's software development is maintenance and evolution. Developers often use integrated development environments, debuggers, and tools for code search, testing, and program understanding to reduce the tedious tasks. One way to make software development more efficient is to make the program more readable. There have been many approaches proposed and developed for this purpose. Among these approaches, comment generation for source code is gaining more and more attention and has become a popular research area. In this paper, the state of art in comment generation research area are summarized and the challenges and future opportunities are discussed.
[ { "created": "Fri, 5 Jan 2018 02:02:19 GMT", "version": "v1" }, { "created": "Fri, 7 Sep 2018 17:23:21 GMT", "version": "v2" } ]
2018-09-10
[ [ "Wang", "Xiaoran", "" ], [ "Zhang", "Benwen", "" ] ]
Researches have shown that most effort of today's software development is maintenance and evolution. Developers often use integrated development environments, debuggers, and tools for code search, testing, and program understanding to reduce the tedious tasks. One way to make software development more efficient is to make the program more readable. There have been many approaches proposed and developed for this purpose. Among these approaches, comment generation for source code is gaining more and more attention and has become a popular research area. In this paper, the state of art in comment generation research area are summarized and the challenges and future opportunities are discussed.
2001.10809
Maximilian Probst Gutenberg
Maximilian Probst Gutenberg, Christian Wulff-Nilsen
Deterministic Algorithms for Decremental Approximate Shortest Paths: Faster and Simpler
Appeared in SODA'20
null
10.1137/1.9781611975994.154
null
cs.DS
http://creativecommons.org/licenses/by/4.0/
In the decremental $(1+\epsilon)$-approximate Single-Source Shortest Path (SSSP) problem, we are given a graph $G=(V,E)$ with $n = |V|, m = |E|$, undergoing edge deletions, and a distinguished source $s \in V$, and we are asked to process edge deletions efficiently and answer queries for distance estimates $\widetilde{\mathbf{dist}}_G(s,v)$ for each $v \in V$, at any stage, such that $\mathbf{dist}_G(s,v) \leq \widetilde{\mathbf{dist}}_G(s,v) \leq (1+ \epsilon)\mathbf{dist}_G(s,v)$. In the decremental $(1+\epsilon)$-approximate All-Pairs Shortest Path (APSP) problem, we are asked to answer queries for distance estimates $\widetilde{\mathbf{dist}}_G(u,v)$ for every $u,v \in V$. In this article, we consider the problems for undirected, unweighted graphs. We present a new \emph{deterministic} algorithm for the decremental $(1+\epsilon)$-approximate SSSP problem that takes total update time $O(mn^{0.5 + o(1)})$. Our algorithm improves on the currently best algorithm for dense graphs by Chechik and Bernstein [STOC 2016] with total update time $\tilde{O}(n^2)$ and the best existing algorithm for sparse graphs with running time $\tilde{O}(n^{1.25}\sqrt{m})$ [SODA 2017] whenever $m = O(n^{1.5 - o(1)})$. In order to obtain this new algorithm, we develop several new techniques including improved decremental cover data structures for graphs, a more efficient notion of the heavy/light decomposition framework introduced by Chechik and Bernstein and the first clustering technique to maintain a dynamic \emph{sparse} emulator in the deterministic setting. As a by-product, we also obtain a new simple deterministic algorithm for the decremental $(1+\epsilon)$-approximate APSP problem with near-optimal total running time $\tilde{O}(mn /\epsilon)$ matching the time complexity of the sophisticated but rather involved algorithm by Henzinger, Forster and Nanongkai [FOCS 2013].
[ { "created": "Wed, 29 Jan 2020 13:22:22 GMT", "version": "v1" } ]
2020-01-30
[ [ "Gutenberg", "Maximilian Probst", "" ], [ "Wulff-Nilsen", "Christian", "" ] ]
In the decremental $(1+\epsilon)$-approximate Single-Source Shortest Path (SSSP) problem, we are given a graph $G=(V,E)$ with $n = |V|, m = |E|$, undergoing edge deletions, and a distinguished source $s \in V$, and we are asked to process edge deletions efficiently and answer queries for distance estimates $\widetilde{\mathbf{dist}}_G(s,v)$ for each $v \in V$, at any stage, such that $\mathbf{dist}_G(s,v) \leq \widetilde{\mathbf{dist}}_G(s,v) \leq (1+ \epsilon)\mathbf{dist}_G(s,v)$. In the decremental $(1+\epsilon)$-approximate All-Pairs Shortest Path (APSP) problem, we are asked to answer queries for distance estimates $\widetilde{\mathbf{dist}}_G(u,v)$ for every $u,v \in V$. In this article, we consider the problems for undirected, unweighted graphs. We present a new \emph{deterministic} algorithm for the decremental $(1+\epsilon)$-approximate SSSP problem that takes total update time $O(mn^{0.5 + o(1)})$. Our algorithm improves on the currently best algorithm for dense graphs by Chechik and Bernstein [STOC 2016] with total update time $\tilde{O}(n^2)$ and the best existing algorithm for sparse graphs with running time $\tilde{O}(n^{1.25}\sqrt{m})$ [SODA 2017] whenever $m = O(n^{1.5 - o(1)})$. In order to obtain this new algorithm, we develop several new techniques including improved decremental cover data structures for graphs, a more efficient notion of the heavy/light decomposition framework introduced by Chechik and Bernstein and the first clustering technique to maintain a dynamic \emph{sparse} emulator in the deterministic setting. As a by-product, we also obtain a new simple deterministic algorithm for the decremental $(1+\epsilon)$-approximate APSP problem with near-optimal total running time $\tilde{O}(mn /\epsilon)$ matching the time complexity of the sophisticated but rather involved algorithm by Henzinger, Forster and Nanongkai [FOCS 2013].
0806.3681
Alessandro Nordio
Alessandro Nordio, Carla-Fabiana Chiasserini, Emanuele Viterbo
On the d-dimensional Quasi-Equally Spaced Sampling
submitted to IEEE Transactions on Signal Processing
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study a class of random matrices that appear in several communication and signal processing applications, and whose asymptotic eigenvalue distribution is closely related to the reconstruction error of an irregularly sampled bandlimited signal. We focus on the case where the random variables characterizing these matrices are d-dimensional vectors, independent, and quasi-equally spaced, i.e., they have an arbitrary distribution and their averages are vertices of a d-dimensional grid. Although a closed form expression of the eigenvalue distribution is still unknown, under these conditions we are able (i) to derive the distribution moments as the matrix size grows to infinity, while its aspect ratio is kept constant, and (ii) to show that the eigenvalue distribution tends to the Marcenko-Pastur law as d->infinity. These results can find application in several fields, as an example we show how they can be used for the estimation of the mean square error provided by linear reconstruction techniques.
[ { "created": "Mon, 23 Jun 2008 13:20:10 GMT", "version": "v1" } ]
2008-06-24
[ [ "Nordio", "Alessandro", "" ], [ "Chiasserini", "Carla-Fabiana", "" ], [ "Viterbo", "Emanuele", "" ] ]
We study a class of random matrices that appear in several communication and signal processing applications, and whose asymptotic eigenvalue distribution is closely related to the reconstruction error of an irregularly sampled bandlimited signal. We focus on the case where the random variables characterizing these matrices are d-dimensional vectors, independent, and quasi-equally spaced, i.e., they have an arbitrary distribution and their averages are vertices of a d-dimensional grid. Although a closed form expression of the eigenvalue distribution is still unknown, under these conditions we are able (i) to derive the distribution moments as the matrix size grows to infinity, while its aspect ratio is kept constant, and (ii) to show that the eigenvalue distribution tends to the Marcenko-Pastur law as d->infinity. These results can find application in several fields, as an example we show how they can be used for the estimation of the mean square error provided by linear reconstruction techniques.
cs/0505009
Arindam Mitra
Arindam Mitra
Human being is a living random number generator
PDF, Revised
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
General wisdom is, mathematical operation is needed to generate number by numbers. It is pointed out that without any mathematical operation true random numbers can be generated by numbers through algorithmic process. It implies that human brain itself is a living true random number generator. Human brain can meet the enormous human demand of true random numbers.
[ { "created": "Tue, 3 May 2005 15:42:24 GMT", "version": "v1" }, { "created": "Tue, 21 Aug 2007 15:28:48 GMT", "version": "v10" }, { "created": "Thu, 23 Aug 2007 15:10:06 GMT", "version": "v11" }, { "created": "Wed, 14 Nov 2007 16:04:01 GMT", "version": "v12" }, { "created": "Fri, 13 Jun 2008 15:41:34 GMT", "version": "v13" }, { "created": "Mon, 14 Jul 2008 13:44:30 GMT", "version": "v14" }, { "created": "Thu, 24 Jul 2008 14:43:57 GMT", "version": "v15" }, { "created": "Sat, 27 Dec 2008 15:56:26 GMT", "version": "v16" }, { "created": "Tue, 16 Jun 2009 10:57:47 GMT", "version": "v17" }, { "created": "Thu, 5 May 2005 13:09:26 GMT", "version": "v2" }, { "created": "Fri, 8 Jul 2005 06:20:22 GMT", "version": "v3" }, { "created": "Thu, 26 Oct 2006 14:34:45 GMT", "version": "v4" }, { "created": "Tue, 9 Jan 2007 15:54:27 GMT", "version": "v5" }, { "created": "Wed, 31 Jan 2007 12:44:08 GMT", "version": "v6" }, { "created": "Wed, 7 Feb 2007 15:48:03 GMT", "version": "v7" }, { "created": "Thu, 8 Mar 2007 14:49:33 GMT", "version": "v8" }, { "created": "Wed, 18 Jul 2007 15:24:15 GMT", "version": "v9" } ]
2009-06-16
[ [ "Mitra", "Arindam", "" ] ]
General wisdom is, mathematical operation is needed to generate number by numbers. It is pointed out that without any mathematical operation true random numbers can be generated by numbers through algorithmic process. It implies that human brain itself is a living true random number generator. Human brain can meet the enormous human demand of true random numbers.
2011.14311
Xiaoxu Li
Xiaoxu Li, Jijie Wu, Zhuo Sun, Zhanyu Ma, Jie Cao, Jing-Hao Xue
BSNet: Bi-Similarity Network for Few-shot Fine-grained Image Classification
IEEE TIP 2020
null
10.1109/TIP.2020.3043128
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Few-shot learning for fine-grained image classification has gained recent attention in computer vision. Among the approaches for few-shot learning, due to the simplicity and effectiveness, metric-based methods are favorably state-of-the-art on many tasks. Most of the metric-based methods assume a single similarity measure and thus obtain a single feature space. However, if samples can simultaneously be well classified via two distinct similarity measures, the samples within a class can distribute more compactly in a smaller feature space, producing more discriminative feature maps. Motivated by this, we propose a so-called \textit{Bi-Similarity Network} (\textit{BSNet}) that consists of a single embedding module and a bi-similarity module of two similarity measures. After the support images and the query images pass through the convolution-based embedding module, the bi-similarity module learns feature maps according to two similarity measures of diverse characteristics. In this way, the model is enabled to learn more discriminative and less similarity-biased features from few shots of fine-grained images, such that the model generalization ability can be significantly improved. Through extensive experiments by slightly modifying established metric/similarity based networks, we show that the proposed approach produces a substantial improvement on several fine-grained image benchmark datasets. Codes are available at: https://github.com/spraise/BSNet
[ { "created": "Sun, 29 Nov 2020 08:38:17 GMT", "version": "v1" } ]
2021-02-03
[ [ "Li", "Xiaoxu", "" ], [ "Wu", "Jijie", "" ], [ "Sun", "Zhuo", "" ], [ "Ma", "Zhanyu", "" ], [ "Cao", "Jie", "" ], [ "Xue", "Jing-Hao", "" ] ]
Few-shot learning for fine-grained image classification has gained recent attention in computer vision. Among the approaches for few-shot learning, due to the simplicity and effectiveness, metric-based methods are favorably state-of-the-art on many tasks. Most of the metric-based methods assume a single similarity measure and thus obtain a single feature space. However, if samples can simultaneously be well classified via two distinct similarity measures, the samples within a class can distribute more compactly in a smaller feature space, producing more discriminative feature maps. Motivated by this, we propose a so-called \textit{Bi-Similarity Network} (\textit{BSNet}) that consists of a single embedding module and a bi-similarity module of two similarity measures. After the support images and the query images pass through the convolution-based embedding module, the bi-similarity module learns feature maps according to two similarity measures of diverse characteristics. In this way, the model is enabled to learn more discriminative and less similarity-biased features from few shots of fine-grained images, such that the model generalization ability can be significantly improved. Through extensive experiments by slightly modifying established metric/similarity based networks, we show that the proposed approach produces a substantial improvement on several fine-grained image benchmark datasets. Codes are available at: https://github.com/spraise/BSNet
1910.07377
Francesco Zola
Francesco Zola, Cristina P\'erez-Sol\'a, Jon Ega\~na Zubia, Maria Eguimendia and Jordi Herrera-Joancomart\'i
Kriptosare.gen, a dockerized Bitcoin testbed: analysis of server performance
9 pages, 4 figures, 1 table, presented during the 10th IFIP International Conference on New Technologies, Mobility and Security (NTMS)
2019 10th IFIP International Conference on New Technologies, Mobility and Security (NTMS) (pp. 1-5). IEEE
10.1109/NTMS.2019.8763809
null
cs.PF cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Bitcoin is a peer-to-peer distributed cryptocurrency system, that keeps all transaction history in a public ledger known as blockchain. The Bitcoin network is implicitly pseudoanonymous and its nodes are controlled by independent entities making network analysis difficult. This calls for the development of a fully controlled testing environment. This paper presents Kriptosare.gen, a dockerized automatized Bitcoin testbed, for deploying full-scale custom Bitcoin networks. The testbed is deployed in a single machine executing four different experiments, each one with different network configuration. We perform a cost analysis to investigate how the resources are related with network parameters and provide experimental data quantifying the amount of computational resources needed to run the different types of simulations. Obtained results demonstrate that it is possible to run the testbed with a configuration similar to a real Bitcoin system.
[ { "created": "Wed, 16 Oct 2019 14:36:30 GMT", "version": "v1" } ]
2019-10-17
[ [ "Zola", "Francesco", "" ], [ "Pérez-Solá", "Cristina", "" ], [ "Zubia", "Jon Egaña", "" ], [ "Eguimendia", "Maria", "" ], [ "Herrera-Joancomartí", "Jordi", "" ] ]
Bitcoin is a peer-to-peer distributed cryptocurrency system, that keeps all transaction history in a public ledger known as blockchain. The Bitcoin network is implicitly pseudoanonymous and its nodes are controlled by independent entities making network analysis difficult. This calls for the development of a fully controlled testing environment. This paper presents Kriptosare.gen, a dockerized automatized Bitcoin testbed, for deploying full-scale custom Bitcoin networks. The testbed is deployed in a single machine executing four different experiments, each one with different network configuration. We perform a cost analysis to investigate how the resources are related with network parameters and provide experimental data quantifying the amount of computational resources needed to run the different types of simulations. Obtained results demonstrate that it is possible to run the testbed with a configuration similar to a real Bitcoin system.
1205.3321
Francesco Scarcello
Gianluigi Greco and Francesco Scarcello
Tree Projections and Structural Decomposition Methods: The Power of Local Consistency and Larger Islands of Tractability
null
null
null
null
cs.DB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Evaluating conjunctive queries and solving constraint satisfaction problems are fundamental problems in database theory and artificial intelligence, respectively. These problems are NP-hard, so that several research efforts have been made in the literature for identifying tractable classes, known as islands of tractability, as well as for devising clever heuristics for solving efficiently real-world instances. Many heuristic approaches are based on enforcing on the given instance a property called local consistency, where (in database terms) each tuple in every query atom matches at least one tuple in every other query atom. Interestingly, it turns out that, for many well-known classes of queries, such as for the acyclic queries, enforcing local consistency is even sufficient to solve the given instance correctly. However, the precise power of such a procedure was unclear, but for some very restricted cases. The paper provides full answers to the long-standing questions about the precise power of algorithms based on enforcing local consistency. The classes of instances where enforcing local consistency turns out to be a correct query-answering procedure are however not efficiently recognizable. In fact, the paper finally focuses on certain subclasses defined in terms of the novel notion of greedy tree projections. These latter classes are shown to be efficiently recognizable and strictly larger than most islands of tractability known so far, both in the general case of tree projections and for specific structural decomposition methods.
[ { "created": "Tue, 15 May 2012 11:00:48 GMT", "version": "v1" }, { "created": "Fri, 28 Dec 2012 18:01:40 GMT", "version": "v2" } ]
2013-01-01
[ [ "Greco", "Gianluigi", "" ], [ "Scarcello", "Francesco", "" ] ]
Evaluating conjunctive queries and solving constraint satisfaction problems are fundamental problems in database theory and artificial intelligence, respectively. These problems are NP-hard, so that several research efforts have been made in the literature for identifying tractable classes, known as islands of tractability, as well as for devising clever heuristics for solving efficiently real-world instances. Many heuristic approaches are based on enforcing on the given instance a property called local consistency, where (in database terms) each tuple in every query atom matches at least one tuple in every other query atom. Interestingly, it turns out that, for many well-known classes of queries, such as for the acyclic queries, enforcing local consistency is even sufficient to solve the given instance correctly. However, the precise power of such a procedure was unclear, but for some very restricted cases. The paper provides full answers to the long-standing questions about the precise power of algorithms based on enforcing local consistency. The classes of instances where enforcing local consistency turns out to be a correct query-answering procedure are however not efficiently recognizable. In fact, the paper finally focuses on certain subclasses defined in terms of the novel notion of greedy tree projections. These latter classes are shown to be efficiently recognizable and strictly larger than most islands of tractability known so far, both in the general case of tree projections and for specific structural decomposition methods.
2401.10447
Yu Yu
Yu Yu, Chao-Han Huck Yang, Tuan Dinh, Sungho Ryu, Jari Kolehmainen, Roger Ren, Denis Filimonov, Prashanth G. Shivakumar, Ankur Gandhe, Ariya Rastow, Jia Xu, Ivan Bulyko, Andreas Stolcke
Investigating Training Strategies and Model Robustness of Low-Rank Adaptation for Language Modeling in Speech Recognition
null
null
null
null
cs.CL cs.AI cs.LG cs.NE cs.SD eess.AS
http://creativecommons.org/licenses/by/4.0/
The use of low-rank adaptation (LoRA) with frozen pretrained language models (PLMs) has become increasing popular as a mainstream, resource-efficient modeling approach for memory-constrained hardware. In this study, we first explore how to enhance model performance by introducing various LoRA training strategies, achieving relative word error rate reductions of 3.50\% on the public Librispeech dataset and of 3.67\% on an internal dataset in the messaging domain. To further characterize the stability of LoRA-based second-pass speech recognition models, we examine robustness against input perturbations. These perturbations are rooted in homophone replacements and a novel metric called N-best Perturbation-based Rescoring Robustness (NPRR), both designed to measure the relative degradation in the performance of rescoring models. Our experimental results indicate that while advanced variants of LoRA, such as dynamic rank-allocated LoRA, lead to performance degradation in $1$-best perturbation, they alleviate the degradation in $N$-best perturbation. This finding is in comparison to fully-tuned models and vanilla LoRA tuning baselines, suggesting that a comprehensive selection is needed when using LoRA-based adaptation for compute-cost savings and robust language modeling.
[ { "created": "Fri, 19 Jan 2024 01:30:16 GMT", "version": "v1" } ]
2024-01-22
[ [ "Yu", "Yu", "" ], [ "Yang", "Chao-Han Huck", "" ], [ "Dinh", "Tuan", "" ], [ "Ryu", "Sungho", "" ], [ "Kolehmainen", "Jari", "" ], [ "Ren", "Roger", "" ], [ "Filimonov", "Denis", "" ], [ "Shivakumar", "Prashanth G.", "" ], [ "Gandhe", "Ankur", "" ], [ "Rastow", "Ariya", "" ], [ "Xu", "Jia", "" ], [ "Bulyko", "Ivan", "" ], [ "Stolcke", "Andreas", "" ] ]
The use of low-rank adaptation (LoRA) with frozen pretrained language models (PLMs) has become increasing popular as a mainstream, resource-efficient modeling approach for memory-constrained hardware. In this study, we first explore how to enhance model performance by introducing various LoRA training strategies, achieving relative word error rate reductions of 3.50\% on the public Librispeech dataset and of 3.67\% on an internal dataset in the messaging domain. To further characterize the stability of LoRA-based second-pass speech recognition models, we examine robustness against input perturbations. These perturbations are rooted in homophone replacements and a novel metric called N-best Perturbation-based Rescoring Robustness (NPRR), both designed to measure the relative degradation in the performance of rescoring models. Our experimental results indicate that while advanced variants of LoRA, such as dynamic rank-allocated LoRA, lead to performance degradation in $1$-best perturbation, they alleviate the degradation in $N$-best perturbation. This finding is in comparison to fully-tuned models and vanilla LoRA tuning baselines, suggesting that a comprehensive selection is needed when using LoRA-based adaptation for compute-cost savings and robust language modeling.
1810.01018
Zhezhi He
Zhezhi He, Deliang Fan
Simultaneously Optimizing Weight and Quantizer of Ternary Neural Network using Truncated Gaussian Approximation
null
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the past years, Deep convolution neural network has achieved great success in many artificial intelligence applications. However, its enormous model size and massive computation cost have become the main obstacle for deployment of such powerful algorithm in the low power and resource-limited mobile systems. As the countermeasure to this problem, deep neural networks with ternarized weights (i.e. -1, 0, +1) have been widely explored to greatly reduce the model size and computational cost, with limited accuracy degradation. In this work, we propose a novel ternarized neural network training method which simultaneously optimizes both weights and quantizer during training, differentiating from prior works. Instead of fixed and uniform weight ternarization, we are the first to incorporate the thresholds of weight ternarization into a closed-form representation using the truncated Gaussian approximation, enabling simultaneous optimization of weights and quantizer through back-propagation training. With both of the first and last layer ternarized, the experiments on the ImageNet classification task show that our ternarized ResNet-18/34/50 only has 3.9/2.52/2.16% accuracy degradation in comparison to the full-precision counterparts.
[ { "created": "Tue, 2 Oct 2018 00:04:20 GMT", "version": "v1" } ]
2018-10-03
[ [ "He", "Zhezhi", "" ], [ "Fan", "Deliang", "" ] ]
In the past years, Deep convolution neural network has achieved great success in many artificial intelligence applications. However, its enormous model size and massive computation cost have become the main obstacle for deployment of such powerful algorithm in the low power and resource-limited mobile systems. As the countermeasure to this problem, deep neural networks with ternarized weights (i.e. -1, 0, +1) have been widely explored to greatly reduce the model size and computational cost, with limited accuracy degradation. In this work, we propose a novel ternarized neural network training method which simultaneously optimizes both weights and quantizer during training, differentiating from prior works. Instead of fixed and uniform weight ternarization, we are the first to incorporate the thresholds of weight ternarization into a closed-form representation using the truncated Gaussian approximation, enabling simultaneous optimization of weights and quantizer through back-propagation training. With both of the first and last layer ternarized, the experiments on the ImageNet classification task show that our ternarized ResNet-18/34/50 only has 3.9/2.52/2.16% accuracy degradation in comparison to the full-precision counterparts.
1703.08803
Arnaud Blouin
Val\'eria Lelli and Arnaud Blouin and Benoit Baudry and Fabien Coulon and Olivier Beaudoux
Automatic Detection of GUI Design Smells: The Case of Blob Listener
null
Proceedings of the 8th ACM SIGCHI Symposium on Engineering Interactive Computing Systems (EICS'16), pp.263-274, 2016
10.1145/2933242.2933260
null
cs.SE cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Graphical User Interfaces (GUIs) intensively rely on event-driven programming: widgets send GUI events, which capture users' interactions, to dedicated objects called controllers. Controllers implement several GUI listeners that handle these events to produce GUI commands. In this work, we conducted an empirical study on 13 large Java Swing open-source software systems. We study to what extent the number of GUI commands that a GUI listener can produce has an impact on the change-and fault-proneness of the GUI listener code. We identify a new type of design smell, called Blob listener that characterizes GUI listeners that can produce more than two GUI commands. We show that 21 % of the analyzed GUI controllers are Blob listeners. We propose a systematic static code analysis procedure that searches for Blob listener that we implement in InspectorGuidget. We conducted experiments on six software systems for which we manually identified 37 instances of Blob listener. InspectorGuidget successfully detected 36 Blob listeners out of 37. The results exhibit a precision of 97.37 % and a recall of 97.59 %. Finally, we propose coding practices to avoid the use of Blob listeners.
[ { "created": "Sun, 26 Mar 2017 10:40:21 GMT", "version": "v1" } ]
2017-03-28
[ [ "Lelli", "Valéria", "" ], [ "Blouin", "Arnaud", "" ], [ "Baudry", "Benoit", "" ], [ "Coulon", "Fabien", "" ], [ "Beaudoux", "Olivier", "" ] ]
Graphical User Interfaces (GUIs) intensively rely on event-driven programming: widgets send GUI events, which capture users' interactions, to dedicated objects called controllers. Controllers implement several GUI listeners that handle these events to produce GUI commands. In this work, we conducted an empirical study on 13 large Java Swing open-source software systems. We study to what extent the number of GUI commands that a GUI listener can produce has an impact on the change-and fault-proneness of the GUI listener code. We identify a new type of design smell, called Blob listener that characterizes GUI listeners that can produce more than two GUI commands. We show that 21 % of the analyzed GUI controllers are Blob listeners. We propose a systematic static code analysis procedure that searches for Blob listener that we implement in InspectorGuidget. We conducted experiments on six software systems for which we manually identified 37 instances of Blob listener. InspectorGuidget successfully detected 36 Blob listeners out of 37. The results exhibit a precision of 97.37 % and a recall of 97.59 %. Finally, we propose coding practices to avoid the use of Blob listeners.
1906.09548
Phuong-Duy Nguyen
Phuong-Duy Nguyen and Vu Nguyen Ha and Long Bao Le
Computation Offloading and Resource Allocation for Backhaul Limited Cooperative MEC Systems
null
null
10.1109/VTCFall.2019.8891244
null
cs.DC cs.NI cs.SY eess.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we jointly optimize computation offloading and resource allocation to minimize the weighted sum of energy consumption of all mobile users in a backhaul limited cooperative MEC system with multiple fog servers. Considering the partial offloading strategy and TDMA transmission at each base station, the underlying optimization problem with constraints on maximum task latency and limited computation resource at mobile users and fog servers is non-convex. We propose to convexify the problem exploiting the relationship among some optimization variables from which an optimal algorithm is proposed to solve the resulting problem. We then present numerical results to demonstrate the significant gains of our proposed design compared to conventional designs without exploiting cooperation among fog servers and a greedy algorithm.
[ { "created": "Sun, 23 Jun 2019 03:40:17 GMT", "version": "v1" } ]
2019-11-12
[ [ "Nguyen", "Phuong-Duy", "" ], [ "Ha", "Vu Nguyen", "" ], [ "Le", "Long Bao", "" ] ]
In this paper, we jointly optimize computation offloading and resource allocation to minimize the weighted sum of energy consumption of all mobile users in a backhaul limited cooperative MEC system with multiple fog servers. Considering the partial offloading strategy and TDMA transmission at each base station, the underlying optimization problem with constraints on maximum task latency and limited computation resource at mobile users and fog servers is non-convex. We propose to convexify the problem exploiting the relationship among some optimization variables from which an optimal algorithm is proposed to solve the resulting problem. We then present numerical results to demonstrate the significant gains of our proposed design compared to conventional designs without exploiting cooperation among fog servers and a greedy algorithm.
1803.07385
Maneet Singh
Maneet Singh, Shruti Nagpal, Mayank Vatsa, Richa Singh
Are you eligible? Predicting adulthood from face images via class specific mean autoencoder
Accepted for publication in Pattern Recognition Letters
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Predicting if a person is an adult or a minor has several applications such as inspecting underage driving, preventing purchase of alcohol and tobacco by minors, and granting restricted access. The challenging nature of this problem arises due to the complex and unique physiological changes that are observed with age progression. This paper presents a novel deep learning based formulation, termed as Class Specific Mean Autoencoder, to learn the intra-class similarity and extract class-specific features. We propose that the feature of a particular class if brought similar/closer to the mean feature of that class can help in learning class-specific representations. The proposed formulation is applied for the task of adulthood classification which predicts whether the given face image is of an adult or not. Experiments are performed on two large databases and the results show that the proposed algorithm yields higher classification accuracy compared to existing algorithms and a Commercial-Off-The-Shelf system.
[ { "created": "Tue, 20 Mar 2018 11:58:40 GMT", "version": "v1" } ]
2018-03-21
[ [ "Singh", "Maneet", "" ], [ "Nagpal", "Shruti", "" ], [ "Vatsa", "Mayank", "" ], [ "Singh", "Richa", "" ] ]
Predicting if a person is an adult or a minor has several applications such as inspecting underage driving, preventing purchase of alcohol and tobacco by minors, and granting restricted access. The challenging nature of this problem arises due to the complex and unique physiological changes that are observed with age progression. This paper presents a novel deep learning based formulation, termed as Class Specific Mean Autoencoder, to learn the intra-class similarity and extract class-specific features. We propose that the feature of a particular class if brought similar/closer to the mean feature of that class can help in learning class-specific representations. The proposed formulation is applied for the task of adulthood classification which predicts whether the given face image is of an adult or not. Experiments are performed on two large databases and the results show that the proposed algorithm yields higher classification accuracy compared to existing algorithms and a Commercial-Off-The-Shelf system.
2308.13976
Yu Wang
Yu Wang, Xin Xin, Zaiqiao Meng, Joemon Jose, Fuli Feng
Label Denoising through Cross-Model Agreement
arXiv admin note: substantial text overlap with arXiv:2105.09605
null
null
null
cs.LG cs.AI
http://creativecommons.org/licenses/by/4.0/
Learning from corrupted labels is very common in real-world machine-learning applications. Memorizing such noisy labels could affect the learning of the model, leading to sub-optimal performances. In this work, we propose a novel framework to learn robust machine-learning models from noisy labels. Through an empirical study, we find that different models make relatively similar predictions on clean examples, while the predictions on noisy examples vary much more across different models. Motivated by this observation, we propose \em denoising with cross-model agreement \em (DeCA) which aims to minimize the KL-divergence between the true label distributions parameterized by two machine learning models while maximizing the likelihood of data observation. We employ the proposed DeCA on both the binary label scenario and the multiple label scenario. For the binary label scenario, we select implicit feedback recommendation as the downstream task and conduct experiments with four state-of-the-art recommendation models on four datasets. For the multiple-label scenario, the downstream application is image classification on two benchmark datasets. Experimental results demonstrate that the proposed methods significantly improve the model performance compared with normal training and other denoising methods on both binary and multiple-label scenarios.
[ { "created": "Sun, 27 Aug 2023 00:31:04 GMT", "version": "v1" }, { "created": "Fri, 1 Sep 2023 07:38:59 GMT", "version": "v2" }, { "created": "Tue, 19 Dec 2023 04:44:35 GMT", "version": "v3" } ]
2023-12-20
[ [ "Wang", "Yu", "" ], [ "Xin", "Xin", "" ], [ "Meng", "Zaiqiao", "" ], [ "Jose", "Joemon", "" ], [ "Feng", "Fuli", "" ] ]
Learning from corrupted labels is very common in real-world machine-learning applications. Memorizing such noisy labels could affect the learning of the model, leading to sub-optimal performances. In this work, we propose a novel framework to learn robust machine-learning models from noisy labels. Through an empirical study, we find that different models make relatively similar predictions on clean examples, while the predictions on noisy examples vary much more across different models. Motivated by this observation, we propose \em denoising with cross-model agreement \em (DeCA) which aims to minimize the KL-divergence between the true label distributions parameterized by two machine learning models while maximizing the likelihood of data observation. We employ the proposed DeCA on both the binary label scenario and the multiple label scenario. For the binary label scenario, we select implicit feedback recommendation as the downstream task and conduct experiments with four state-of-the-art recommendation models on four datasets. For the multiple-label scenario, the downstream application is image classification on two benchmark datasets. Experimental results demonstrate that the proposed methods significantly improve the model performance compared with normal training and other denoising methods on both binary and multiple-label scenarios.
2112.02213
Seira Hidano Dr
Kento Hasegawa, Kazuki Yamashita, Seira Hidano, Kazuhide Fukushima, Kazuo Hashimoto, Nozomu Togawa
Node-wise Hardware Trojan Detection Based on Graph Learning
null
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the fourth industrial revolution, securing the protection of the supply chain has become an ever-growing concern. One such cyber threat is a hardware Trojan (HT), a malicious modification to an IC. HTs are often identified in the hardware manufacturing process, but should be removed earlier, when the design is being specified. Machine learning-based HT detection in gate-level netlists is an efficient approach to identify HTs at the early stage. However, feature-based modeling has limitations in discovering an appropriate set of HT features. We thus propose NHTD-GL in this paper, a novel node-wise HT detection method based on graph learning (GL). Given the formal analysis of HT features obtained from domain knowledge, NHTD-GL bridges the gap between graph representation learning and feature-based HT detection. The experimental results demonstrate that NHTD-GL achieves 0.998 detection accuracy and outperforms state-of-the-art node-wise HT detection methods. NHTD-GL extracts HT features without heuristic feature engineering.
[ { "created": "Sat, 4 Dec 2021 01:34:56 GMT", "version": "v1" }, { "created": "Wed, 16 Mar 2022 01:40:06 GMT", "version": "v2" } ]
2022-03-17
[ [ "Hasegawa", "Kento", "" ], [ "Yamashita", "Kazuki", "" ], [ "Hidano", "Seira", "" ], [ "Fukushima", "Kazuhide", "" ], [ "Hashimoto", "Kazuo", "" ], [ "Togawa", "Nozomu", "" ] ]
In the fourth industrial revolution, securing the protection of the supply chain has become an ever-growing concern. One such cyber threat is a hardware Trojan (HT), a malicious modification to an IC. HTs are often identified in the hardware manufacturing process, but should be removed earlier, when the design is being specified. Machine learning-based HT detection in gate-level netlists is an efficient approach to identify HTs at the early stage. However, feature-based modeling has limitations in discovering an appropriate set of HT features. We thus propose NHTD-GL in this paper, a novel node-wise HT detection method based on graph learning (GL). Given the formal analysis of HT features obtained from domain knowledge, NHTD-GL bridges the gap between graph representation learning and feature-based HT detection. The experimental results demonstrate that NHTD-GL achieves 0.998 detection accuracy and outperforms state-of-the-art node-wise HT detection methods. NHTD-GL extracts HT features without heuristic feature engineering.
1908.09758
Ton Chanh Le
Wei-Ngan Chin, Ton Chanh Le, Shengchao Qin
Automated Verification of CountDownLatch
null
null
null
null
cs.PL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The CountDownLatch (CDL) is a versatile concurrency mechanism that was first introduced in Java 5, and is also being adopted into C++ and C#. Its usage allows one or more threads to exchange resources and synchronize by waiting for some tasks to be completed before others can proceed. In this paper, we propose a new framework for verifying the correctness of concurrent applications that use CDLs. Our framework is built on top of two existing mechanisms, concurrent abstract predicate and fictional separation logic, with some enhancements such as borrowed heap and thread local abstraction. In addition, we propose a new inconsistency detection mechanism based on waits-for relation to guarantee deadlock freedom. Prior concurrency verification works have mostly focused on data-race freedom. As a practical proof of concept, we have implemented this new specification and verification mechanism for CDL in a new tool, called HIPCAP, on top of an existing HIP verifier. We have used this new tool to successfully verify various use cases for CDL.
[ { "created": "Mon, 26 Aug 2019 15:58:27 GMT", "version": "v1" } ]
2019-08-27
[ [ "Chin", "Wei-Ngan", "" ], [ "Le", "Ton Chanh", "" ], [ "Qin", "Shengchao", "" ] ]
The CountDownLatch (CDL) is a versatile concurrency mechanism that was first introduced in Java 5, and is also being adopted into C++ and C#. Its usage allows one or more threads to exchange resources and synchronize by waiting for some tasks to be completed before others can proceed. In this paper, we propose a new framework for verifying the correctness of concurrent applications that use CDLs. Our framework is built on top of two existing mechanisms, concurrent abstract predicate and fictional separation logic, with some enhancements such as borrowed heap and thread local abstraction. In addition, we propose a new inconsistency detection mechanism based on waits-for relation to guarantee deadlock freedom. Prior concurrency verification works have mostly focused on data-race freedom. As a practical proof of concept, we have implemented this new specification and verification mechanism for CDL in a new tool, called HIPCAP, on top of an existing HIP verifier. We have used this new tool to successfully verify various use cases for CDL.
2206.06029
Nils Feldhus
Nils Feldhus, Ajay Madhavan Ravichandran, Sebastian M\"oller
Mediators: Conversational Agents Explaining NLP Model Behavior
Accepted to IJCAI-ECAI 2022 Workshop on Explainable Artificial Intelligence (XAI)
null
null
null
cs.CL cs.AI cs.HC cs.LG
http://creativecommons.org/licenses/by/4.0/
The human-centric explainable artificial intelligence (HCXAI) community has raised the need for framing the explanation process as a conversation between human and machine. In this position paper, we establish desiderata for Mediators, text-based conversational agents which are capable of explaining the behavior of neural models interactively using natural language. From the perspective of natural language processing (NLP) research, we engineer a blueprint of such a Mediator for the task of sentiment analysis and assess how far along current research is on the path towards dialogue-based explanations.
[ { "created": "Mon, 13 Jun 2022 10:31:18 GMT", "version": "v1" } ]
2022-06-14
[ [ "Feldhus", "Nils", "" ], [ "Ravichandran", "Ajay Madhavan", "" ], [ "Möller", "Sebastian", "" ] ]
The human-centric explainable artificial intelligence (HCXAI) community has raised the need for framing the explanation process as a conversation between human and machine. In this position paper, we establish desiderata for Mediators, text-based conversational agents which are capable of explaining the behavior of neural models interactively using natural language. From the perspective of natural language processing (NLP) research, we engineer a blueprint of such a Mediator for the task of sentiment analysis and assess how far along current research is on the path towards dialogue-based explanations.
1805.06652
Jonny O'Dwyer
Jonny O'Dwyer, Niall Murray, Ronan Flynn
Affective computing using speech and eye gaze: a review and bimodal system proposal for continuous affect prediction
Submitted to International Journal of Human-Computer Studies
null
null
null
cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Speech has been a widely used modality in the field of affective computing. Recently however, there has been a growing interest in the use of multi-modal affective computing systems. These multi-modal systems incorporate both verbal and non-verbal features for affective computing tasks. Such multi-modal affective computing systems are advantageous for emotion assessment of individuals in audio-video communication environments such as teleconferencing, healthcare, and education. From a review of the literature, the use of eye gaze features extracted from video is a modality that has remained largely unexploited for continuous affect prediction. This work presents a review of the literature within the emotion classification and continuous affect prediction sub-fields of affective computing for both speech and eye gaze modalities. Additionally, continuous affect prediction experiments using speech and eye gaze modalities are presented. A baseline system is proposed using open source software, the performance of which is assessed on a publicly available audio-visual corpus. Further system performance is assessed in a cross-corpus and cross-lingual experiment. The experimental results suggest that eye gaze is an effective supportive modality for speech when used in a bimodal continuous affect prediction system. The addition of eye gaze to speech in a simple feature fusion framework yields a prediction improvement of 6.13% for valence and 1.62% for arousal.
[ { "created": "Thu, 17 May 2018 08:34:49 GMT", "version": "v1" } ]
2018-05-18
[ [ "O'Dwyer", "Jonny", "" ], [ "Murray", "Niall", "" ], [ "Flynn", "Ronan", "" ] ]
Speech has been a widely used modality in the field of affective computing. Recently however, there has been a growing interest in the use of multi-modal affective computing systems. These multi-modal systems incorporate both verbal and non-verbal features for affective computing tasks. Such multi-modal affective computing systems are advantageous for emotion assessment of individuals in audio-video communication environments such as teleconferencing, healthcare, and education. From a review of the literature, the use of eye gaze features extracted from video is a modality that has remained largely unexploited for continuous affect prediction. This work presents a review of the literature within the emotion classification and continuous affect prediction sub-fields of affective computing for both speech and eye gaze modalities. Additionally, continuous affect prediction experiments using speech and eye gaze modalities are presented. A baseline system is proposed using open source software, the performance of which is assessed on a publicly available audio-visual corpus. Further system performance is assessed in a cross-corpus and cross-lingual experiment. The experimental results suggest that eye gaze is an effective supportive modality for speech when used in a bimodal continuous affect prediction system. The addition of eye gaze to speech in a simple feature fusion framework yields a prediction improvement of 6.13% for valence and 1.62% for arousal.
2309.08590
Raphael Reinauer
Raphael Reinauer and Patrick Simianer and Kaden Uhlig and Johannes E. M. Mosig and Joern Wuebker
Neural Machine Translation Models Can Learn to be Few-shot Learners
null
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
The emergent ability of Large Language Models to use a small number of examples to learn to perform in novel domains and tasks, also called in-context learning (ICL). In this work, we show that a much smaller model can be trained to perform ICL by fine-tuning towards a specialized training objective, exemplified on the task of domain adaptation for neural machine translation. With this capacity for ICL, the model can take advantage of relevant few-shot examples to adapt its output towards the domain. We compare the quality of this domain adaptation to traditional supervised techniques and ICL with a 40B-parameter Large Language Model. Our approach allows efficient batch inference on a mix of domains and outperforms state-of-the-art baselines in terms of both translation quality and immediate adaptation rate, i.e. the ability to reproduce a specific term after being shown a single example.
[ { "created": "Fri, 15 Sep 2023 17:44:21 GMT", "version": "v1" } ]
2023-09-18
[ [ "Reinauer", "Raphael", "" ], [ "Simianer", "Patrick", "" ], [ "Uhlig", "Kaden", "" ], [ "Mosig", "Johannes E. M.", "" ], [ "Wuebker", "Joern", "" ] ]
The emergent ability of Large Language Models to use a small number of examples to learn to perform in novel domains and tasks, also called in-context learning (ICL). In this work, we show that a much smaller model can be trained to perform ICL by fine-tuning towards a specialized training objective, exemplified on the task of domain adaptation for neural machine translation. With this capacity for ICL, the model can take advantage of relevant few-shot examples to adapt its output towards the domain. We compare the quality of this domain adaptation to traditional supervised techniques and ICL with a 40B-parameter Large Language Model. Our approach allows efficient batch inference on a mix of domains and outperforms state-of-the-art baselines in terms of both translation quality and immediate adaptation rate, i.e. the ability to reproduce a specific term after being shown a single example.
2312.06680
Ruichen Zhang
Ruichen Zhang
Perceptual Similarity guidance and text guidance optimization for Editing Real Images using Guided Diffusion Models
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
When using a diffusion model for image editing, there are times when the modified image can differ greatly from the source. To address this, we apply a dual-guidance approach to maintain high fidelity to the original in areas that are not altered. First, we employ text-guided optimization, using text embeddings to direct latent space and classifier-free guidance. Second, we use perceptual similarity guidance, optimizing latent vectors with posterior sampling via Tweedie formula during the reverse process. This method ensures the realistic rendering of both the edited elements and the preservation of the unedited parts of the original image.
[ { "created": "Sat, 9 Dec 2023 02:55:35 GMT", "version": "v1" } ]
2023-12-13
[ [ "Zhang", "Ruichen", "" ] ]
When using a diffusion model for image editing, there are times when the modified image can differ greatly from the source. To address this, we apply a dual-guidance approach to maintain high fidelity to the original in areas that are not altered. First, we employ text-guided optimization, using text embeddings to direct latent space and classifier-free guidance. Second, we use perceptual similarity guidance, optimizing latent vectors with posterior sampling via Tweedie formula during the reverse process. This method ensures the realistic rendering of both the edited elements and the preservation of the unedited parts of the original image.
2112.11270
Attila Klenik
Attila Klenik and Andr\'as Pataricza
Adding semantics to measurements: Ontology-guided, systematic performance analysis
36 pages
null
null
null
cs.PF
http://creativecommons.org/licenses/by/4.0/
The design and operation of modern software systems exhibit a shift towards virtualization, containerization and service-based orchestration. Performance capacity engineering and resource utilization tuning become priority requirements in such environments. Measurement-based performance evaluation is the cornerstone of capacity engineering and designing for performance. Moreover, the increasing complexity of systems necessitates rigorous performance analysis approaches. However, empirical performance analysis lacks sophisticated model-based support similar to the functional design of the system. The paper proposes an ontology-based approach for facilitating and guiding the empirical evaluation throughout its various steps. Hyperledger Fabric (HLF), an open-source blockchain platform by the Linux Foundation, is modelled and evaluated as a pilot example of the approach, using the standard TPC-C performance benchmark workload.
[ { "created": "Tue, 21 Dec 2021 14:50:25 GMT", "version": "v1" } ]
2021-12-22
[ [ "Klenik", "Attila", "" ], [ "Pataricza", "András", "" ] ]
The design and operation of modern software systems exhibit a shift towards virtualization, containerization and service-based orchestration. Performance capacity engineering and resource utilization tuning become priority requirements in such environments. Measurement-based performance evaluation is the cornerstone of capacity engineering and designing for performance. Moreover, the increasing complexity of systems necessitates rigorous performance analysis approaches. However, empirical performance analysis lacks sophisticated model-based support similar to the functional design of the system. The paper proposes an ontology-based approach for facilitating and guiding the empirical evaluation throughout its various steps. Hyperledger Fabric (HLF), an open-source blockchain platform by the Linux Foundation, is modelled and evaluated as a pilot example of the approach, using the standard TPC-C performance benchmark workload.
1812.07205
Xavier Bost
Xavier Bost (LIA), Georges Linar\`es (LIA), Serigne Gueye (LIA)
Audiovisual speaker diarization of TV series
null
2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Apr 2015, Brisbane, Australia. IEEE, pp.4799-4803, 2015
10.1109/ICASSP.2015.7178882
null
cs.MM cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Speaker diarization may be difficult to achieve when applied to narrative films, where speakers usually talk in adverse acoustic conditions: background music, sound effects, wide variations in intonation may hide the inter-speaker variability and make audio-based speaker diarization approaches error prone. On the other hand, such fictional movies exhibit strong regularities at the image level, particularly within dialogue scenes. In this paper, we propose to perform speaker diarization within dialogue scenes of TV series by combining the audio and video modalities: speaker diarization is first performed by using each modality, the two resulting partitions of the instance set are then optimally matched, before the remaining instances, corresponding to cases of disagreement between both modalities, are finally processed. The results obtained by applying such a multi-modal approach to fictional films turn out to outperform those obtained by relying on a single modality.
[ { "created": "Tue, 18 Dec 2018 07:21:36 GMT", "version": "v1" }, { "created": "Sat, 29 Dec 2018 14:59:28 GMT", "version": "v2" } ]
2019-01-01
[ [ "Bost", "Xavier", "", "LIA" ], [ "Linarès", "Georges", "", "LIA" ], [ "Gueye", "Serigne", "", "LIA" ] ]
Speaker diarization may be difficult to achieve when applied to narrative films, where speakers usually talk in adverse acoustic conditions: background music, sound effects, wide variations in intonation may hide the inter-speaker variability and make audio-based speaker diarization approaches error prone. On the other hand, such fictional movies exhibit strong regularities at the image level, particularly within dialogue scenes. In this paper, we propose to perform speaker diarization within dialogue scenes of TV series by combining the audio and video modalities: speaker diarization is first performed by using each modality, the two resulting partitions of the instance set are then optimally matched, before the remaining instances, corresponding to cases of disagreement between both modalities, are finally processed. The results obtained by applying such a multi-modal approach to fictional films turn out to outperform those obtained by relying on a single modality.
2008.08547
Harish Tayyar Madabushi PhD
Wah Meng Lim and Harish Tayyar Madabushi
UoB at SemEval-2020 Task 12: Boosting BERT with Corpus Level Information
null
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Pre-trained language model word representation, such as BERT, have been extremely successful in several Natural Language Processing tasks significantly improving on the state-of-the-art. This can largely be attributed to their ability to better capture semantic information contained within a sentence. Several tasks, however, can benefit from information available at a corpus level, such as Term Frequency-Inverse Document Frequency (TF-IDF). In this work we test the effectiveness of integrating this information with BERT on the task of identifying abuse on social media and show that integrating this information with BERT does indeed significantly improve performance. We participate in Sub-Task A (abuse detection) wherein we achieve a score within two points of the top performing team and in Sub-Task B (target detection) wherein we are ranked 4 of the 44 participating teams.
[ { "created": "Wed, 19 Aug 2020 16:47:15 GMT", "version": "v1" } ]
2020-08-20
[ [ "Lim", "Wah Meng", "" ], [ "Madabushi", "Harish Tayyar", "" ] ]
Pre-trained language model word representation, such as BERT, have been extremely successful in several Natural Language Processing tasks significantly improving on the state-of-the-art. This can largely be attributed to their ability to better capture semantic information contained within a sentence. Several tasks, however, can benefit from information available at a corpus level, such as Term Frequency-Inverse Document Frequency (TF-IDF). In this work we test the effectiveness of integrating this information with BERT on the task of identifying abuse on social media and show that integrating this information with BERT does indeed significantly improve performance. We participate in Sub-Task A (abuse detection) wherein we achieve a score within two points of the top performing team and in Sub-Task B (target detection) wherein we are ranked 4 of the 44 participating teams.
2304.03384
Tianyi Zhang
Tianyi Zhang and Matthew Johnson-Roberson
Beyond NeRF Underwater: Learning Neural Reflectance Fields for True Color Correction of Marine Imagery
Robotics and Automation Letters (RA-L) VOL. 8, NO. 10, OCTOBER 2023
null
10.1109/LRA.2023.3307287
null
cs.CV cs.RO
http://creativecommons.org/licenses/by-nc-sa/4.0/
Underwater imagery often exhibits distorted coloration as a result of light-water interactions, which complicates the study of benthic environments in marine biology and geography. In this research, we propose an algorithm to restore the true color (albedo) in underwater imagery by jointly learning the effects of the medium and neural scene representations. Our approach models water effects as a combination of light attenuation with distance and backscattered light. The proposed neural scene representation is based on a neural reflectance field model, which learns albedos, normals, and volume densities of the underwater environment. We introduce a logistic regression model to separate water from the scene and apply distinct light physics during training. Our method avoids the need to estimate complex backscatter effects in water by employing several approximations, enhancing sampling efficiency and numerical stability during training. The proposed technique integrates underwater light effects into a volume rendering framework with end-to-end differentiability. Experimental results on both synthetic and real-world data demonstrate that our method effectively restores true color from underwater imagery, outperforming existing approaches in terms of color consistency.
[ { "created": "Thu, 6 Apr 2023 21:29:34 GMT", "version": "v1" }, { "created": "Wed, 30 Aug 2023 22:20:50 GMT", "version": "v2" } ]
2023-09-01
[ [ "Zhang", "Tianyi", "" ], [ "Johnson-Roberson", "Matthew", "" ] ]
Underwater imagery often exhibits distorted coloration as a result of light-water interactions, which complicates the study of benthic environments in marine biology and geography. In this research, we propose an algorithm to restore the true color (albedo) in underwater imagery by jointly learning the effects of the medium and neural scene representations. Our approach models water effects as a combination of light attenuation with distance and backscattered light. The proposed neural scene representation is based on a neural reflectance field model, which learns albedos, normals, and volume densities of the underwater environment. We introduce a logistic regression model to separate water from the scene and apply distinct light physics during training. Our method avoids the need to estimate complex backscatter effects in water by employing several approximations, enhancing sampling efficiency and numerical stability during training. The proposed technique integrates underwater light effects into a volume rendering framework with end-to-end differentiability. Experimental results on both synthetic and real-world data demonstrate that our method effectively restores true color from underwater imagery, outperforming existing approaches in terms of color consistency.
2401.13311
Rohan Wadhawan
Rohan Wadhawan, Hritik Bansal, Kai-Wei Chang, Nanyun Peng
ConTextual: Evaluating Context-Sensitive Text-Rich Visual Reasoning in Large Multimodal Models
null
PMLR 235:49733-49787, 2024
null
null
cs.CV cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Many real-world tasks require an agent to reason jointly over text and visual objects, (e.g., navigating in public spaces), which we refer to as context-sensitive text-rich visual reasoning. Specifically, these tasks require an understanding of the context in which the text interacts with visual elements within an image. However, there is a lack of existing datasets to benchmark the state-of-the-art multimodal models' capability on context-sensitive text-rich visual reasoning. In this paper, we introduce ConTextual, a novel dataset featuring human-crafted instructions that require context-sensitive reasoning for text-rich images. We conduct experiments to assess the performance of 14 foundation models (GPT-4V, Gemini-Pro-Vision, LLaVA-Next) and establish a human performance baseline. Further, we perform human evaluations of the model responses and observe a significant performance gap of 30.8% between GPT-4V (the current best-performing Large Multimodal Model) and human performance. Our fine-grained analysis reveals that GPT-4V encounters difficulties interpreting time-related data and infographics. However, it demonstrates proficiency in comprehending abstract visual contexts such as memes and quotes. Finally, our qualitative analysis uncovers various factors contributing to poor performance including lack of precise visual perception and hallucinations. Our dataset, code, and leaderboard can be found on the project page https://con-textual.github.io/
[ { "created": "Wed, 24 Jan 2024 09:07:11 GMT", "version": "v1" }, { "created": "Sun, 16 Jun 2024 00:38:24 GMT", "version": "v2" }, { "created": "Tue, 16 Jul 2024 03:36:29 GMT", "version": "v3" } ]
2024-07-30
[ [ "Wadhawan", "Rohan", "" ], [ "Bansal", "Hritik", "" ], [ "Chang", "Kai-Wei", "" ], [ "Peng", "Nanyun", "" ] ]
Many real-world tasks require an agent to reason jointly over text and visual objects, (e.g., navigating in public spaces), which we refer to as context-sensitive text-rich visual reasoning. Specifically, these tasks require an understanding of the context in which the text interacts with visual elements within an image. However, there is a lack of existing datasets to benchmark the state-of-the-art multimodal models' capability on context-sensitive text-rich visual reasoning. In this paper, we introduce ConTextual, a novel dataset featuring human-crafted instructions that require context-sensitive reasoning for text-rich images. We conduct experiments to assess the performance of 14 foundation models (GPT-4V, Gemini-Pro-Vision, LLaVA-Next) and establish a human performance baseline. Further, we perform human evaluations of the model responses and observe a significant performance gap of 30.8% between GPT-4V (the current best-performing Large Multimodal Model) and human performance. Our fine-grained analysis reveals that GPT-4V encounters difficulties interpreting time-related data and infographics. However, it demonstrates proficiency in comprehending abstract visual contexts such as memes and quotes. Finally, our qualitative analysis uncovers various factors contributing to poor performance including lack of precise visual perception and hallucinations. Our dataset, code, and leaderboard can be found on the project page https://con-textual.github.io/
2002.11433
Chunhua Shen
Yifan Liu, Chunhua Shen, Changqian Yu, Jingdong Wang
Efficient Semantic Video Segmentation with Per-frame Inference
Accepted to Proc. Eur. Conf. Computer Vision (ECCV), 2020
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
For semantic segmentation, most existing real-time deep models trained with each frame independently may produce inconsistent results for a video sequence. Advanced methods take into considerations the correlations in the video sequence, e.g., by propagating the results to the neighboring frames using optical flow, or extracting the frame representations with other frames, which may lead to inaccurate results or unbalanced latency. In this work, we process efficient semantic video segmentation in a per-frame fashion during the inference process. Different from previous per-frame models, we explicitly consider the temporal consistency among frames as extra constraints during the training process and embed the temporal consistency into the segmentation network. Therefore, in the inference process, we can process each frame independently with no latency, and improve the temporal consistency with no extra computational cost and post-processing. We employ compact models for real-time execution. To narrow the performance gap between compact models and large models, new knowledge distillation methods are designed. Our results outperform previous keyframe based methods with a better trade-off between the accuracy and the inference speed on popular benchmarks, including the Cityscapes and Camvid. The temporal consistency is also improved compared with corresponding baselines which are trained with each frame independently. Code is available at: https://tinyurl.com/segment-video
[ { "created": "Wed, 26 Feb 2020 12:24:32 GMT", "version": "v1" }, { "created": "Fri, 17 Jul 2020 12:57:29 GMT", "version": "v2" } ]
2020-07-20
[ [ "Liu", "Yifan", "" ], [ "Shen", "Chunhua", "" ], [ "Yu", "Changqian", "" ], [ "Wang", "Jingdong", "" ] ]
For semantic segmentation, most existing real-time deep models trained with each frame independently may produce inconsistent results for a video sequence. Advanced methods take into considerations the correlations in the video sequence, e.g., by propagating the results to the neighboring frames using optical flow, or extracting the frame representations with other frames, which may lead to inaccurate results or unbalanced latency. In this work, we process efficient semantic video segmentation in a per-frame fashion during the inference process. Different from previous per-frame models, we explicitly consider the temporal consistency among frames as extra constraints during the training process and embed the temporal consistency into the segmentation network. Therefore, in the inference process, we can process each frame independently with no latency, and improve the temporal consistency with no extra computational cost and post-processing. We employ compact models for real-time execution. To narrow the performance gap between compact models and large models, new knowledge distillation methods are designed. Our results outperform previous keyframe based methods with a better trade-off between the accuracy and the inference speed on popular benchmarks, including the Cityscapes and Camvid. The temporal consistency is also improved compared with corresponding baselines which are trained with each frame independently. Code is available at: https://tinyurl.com/segment-video
1710.00112
Sayed Hadi Hashemi
Faraz Faghri, Sayed Hadi Hashemi, Mohammad Babaeizadeh, Mike A. Nalls, Saurabh Sinha, Roy H. Campbell
Toward Scalable Machine Learning and Data Mining: the Bioinformatics Case
null
null
null
null
cs.DC cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In an effort to overcome the data deluge in computational biology and bioinformatics and to facilitate bioinformatics research in the era of big data, we identify some of the most influential algorithms that have been widely used in the bioinformatics community. These top data mining and machine learning algorithms cover classification, clustering, regression, graphical model-based learning, and dimensionality reduction. The goal of this study is to guide the focus of scalable computing experts in the endeavor of applying new storage and scalable computation designs to bioinformatics algorithms that merit their attention most, following the engineering maxim of "optimize the common case".
[ { "created": "Fri, 29 Sep 2017 22:29:19 GMT", "version": "v1" } ]
2017-10-03
[ [ "Faghri", "Faraz", "" ], [ "Hashemi", "Sayed Hadi", "" ], [ "Babaeizadeh", "Mohammad", "" ], [ "Nalls", "Mike A.", "" ], [ "Sinha", "Saurabh", "" ], [ "Campbell", "Roy H.", "" ] ]
In an effort to overcome the data deluge in computational biology and bioinformatics and to facilitate bioinformatics research in the era of big data, we identify some of the most influential algorithms that have been widely used in the bioinformatics community. These top data mining and machine learning algorithms cover classification, clustering, regression, graphical model-based learning, and dimensionality reduction. The goal of this study is to guide the focus of scalable computing experts in the endeavor of applying new storage and scalable computation designs to bioinformatics algorithms that merit their attention most, following the engineering maxim of "optimize the common case".
2301.12467
Julio Hurtado
Julio Hurtado and Dario Salvati and Rudy Semola and Mattia Bosio and Vincenzo Lomonaco
Continual Learning for Predictive Maintenance: Overview and Challenges
null
null
10.1016/j.iswa.2023.200251
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
Deep learning techniques have become one of the main propellers for solving engineering problems effectively and efficiently. For instance, Predictive Maintenance methods have been used to improve predictions of when maintenance is needed on different machines and operative contexts. However, deep learning methods are not without limitations, as these models are normally trained on a fixed distribution that only reflects the current state of the problem. Due to internal or external factors, the state of the problem can change, and the performance decreases due to the lack of generalization and adaptation. Contrary to this stationary training set, real-world applications change their environments constantly, creating the need to constantly adapt the model to evolving scenarios. To aid in this endeavor, Continual Learning methods propose ways to constantly adapt prediction models and incorporate new knowledge after deployment. Despite the advantages of these techniques, there are still challenges to applying them to real-world problems. In this work, we present a brief introduction to predictive maintenance, non-stationary environments, and continual learning, together with an extensive review of the current state of applying continual learning in real-world applications and specifically in predictive maintenance. We then discuss the current challenges of both predictive maintenance and continual learning, proposing future directions at the intersection of both areas. Finally, we propose a novel way to create benchmarks that favor the application of continuous learning methods in more realistic environments, giving specific examples of predictive maintenance.
[ { "created": "Sun, 29 Jan 2023 15:32:53 GMT", "version": "v1" }, { "created": "Thu, 29 Jun 2023 08:55:12 GMT", "version": "v2" } ]
2023-06-30
[ [ "Hurtado", "Julio", "" ], [ "Salvati", "Dario", "" ], [ "Semola", "Rudy", "" ], [ "Bosio", "Mattia", "" ], [ "Lomonaco", "Vincenzo", "" ] ]
Deep learning techniques have become one of the main propellers for solving engineering problems effectively and efficiently. For instance, Predictive Maintenance methods have been used to improve predictions of when maintenance is needed on different machines and operative contexts. However, deep learning methods are not without limitations, as these models are normally trained on a fixed distribution that only reflects the current state of the problem. Due to internal or external factors, the state of the problem can change, and the performance decreases due to the lack of generalization and adaptation. Contrary to this stationary training set, real-world applications change their environments constantly, creating the need to constantly adapt the model to evolving scenarios. To aid in this endeavor, Continual Learning methods propose ways to constantly adapt prediction models and incorporate new knowledge after deployment. Despite the advantages of these techniques, there are still challenges to applying them to real-world problems. In this work, we present a brief introduction to predictive maintenance, non-stationary environments, and continual learning, together with an extensive review of the current state of applying continual learning in real-world applications and specifically in predictive maintenance. We then discuss the current challenges of both predictive maintenance and continual learning, proposing future directions at the intersection of both areas. Finally, we propose a novel way to create benchmarks that favor the application of continuous learning methods in more realistic environments, giving specific examples of predictive maintenance.