id: string, length 9–10
submitter: string, length 1–64
authors: string, length 4–20.7k
title: string, length 4–246
comments: string, length 1–523
journal-ref: string, length 4–404
doi: string, length 11–153
report-no: string, length 2–254
categories: string, length 5–98
license: string, 9 classes
orig_abstract: string, length 14–3.35k
versions: list, length 1–60
update_date: string, length 10
authors_parsed: list, length 1–1.35k
abstract: string, length 11–3.34k
1805.02426
Qiaosheng Zhang
Qiaosheng Zhang, Mayank Bakshi, Sidharth Jaggi
Covert Communication over Adversarially Jammed Channels
Accepted for publication in IEEE Transactions on Information Theory
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Suppose that a transmitter Alice potentially wishes to communicate with a receiver Bob over an adversarially jammed binary channel. An active adversary James eavesdrops on their communication over a binary symmetric channel (BSC(q)), and may maliciously flip (up to) a certain fraction p of their transmitted bits based on his observations. We consider a setting where the communication must be simultaneously covert and reliable, i.e., James should be unable to accurately distinguish whether or not Alice is communicating, while Bob should be able to correctly recover Alice's message with high probability regardless of the adversarial jamming strategy. We show that, unlike the setting with passive adversaries, covert communication against active adversaries requires Alice and Bob to have a shared key (of length at least Omega(log n)) even when Bob has a better channel than James. We present lower and upper bounds on the information-theoretically optimal throughput as a function of the channel parameters, the desired level of covertness, and the amount of shared key available. These bounds match for a wide range of parameters of interest. We also develop a computationally efficient coding scheme (based on concatenated codes) when the amount of shared key available is $\Omega(\sqrt{n} \log n)$, and further show that this scheme can be implemented with a much smaller amount of shared key when the adversary is assumed to be computationally bounded.
[ { "created": "Mon, 7 May 2018 10:04:38 GMT", "version": "v1" }, { "created": "Tue, 17 Sep 2019 07:05:45 GMT", "version": "v2" }, { "created": "Thu, 24 Jun 2021 01:53:36 GMT", "version": "v3" } ]
2021-06-25
[ [ "Zhang", "Qiaosheng", "" ], [ "Bakshi", "Mayank", "" ], [ "Jaggi", "Sidharth", "" ] ]
Suppose that a transmitter Alice potentially wishes to communicate with a receiver Bob over an adversarially jammed binary channel. An active adversary James eavesdrops on their communication over a binary symmetric channel (BSC(q)), and may maliciously flip (up to) a certain fraction p of their transmitted bits based on his observations. We consider a setting where the communication must be simultaneously covert and reliable, i.e., James should be unable to accurately distinguish whether or not Alice is communicating, while Bob should be able to correctly recover Alice's message with high probability regardless of the adversarial jamming strategy. We show that, unlike the setting with passive adversaries, covert communication against active adversaries requires Alice and Bob to have a shared key (of length at least Omega(log n)) even when Bob has a better channel than James. We present lower and upper bounds on the information-theoretically optimal throughput as a function of the channel parameters, the desired level of covertness, and the amount of shared key available. These bounds match for a wide range of parameters of interest. We also develop a computationally efficient coding scheme (based on concatenated codes) when the amount of shared key available is $\Omega(\sqrt{n} \log n)$, and further show that this scheme can be implemented with a much smaller amount of shared key when the adversary is assumed to be computationally bounded.
2311.09553
Anubha Kabra
Anubha Kabra, Sanketh Rangreji, Yash Mathur, Aman Madaan, Emmy Liu, Graham Neubig
Program-Aided Reasoners (better) Know What They Know
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Prior work shows that program-aided reasoning, in which large language models (LLMs) are combined with programs written in programming languages such as Python, can significantly improve accuracy on various reasoning tasks. However, while accuracy is essential, it is also important for such reasoners to "know what they know", which can be quantified through the calibration of the model. In this paper, we compare the calibration of Program Aided Language Models (PAL) and text-based Chain-of-thought (COT) prompting techniques over 5 datasets and 2 model types: LLaMA models and OpenAI models. Our results indicate that PAL leads to improved calibration in 75% of the instances. Our analysis uncovers that prompting styles producing less diverse generations also yield better-calibrated results, and thus we also experiment with inducing lower generation diversity using temperature scaling and find that for certain temperatures, PAL is not only more accurate but is also more calibrated than COT. Overall, we demonstrate that, in the majority of cases, program-aided reasoners better know what they know than text-based counterparts.
[ { "created": "Thu, 16 Nov 2023 04:17:49 GMT", "version": "v1" } ]
2023-11-17
[ [ "Kabra", "Anubha", "" ], [ "Rangreji", "Sanketh", "" ], [ "Mathur", "Yash", "" ], [ "Madaan", "Aman", "" ], [ "Liu", "Emmy", "" ], [ "Neubig", "Graham", "" ] ]
Prior work shows that program-aided reasoning, in which large language models (LLMs) are combined with programs written in programming languages such as Python, can significantly improve accuracy on various reasoning tasks. However, while accuracy is essential, it is also important for such reasoners to "know what they know", which can be quantified through the calibration of the model. In this paper, we compare the calibration of Program Aided Language Models (PAL) and text-based Chain-of-thought (COT) prompting techniques over 5 datasets and 2 model types: LLaMA models and OpenAI models. Our results indicate that PAL leads to improved calibration in 75% of the instances. Our analysis uncovers that prompting styles producing less diverse generations also yield better-calibrated results, and thus we also experiment with inducing lower generation diversity using temperature scaling and find that for certain temperatures, PAL is not only more accurate but is also more calibrated than COT. Overall, we demonstrate that, in the majority of cases, program-aided reasoners better know what they know than text-based counterparts.
2401.11032
Catherine Han
Catherine Han and Anne Li and Deepak Kumar and Zakir Durumeric
PressProtect: Helping Journalists Navigate Social Media in the Face of Online Harassment
null
null
null
null
cs.CY cs.HC
http://creativecommons.org/licenses/by-nc-sa/4.0/
Social media has become a critical tool for journalists to disseminate their work, engage with their audience, and connect with sources. Unfortunately, journalists also regularly endure significant online harassment on social media platforms, ranging from personal attacks to doxxing to threats of physical harm. In this paper, we seek to understand how we can make social media usable for journalists who face constant digital harassment. To begin, we conduct a set of need-finding interviews to understand where existing platform tools and newsroom resources fall short in adequately protecting journalists. We map journalists' unmet needs to concrete design goals, which we use to build PressProtect, an interface that provides journalists greater agency over engaging with readers on Twitter/X. Through user testing with eight journalists, we evaluate PressProtect and find that participants felt it effectively protected them against harassment and could also generalize to serve other visible and vulnerable groups. We conclude with a discussion of our findings and recommendations for social platforms hoping to build defensive defaults for journalists facing online harassment.
[ { "created": "Fri, 19 Jan 2024 21:16:41 GMT", "version": "v1" } ]
2024-01-23
[ [ "Han", "Catherine", "" ], [ "Li", "Anne", "" ], [ "Kumar", "Deepak", "" ], [ "Durumeric", "Zakir", "" ] ]
Social media has become a critical tool for journalists to disseminate their work, engage with their audience, and connect with sources. Unfortunately, journalists also regularly endure significant online harassment on social media platforms, ranging from personal attacks to doxxing to threats of physical harm. In this paper, we seek to understand how we can make social media usable for journalists who face constant digital harassment. To begin, we conduct a set of need-finding interviews to understand where existing platform tools and newsroom resources fall short in adequately protecting journalists. We map journalists' unmet needs to concrete design goals, which we use to build PressProtect, an interface that provides journalists greater agency over engaging with readers on Twitter/X. Through user testing with eight journalists, we evaluate PressProtect and find that participants felt it effectively protected them against harassment and could also generalize to serve other visible and vulnerable groups. We conclude with a discussion of our findings and recommendations for social platforms hoping to build defensive defaults for journalists facing online harassment.
1904.01922
Bernhard G\"ade
Bernhard G\"ade, Ali Bereyhi, Saba Asaad, Ralf R. M\"uller
A Fair Comparison Between Spatial Modulation and Antenna Selection in Massive MIMO Systems
null
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Both antenna selection and spatial modulation allow for low-complexity MIMO transmitters when the number of RF chains is much lower than the number of transmit antennas. In this manuscript, we present a quantitative performance comparison between these two approaches by taking into account implementational restrictions, such as antenna switching. We consider a band-limited MIMO system, for which the pulse shape is designed such that the out-of-band emission satisfies a desired spectral mask. The bit error rate is determined for this system, considering antenna selection and spatial modulation. The results show that for any array size at the transmit and receive sides, antenna selection outperforms spatial modulation as long as the power efficiency is smaller than a certain threshold level. Beyond this threshold, spatial modulation starts to perform better. Our investigations show that the threshold takes smaller values as the number of receive antennas grows large. This indicates that spatial modulation is an effective technique for uplink transmission in massive MIMO systems.
[ { "created": "Wed, 3 Apr 2019 11:29:01 GMT", "version": "v1" } ]
2019-04-04
[ [ "Gäde", "Bernhard", "" ], [ "Bereyhi", "Ali", "" ], [ "Asaad", "Saba", "" ], [ "Müller", "Ralf R.", "" ] ]
Both antenna selection and spatial modulation allow for low-complexity MIMO transmitters when the number of RF chains is much lower than the number of transmit antennas. In this manuscript, we present a quantitative performance comparison between these two approaches by taking into account implementational restrictions, such as antenna switching. We consider a band-limited MIMO system, for which the pulse shape is designed such that the out-of-band emission satisfies a desired spectral mask. The bit error rate is determined for this system, considering antenna selection and spatial modulation. The results show that for any array size at the transmit and receive sides, antenna selection outperforms spatial modulation as long as the power efficiency is smaller than a certain threshold level. Beyond this threshold, spatial modulation starts to perform better. Our investigations show that the threshold takes smaller values as the number of receive antennas grows large. This indicates that spatial modulation is an effective technique for uplink transmission in massive MIMO systems.
1812.10078
Zachary Pardos
Weijie Jiang, Zachary A. Pardos, Qiang Wei
Goal-based Course Recommendation
null
null
null
null
cs.AI cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With cross-disciplinary academic interests increasing and academic advising resources over capacity, the importance of exploring data-assisted methods to support student decision making has never been higher. We build on the findings and methodologies of a quickly developing literature around prediction and recommendation in higher education and develop a novel recurrent neural network-based recommendation system for suggesting courses to help students prepare for target courses of interest, personalized to their estimated prior knowledge background and zone of proximal development. We validate the model using tests of grade prediction and the ability to recover prerequisite relationships articulated by the university. In the third validation, we run the fully personalized recommendation for students the semester before taking a historically difficult course and observe differential overlap with our would-be suggestions. While not proof of causal effectiveness, these three evaluation perspectives on the performance of the goal-based model build confidence and bring us one step closer to deployment of this personalized course preparation affordance in the wild.
[ { "created": "Tue, 25 Dec 2018 09:30:46 GMT", "version": "v1" } ]
2018-12-27
[ [ "Jiang", "Weijie", "" ], [ "Pardos", "Zachary A.", "" ], [ "Wei", "Qiang", "" ] ]
With cross-disciplinary academic interests increasing and academic advising resources over capacity, the importance of exploring data-assisted methods to support student decision making has never been higher. We build on the findings and methodologies of a quickly developing literature around prediction and recommendation in higher education and develop a novel recurrent neural network-based recommendation system for suggesting courses to help students prepare for target courses of interest, personalized to their estimated prior knowledge background and zone of proximal development. We validate the model using tests of grade prediction and the ability to recover prerequisite relationships articulated by the university. In the third validation, we run the fully personalized recommendation for students the semester before taking a historically difficult course and observe differential overlap with our would-be suggestions. While not proof of causal effectiveness, these three evaluation perspectives on the performance of the goal-based model build confidence and bring us one step closer to deployment of this personalized course preparation affordance in the wild.
1910.00887
Benjamin Bergougnoux
Benjamin Bergougnoux, Charis Papadopoulos and Jan Arne Telle
Node Multiway Cut and Subset Feedback Vertex Set on Graphs of Bounded Mim-width
null
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The two weighted graph problems Node Multiway Cut (NMC) and Subset Feedback Vertex Set (SFVS) both ask for a vertex set of minimum total weight, that for NMC disconnects a given set of terminals, and for SFVS intersects all cycles containing a vertex of a given set. We design a meta-algorithm that solves both problems in time $2^{O(rw^3)}\cdot n^{4}$, $2^{O(q^2\log(q))}\cdot n^{4}$, and $n^{O(k^2)}$ where $rw$ is the rank-width, $q$ the $\mathbb{Q}$-rank-width, and $k$ the mim-width of a given decomposition. This answers in the affirmative an open question raised by Jaffke et al. (Algorithmica, 2019) concerning an XP algorithm for SFVS parameterized by mim-width. By a unified algorithm, this solves both problems in polynomial time on the following graph classes: Interval, Permutation, and Bi-Interval graphs, Circular Arc and Circular Permutation graphs, Convex graphs, $k$-Polygon, Dilworth-$k$ and Co-$k$-Degenerate graphs for fixed $k$; and also on Leaf Power graphs if a leaf root is given as input, on $H$-Graphs for fixed $H$ if an $H$-representation is given as input, and on arbitrary powers of graphs in all the above classes. Prior to our results, only SFVS was known to be tractable, and only on Interval and Permutation graphs, whereas all other results are new.
[ { "created": "Wed, 2 Oct 2019 11:45:52 GMT", "version": "v1" }, { "created": "Thu, 3 Oct 2019 09:23:29 GMT", "version": "v2" }, { "created": "Mon, 7 Oct 2019 11:33:46 GMT", "version": "v3" }, { "created": "Tue, 7 Jan 2020 14:45:29 GMT", "version": "v4" }, { "created": "Tue, 3 Mar 2020 10:32:58 GMT", "version": "v5" }, { "created": "Tue, 22 Sep 2020 11:27:13 GMT", "version": "v6" }, { "created": "Mon, 23 Aug 2021 08:57:10 GMT", "version": "v7" }, { "created": "Mon, 17 Jan 2022 10:12:55 GMT", "version": "v8" } ]
2022-01-19
[ [ "Bergougnoux", "Benjamin", "" ], [ "Papadopoulos", "Charis", "" ], [ "Telle", "Jan Arne", "" ] ]
The two weighted graph problems Node Multiway Cut (NMC) and Subset Feedback Vertex Set (SFVS) both ask for a vertex set of minimum total weight, that for NMC disconnects a given set of terminals, and for SFVS intersects all cycles containing a vertex of a given set. We design a meta-algorithm that solves both problems in time $2^{O(rw^3)}\cdot n^{4}$, $2^{O(q^2\log(q))}\cdot n^{4}$, and $n^{O(k^2)}$ where $rw$ is the rank-width, $q$ the $\mathbb{Q}$-rank-width, and $k$ the mim-width of a given decomposition. This answers in the affirmative an open question raised by Jaffke et al. (Algorithmica, 2019) concerning an XP algorithm for SFVS parameterized by mim-width. By a unified algorithm, this solves both problems in polynomial time on the following graph classes: Interval, Permutation, and Bi-Interval graphs, Circular Arc and Circular Permutation graphs, Convex graphs, $k$-Polygon, Dilworth-$k$ and Co-$k$-Degenerate graphs for fixed $k$; and also on Leaf Power graphs if a leaf root is given as input, on $H$-Graphs for fixed $H$ if an $H$-representation is given as input, and on arbitrary powers of graphs in all the above classes. Prior to our results, only SFVS was known to be tractable, and only on Interval and Permutation graphs, whereas all other results are new.
1809.04041
Marco Tulio Valente
Jailton Coelho, Marco Tulio Valente, Luciana L. Silva, Emad Shihab
Identifying Unmaintained Projects in GitHub
Accepted at 12th International Symposium on Empirical Software Engineering and Measurement (ESEM), 10 pages, 2018
null
10.1145/3239235.3240501
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Background: Open source software is increasingly important in modern software development. However, there is also a growing concern about the sustainability of such projects, which are usually managed by a small number of developers, frequently working as volunteers. Aims: In this paper, we propose an approach to identify GitHub projects that are not actively maintained. Our goal is to alert users about the risks of using these projects and possibly motivate other developers to assume the maintenance of the projects. Method: We train machine learning models to identify unmaintained or sparsely maintained projects, based on a set of features about project activity (commits, forks, issues, etc.). We empirically validate the best-performing model with the principal developers of 129 GitHub projects. Results: The proposed machine learning approach has a precision of 80%, based on the feedback of real open source developers; and a recall of 96%. We also show that our approach can be used to assess the risks of projects becoming unmaintained. Conclusions: The model proposed in this paper can be used by open source users and developers to identify GitHub projects that are not actively maintained anymore.
[ { "created": "Tue, 11 Sep 2018 17:15:56 GMT", "version": "v1" } ]
2019-01-31
[ [ "Coelho", "Jailton", "" ], [ "Valente", "Marco Tulio", "" ], [ "Silva", "Luciana L.", "" ], [ "Shihab", "Emad", "" ] ]
Background: Open source software is increasingly important in modern software development. However, there is also a growing concern about the sustainability of such projects, which are usually managed by a small number of developers, frequently working as volunteers. Aims: In this paper, we propose an approach to identify GitHub projects that are not actively maintained. Our goal is to alert users about the risks of using these projects and possibly motivate other developers to assume the maintenance of the projects. Method: We train machine learning models to identify unmaintained or sparsely maintained projects, based on a set of features about project activity (commits, forks, issues, etc.). We empirically validate the best-performing model with the principal developers of 129 GitHub projects. Results: The proposed machine learning approach has a precision of 80%, based on the feedback of real open source developers; and a recall of 96%. We also show that our approach can be used to assess the risks of projects becoming unmaintained. Conclusions: The model proposed in this paper can be used by open source users and developers to identify GitHub projects that are not actively maintained anymore.
2404.13630
Keqin Li
Keqin Li, Armando Zhu, Peng Zhao, Jintong Song, Jiabei Liu
Utilizing Deep Learning to Optimize Software Development Processes
null
null
10.5281/zenodo.11004006
JCTAM-2024042100074
cs.SE cs.AI cs.CL cs.LG
http://creativecommons.org/licenses/by/4.0/
This study explores the application of deep learning technologies in software development processes, particularly in automating code reviews, error prediction, and test generation to enhance code quality and development efficiency. Through a series of empirical studies, experimental groups using deep learning tools and control groups using traditional methods were compared in terms of code error rates and project completion times. The results demonstrated significant improvements in the experimental group, validating the effectiveness of deep learning technologies. The research also discusses potential optimization points, methodologies, and technical challenges of deep learning in software development, as well as how to integrate these technologies into existing software development workflows.
[ { "created": "Sun, 21 Apr 2024 12:06:05 GMT", "version": "v1" }, { "created": "Fri, 3 May 2024 13:07:18 GMT", "version": "v2" } ]
2024-05-06
[ [ "Li", "Keqin", "" ], [ "Zhu", "Armando", "" ], [ "Zhao", "Peng", "" ], [ "Song", "Jintong", "" ], [ "Liu", "Jiabei", "" ] ]
This study explores the application of deep learning technologies in software development processes, particularly in automating code reviews, error prediction, and test generation to enhance code quality and development efficiency. Through a series of empirical studies, experimental groups using deep learning tools and control groups using traditional methods were compared in terms of code error rates and project completion times. The results demonstrated significant improvements in the experimental group, validating the effectiveness of deep learning technologies. The research also discusses potential optimization points, methodologies, and technical challenges of deep learning in software development, as well as how to integrate these technologies into existing software development workflows.
2404.10076
Andrew Boutros
Andrew Boutros, Aman Arora, Vaughn Betz
Field-Programmable Gate Array Architecture for Deep Learning: Survey & Future Directions
null
null
null
null
cs.AR
http://creativecommons.org/licenses/by-nc-nd/4.0/
Deep learning (DL) is becoming the cornerstone of numerous applications both in datacenters and at the edge. Specialized hardware is often necessary to meet the performance requirements of state-of-the-art DL models, but the rapid pace of change in DL models and the wide variety of systems integrating DL make it impossible to create custom computer chips for all but the largest markets. Field-programmable gate arrays (FPGAs) present a unique blend of reprogrammability and direct hardware execution that make them suitable for accelerating DL inference. They offer the ability to customize processing pipelines and memory hierarchies to achieve lower latency and higher energy efficiency compared to general-purpose CPUs and GPUs, at a fraction of the development time and cost of custom chips. Their diverse high-speed IOs also enable directly interfacing the FPGA to the network and/or a variety of external sensors, making them suitable for both datacenter and edge use cases. As DL has become an ever more important workload, FPGA architectures are evolving to enable higher DL performance. In this article, we survey both academic and industrial FPGA architecture enhancements for DL. First, we give a brief introduction on the basics of FPGA architecture and how its components lead to strengths and weaknesses for DL applications. Next, we discuss different styles of DL inference accelerators on FPGA, ranging from model-specific dataflow styles to software-programmable overlay styles. We survey DL-specific enhancements to traditional FPGA building blocks such as logic blocks, arithmetic circuitry, and on-chip memories, as well as new in-fabric DL-specialized blocks for accelerating tensor computations. Finally, we discuss hybrid devices that combine processors and coarse-grained accelerator blocks with FPGA-like interconnect and networks-on-chip, and highlight promising future research directions.
[ { "created": "Mon, 15 Apr 2024 18:28:10 GMT", "version": "v1" } ]
2024-04-17
[ [ "Boutros", "Andrew", "" ], [ "Arora", "Aman", "" ], [ "Betz", "Vaughn", "" ] ]
Deep learning (DL) is becoming the cornerstone of numerous applications both in datacenters and at the edge. Specialized hardware is often necessary to meet the performance requirements of state-of-the-art DL models, but the rapid pace of change in DL models and the wide variety of systems integrating DL make it impossible to create custom computer chips for all but the largest markets. Field-programmable gate arrays (FPGAs) present a unique blend of reprogrammability and direct hardware execution that make them suitable for accelerating DL inference. They offer the ability to customize processing pipelines and memory hierarchies to achieve lower latency and higher energy efficiency compared to general-purpose CPUs and GPUs, at a fraction of the development time and cost of custom chips. Their diverse high-speed IOs also enable directly interfacing the FPGA to the network and/or a variety of external sensors, making them suitable for both datacenter and edge use cases. As DL has become an ever more important workload, FPGA architectures are evolving to enable higher DL performance. In this article, we survey both academic and industrial FPGA architecture enhancements for DL. First, we give a brief introduction on the basics of FPGA architecture and how its components lead to strengths and weaknesses for DL applications. Next, we discuss different styles of DL inference accelerators on FPGA, ranging from model-specific dataflow styles to software-programmable overlay styles. We survey DL-specific enhancements to traditional FPGA building blocks such as logic blocks, arithmetic circuitry, and on-chip memories, as well as new in-fabric DL-specialized blocks for accelerating tensor computations. Finally, we discuss hybrid devices that combine processors and coarse-grained accelerator blocks with FPGA-like interconnect and networks-on-chip, and highlight promising future research directions.
2301.09728
Arian Askari
Arian Askari, Amin Abolghasemi, Gabriella Pasi, Wessel Kraaij, Suzan Verberne
Injecting the BM25 Score as Text Improves BERT-Based Re-rankers
Accepted at ECIR 2023
null
null
null
cs.IR
http://creativecommons.org/licenses/by/4.0/
In this paper we propose a novel approach for combining first-stage lexical retrieval models and Transformer-based re-rankers: we inject the relevance score of the lexical model as a token in the middle of the input of the cross-encoder re-ranker. It was shown in prior work that interpolation between the relevance score of lexical and BERT-based re-rankers may not consistently result in higher effectiveness. Our idea is motivated by the finding that BERT models can capture numeric information. We compare several representations of the BM25 score and inject them as text in the input of four different cross-encoders. We additionally analyze the effect for different query types, and investigate the effectiveness of our method for capturing exact matching relevance. Evaluation on the MSMARCO Passage collection and the TREC DL collections shows that the proposed method significantly improves over all cross-encoder re-rankers as well as the common interpolation methods. We show that the improvement is consistent for all query types. We also find an improvement in exact matching capabilities over both BM25 and the cross-encoders. Our findings indicate that cross-encoder re-rankers can efficiently be improved without additional computational burden and extra steps in the pipeline by explicitly adding the output of the first-stage ranker to the model input, and this effect is robust for different models and query types.
[ { "created": "Mon, 23 Jan 2023 21:41:25 GMT", "version": "v1" } ]
2023-01-25
[ [ "Askari", "Arian", "" ], [ "Abolghasemi", "Amin", "" ], [ "Pasi", "Gabriella", "" ], [ "Kraaij", "Wessel", "" ], [ "Verberne", "Suzan", "" ] ]
In this paper we propose a novel approach for combining first-stage lexical retrieval models and Transformer-based re-rankers: we inject the relevance score of the lexical model as a token in the middle of the input of the cross-encoder re-ranker. It was shown in prior work that interpolation between the relevance score of lexical and BERT-based re-rankers may not consistently result in higher effectiveness. Our idea is motivated by the finding that BERT models can capture numeric information. We compare several representations of the BM25 score and inject them as text in the input of four different cross-encoders. We additionally analyze the effect for different query types, and investigate the effectiveness of our method for capturing exact matching relevance. Evaluation on the MSMARCO Passage collection and the TREC DL collections shows that the proposed method significantly improves over all cross-encoder re-rankers as well as the common interpolation methods. We show that the improvement is consistent for all query types. We also find an improvement in exact matching capabilities over both BM25 and the cross-encoders. Our findings indicate that cross-encoder re-rankers can efficiently be improved without additional computational burden and extra steps in the pipeline by explicitly adding the output of the first-stage ranker to the model input, and this effect is robust for different models and query types.
1901.08840
Kees Middelburg
J. A. Bergstra, C. A. Middelburg
Program algebra for Turing-machine programs
19 pages, Sect. 2--4 are largely shortened versions of Sect. 2--4 of arXiv:1808.04264, which, in turn, draw from preliminary sections of several earlier papers; 21 pages, some remarks in Sect.1 and Sect.10 added
Scientific Annals of Computer Science 29(2):113--139 (2019)
10.7561/SACS.2019.2.113
null
cs.PL cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents an algebraic theory of instruction sequences with instructions for Turing tapes as basic instructions, the behaviours produced by the instruction sequences concerned under execution, and the interaction between such behaviours and Turing tapes provided by an execution environment. This theory provides a setting for the development of theory in areas such as computability and computational complexity that distinguishes itself by offering the possibility of equational reasoning and being more general than the setting provided by a known version of the Turing-machine model of computation. The theory is essentially an instantiation of a parameterized algebraic theory which is the basis of a line of research in which issues relating to a wide variety of subjects from computer science have been rigorously investigated by thinking in terms of instruction sequences.
[ { "created": "Fri, 25 Jan 2019 11:39:36 GMT", "version": "v1" }, { "created": "Sat, 7 Dec 2019 12:39:19 GMT", "version": "v2" } ]
2020-01-06
[ [ "Bergstra", "J. A.", "" ], [ "Middelburg", "C. A.", "" ] ]
This paper presents an algebraic theory of instruction sequences with instructions for Turing tapes as basic instructions, the behaviours produced by the instruction sequences concerned under execution, and the interaction between such behaviours and Turing tapes provided by an execution environment. This theory provides a setting for the development of theory in areas such as computability and computational complexity that distinguishes itself by offering the possibility of equational reasoning and by being more general than the setting provided by a known version of the Turing-machine model of computation. The theory is essentially an instantiation of a parameterized algebraic theory which is the basis of a line of research in which issues relating to a wide variety of subjects from computer science have been rigorously investigated by thinking in terms of instruction sequences.
2303.00957
Changyeon Kim
Changyeon Kim, Jongjin Park, Jinwoo Shin, Honglak Lee, Pieter Abbeel, Kimin Lee
Preference Transformer: Modeling Human Preferences using Transformers for RL
Project website: https://sites.google.com/view/preference-transformer. Accepted to ICLR 2023
null
null
null
cs.LG cs.AI cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Preference-based reinforcement learning (RL) provides a framework to train agents using human preferences between two behaviors. However, preference-based RL has been challenging to scale since it requires a large amount of human feedback to learn a reward function aligned with human intent. In this paper, we present Preference Transformer, a neural architecture that models human preferences using transformers. Unlike prior approaches, which assume human judgment is based on Markovian rewards that contribute equally to the decision, we introduce a new preference model based on the weighted sum of non-Markovian rewards. We then design the proposed preference model using a transformer architecture that stacks causal and bidirectional self-attention layers. We demonstrate that Preference Transformer can solve a variety of control tasks using real human preferences, while prior approaches fail to work. We also show that Preference Transformer can induce a well-specified reward and attend to critical events in the trajectory by automatically capturing the temporal dependencies in human decision-making. Code is available on the project website: https://sites.google.com/view/preference-transformer.
[ { "created": "Thu, 2 Mar 2023 04:24:29 GMT", "version": "v1" } ]
2023-03-03
[ [ "Kim", "Changyeon", "" ], [ "Park", "Jongjin", "" ], [ "Shin", "Jinwoo", "" ], [ "Lee", "Honglak", "" ], [ "Abbeel", "Pieter", "" ], [ "Lee", "Kimin", "" ] ]
Preference-based reinforcement learning (RL) provides a framework to train agents using human preferences between two behaviors. However, preference-based RL has been challenging to scale since it requires a large amount of human feedback to learn a reward function aligned with human intent. In this paper, we present Preference Transformer, a neural architecture that models human preferences using transformers. Unlike prior approaches, which assume human judgment is based on Markovian rewards that contribute equally to the decision, we introduce a new preference model based on the weighted sum of non-Markovian rewards. We then design the proposed preference model using a transformer architecture that stacks causal and bidirectional self-attention layers. We demonstrate that Preference Transformer can solve a variety of control tasks using real human preferences, while prior approaches fail to work. We also show that Preference Transformer can induce a well-specified reward and attend to critical events in the trajectory by automatically capturing the temporal dependencies in human decision-making. Code is available on the project website: https://sites.google.com/view/preference-transformer.
2212.14116
Chuhao Qin
Chuhao Qin and Evangelos Pournaras
Coordination of Drones at Scale: Decentralized Energy-aware Swarm Intelligence for Spatio-temporal Sensing
14 pages, 8 figures, 6 tables. Accepted in Transportation Research Part C: Emerging Technologies
null
null
null
cs.RO cs.MA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Smart City applications, such as traffic monitoring and disaster response, often use swarms of intelligent and cooperative drones to efficiently collect sensor data over different areas of interest and time spans. However, when the required sensing becomes spatio-temporally large and varying, a collective arrangement of sensing tasks among a large number of battery-restricted and distributed drones is challenging. To address this problem, this paper introduces a scalable and energy-aware model for the planning and coordination of spatio-temporal sensing. The coordination model is built upon a decentralized multi-agent collective learning algorithm (EPOS) to ensure the scalability, resilience, and flexibility that existing approaches lack. Experimental results illustrate the outstanding performance of the proposed method compared to state-of-the-art methods. Analytical results contribute a deeper understanding of how the coordinated mobility of drones influences sensing performance. This novel coordination solution is applied to traffic monitoring using real-world data to demonstrate a $46.45\%$ more accurate and $2.88\%$ more efficient detection of vehicles as drones become a scarce resource.
[ { "created": "Wed, 28 Dec 2022 22:46:50 GMT", "version": "v1" }, { "created": "Wed, 11 Oct 2023 21:14:38 GMT", "version": "v2" } ]
2023-10-13
[ [ "Qin", "Chuhao", "" ], [ "Pournaras", "Evangelos", "" ] ]
Smart City applications, such as traffic monitoring and disaster response, often use swarms of intelligent and cooperative drones to efficiently collect sensor data over different areas of interest and time spans. However, when the required sensing becomes spatio-temporally large and varying, a collective arrangement of sensing tasks among a large number of battery-restricted and distributed drones is challenging. To address this problem, this paper introduces a scalable and energy-aware model for the planning and coordination of spatio-temporal sensing. The coordination model is built upon a decentralized multi-agent collective learning algorithm (EPOS) to ensure the scalability, resilience, and flexibility that existing approaches lack. Experimental results illustrate the outstanding performance of the proposed method compared to state-of-the-art methods. Analytical results contribute a deeper understanding of how the coordinated mobility of drones influences sensing performance. This novel coordination solution is applied to traffic monitoring using real-world data to demonstrate a $46.45\%$ more accurate and $2.88\%$ more efficient detection of vehicles as drones become a scarce resource.
2211.05637
Rui Xue
Rui Xue (State Key Laboratory of Information Security, Institute of Information Engineering, CAS)
Description Graphs, Matrix-Power Stabilizations and Graph Isomorphism in Polynomial Time
In this version, some related references are added. An explicit proof to Theorem 9 is sketched in the appendix. The proofs in Section 7-10 to our main results are rewritten with WL process for consistency
null
null
null
cs.CC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
It is confirmed in this work that graph isomorphism can be tested in polynomial time, which resolves a longstanding problem in the theory of computation. The contributions are in three phases as follows. 1. A description graph $\tilde{A}$ of a given graph $A$ is introduced so that labels on the vertices and edges of $\tilde{A}$ indicate the identical or different numbers of walks of any sort and any length between vertices in $A$. Three processes are then developed to obtain description graphs. They reveal relations among matrix powers, spectral decomposition and adjoint matrices, which is of independent interest. 2. We show that the stabilization of description graphs can be implemented via matrix-power stabilization, a new approach to distinguishing vertices and edges of graphs. The approach is proven to be equivalent, in the partition of vertices, to the Weisfeiler-Lehman (WL for short) process. The specific Square-and-Substitution (SaS) process is more succinct than the WL process. The vertex partitions of our stable graphs are proven to be \emph{strongly} equitable partitions, which is important in the proofs of our main conclusion. Some properties of stable graphs are also explored. 3. A class of graphs named binding graphs is proposed and proven to be graph-isomorphism complete. The vertex partition of the stable graph of a binding graph is the automorphism partition, which allows us to confirm that the graph-isomorphism problem is in the complexity class $\mathtt{P}$. Since the binding graph of a graph is so simple in construction, our approach can be readily applied in practice. Some examples are supplied as illustrations, and a brief suggestion for implementing the SaS process is given in the appendix.
[ { "created": "Wed, 9 Nov 2022 05:52:55 GMT", "version": "v1" }, { "created": "Wed, 16 Nov 2022 11:20:40 GMT", "version": "v2" }, { "created": "Thu, 1 Dec 2022 09:14:24 GMT", "version": "v3" }, { "created": "Sat, 31 Dec 2022 09:04:15 GMT", "version": "v4" }, { "created": "Tue, 24 Jan 2023 06:27:55 GMT", "version": "v5" } ]
2023-01-25
[ [ "Xue", "Rui", "", "State Key Laboratory of Information Security, Institute of\n Information Engineering, CAS" ] ]
It is confirmed in this work that graph isomorphism can be tested in polynomial time, which resolves a longstanding problem in the theory of computation. The contributions are in three phases as follows. 1. A description graph $\tilde{A}$ of a given graph $A$ is introduced so that labels on the vertices and edges of $\tilde{A}$ indicate the identical or different numbers of walks of any sort and any length between vertices in $A$. Three processes are then developed to obtain description graphs. They reveal relations among matrix powers, spectral decomposition and adjoint matrices, which is of independent interest. 2. We show that the stabilization of description graphs can be implemented via matrix-power stabilization, a new approach to distinguishing vertices and edges of graphs. The approach is proven to be equivalent, in the partition of vertices, to the Weisfeiler-Lehman (WL for short) process. The specific Square-and-Substitution (SaS) process is more succinct than the WL process. The vertex partitions of our stable graphs are proven to be \emph{strongly} equitable partitions, which is important in the proofs of our main conclusion. Some properties of stable graphs are also explored. 3. A class of graphs named binding graphs is proposed and proven to be graph-isomorphism complete. The vertex partition of the stable graph of a binding graph is the automorphism partition, which allows us to confirm that the graph-isomorphism problem is in the complexity class $\mathtt{P}$. Since the binding graph of a graph is so simple in construction, our approach can be readily applied in practice. Some examples are supplied as illustrations, and a brief suggestion for implementing the SaS process is given in the appendix.
2004.04081
Andrew Hobbs
Andrew Hobbs, Stacey Svetlichnaya
Satellite-based Prediction of Forage Conditions for Livestock in Northern Kenya
Paper presented at the ICLR 2020 Workshop on Computer Vision for Agriculture (CV4A)
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper introduces the first dataset of satellite images labeled with forage quality by on-the-ground experts and provides proof of concept for applying computer vision methods to index-based drought insurance. We also present the results of a collaborative benchmark tool used to crowdsource an accurate machine learning model on the dataset. Our methods significantly outperform the existing technology for an insurance program in Northern Kenya, suggesting that a computer vision-based approach could substantially benefit pastoralists, whose exposure to droughts is severe and worsening with climate change.
[ { "created": "Wed, 8 Apr 2020 16:03:50 GMT", "version": "v1" }, { "created": "Thu, 23 Apr 2020 18:12:23 GMT", "version": "v2" } ]
2020-04-27
[ [ "Hobbs", "Andrew", "" ], [ "Svetlichnaya", "Stacey", "" ] ]
This paper introduces the first dataset of satellite images labeled with forage quality by on-the-ground experts and provides proof of concept for applying computer vision methods to index-based drought insurance. We also present the results of a collaborative benchmark tool used to crowdsource an accurate machine learning model on the dataset. Our methods significantly outperform the existing technology for an insurance program in Northern Kenya, suggesting that a computer vision-based approach could substantially benefit pastoralists, whose exposure to droughts is severe and worsening with climate change.
2205.02767
Jintang Li
Zulun Zhu, Jiaying Peng, Jintang Li, Liang Chen, Qi Yu, Siqiang Luo
Spiking Graph Convolutional Networks
Accepted by IJCAI 2022; Code available at https://github.com/ZulunZhu/SpikingGCN
null
null
null
cs.LG cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Graph Convolutional Networks (GCNs) achieve impressive performance due to their remarkable representation ability in learning graph information. However, GCNs, when implemented as a deep network, require expensive computational power, making them difficult to deploy on battery-powered devices. In contrast, Spiking Neural Networks (SNNs), which perform a bio-fidelity inference process, offer an energy-efficient neural architecture. In this work, we propose SpikingGCN, an end-to-end framework that aims to integrate the embedding of GCNs with the bio-fidelity characteristics of SNNs. The original graph data are encoded into spike trains based on the incorporation of graph convolution. We further model biological information processing by utilizing a fully connected layer combined with neuron nodes. In a wide range of scenarios (e.g., citation networks, image graph classification, and recommender systems), our experimental results show that the proposed method can gain competitive performance against state-of-the-art approaches. Furthermore, we show that SpikingGCN on a neuromorphic chip can bring a clear advantage in energy efficiency to graph data analysis, which demonstrates its great potential for constructing environment-friendly machine learning models.
[ { "created": "Thu, 5 May 2022 16:44:36 GMT", "version": "v1" }, { "created": "Tue, 2 Aug 2022 15:56:51 GMT", "version": "v2" } ]
2022-08-03
[ [ "Zhu", "Zulun", "" ], [ "Peng", "Jiaying", "" ], [ "Li", "Jintang", "" ], [ "Chen", "Liang", "" ], [ "Yu", "Qi", "" ], [ "Luo", "Siqiang", "" ] ]
Graph Convolutional Networks (GCNs) achieve impressive performance due to their remarkable representation ability in learning graph information. However, GCNs, when implemented as a deep network, require expensive computational power, making them difficult to deploy on battery-powered devices. In contrast, Spiking Neural Networks (SNNs), which perform a bio-fidelity inference process, offer an energy-efficient neural architecture. In this work, we propose SpikingGCN, an end-to-end framework that aims to integrate the embedding of GCNs with the bio-fidelity characteristics of SNNs. The original graph data are encoded into spike trains based on the incorporation of graph convolution. We further model biological information processing by utilizing a fully connected layer combined with neuron nodes. In a wide range of scenarios (e.g., citation networks, image graph classification, and recommender systems), our experimental results show that the proposed method can gain competitive performance against state-of-the-art approaches. Furthermore, we show that SpikingGCN on a neuromorphic chip can bring a clear advantage in energy efficiency to graph data analysis, which demonstrates its great potential for constructing environment-friendly machine learning models.
2406.15862
Moreno La Quatra
Moreno La Quatra, Alkis Koudounas, Elena Baralis, Sabato Marco Siniscalchi
Speech Analysis of Language Varieties in Italy
Accepted to LREC-COLING 2024 - https://aclanthology.org/2024.lrec-main.1317/
null
null
null
cs.CL
http://creativecommons.org/licenses/by-nc-sa/4.0/
Italy exhibits rich linguistic diversity across its territory due to the distinct regional languages spoken in different areas. Recent advances in self-supervised learning provide new opportunities to analyze Italy's linguistic varieties using speech data alone. This includes the potential to leverage representations learned from large amounts of data to better examine nuances between closely related linguistic varieties. In this study, we focus on automatically identifying the geographic region of origin of speech samples drawn from Italy's diverse language varieties. We leverage self-supervised learning models to tackle this task and analyze differences and similarities between Italy's regional languages. In doing so, we also seek to uncover new insights into the relationships among these diverse yet closely related varieties, which may help linguists understand their interconnected evolution and regional development over time and space. To improve the discriminative ability of learned representations, we evaluate several supervised contrastive learning objectives, both as pre-training steps and additional fine-tuning objectives. Experimental evidence shows that pre-trained self-supervised models can effectively identify regions from speech recordings. Additionally, incorporating contrastive objectives during fine-tuning improves classification accuracy and yields embeddings that distinctly separate regional varieties, demonstrating the value of combining self-supervised pre-training and contrastive learning for this task.
[ { "created": "Sat, 22 Jun 2024 14:19:51 GMT", "version": "v1" } ]
2024-06-25
[ [ "La Quatra", "Moreno", "" ], [ "Koudounas", "Alkis", "" ], [ "Baralis", "Elena", "" ], [ "Siniscalchi", "Sabato Marco", "" ] ]
Italy exhibits rich linguistic diversity across its territory due to the distinct regional languages spoken in different areas. Recent advances in self-supervised learning provide new opportunities to analyze Italy's linguistic varieties using speech data alone. This includes the potential to leverage representations learned from large amounts of data to better examine nuances between closely related linguistic varieties. In this study, we focus on automatically identifying the geographic region of origin of speech samples drawn from Italy's diverse language varieties. We leverage self-supervised learning models to tackle this task and analyze differences and similarities between Italy's regional languages. In doing so, we also seek to uncover new insights into the relationships among these diverse yet closely related varieties, which may help linguists understand their interconnected evolution and regional development over time and space. To improve the discriminative ability of learned representations, we evaluate several supervised contrastive learning objectives, both as pre-training steps and additional fine-tuning objectives. Experimental evidence shows that pre-trained self-supervised models can effectively identify regions from speech recordings. Additionally, incorporating contrastive objectives during fine-tuning improves classification accuracy and yields embeddings that distinctly separate regional varieties, demonstrating the value of combining self-supervised pre-training and contrastive learning for this task.
2305.03322
Gilles Dowek
Gilles Dowek (DEDUCTEAM)
Skolemization in Simple Type Theory: the Logical and the Theoretical Points of View
null
null
null
null
cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Peter Andrews proposed, in 1971, the problem of finding an analog of the Skolem theorem for Simple Type Theory. A first idea led to a naive rule that worked only for Simple Type Theory with the axiom of choice, and the general case was only solved, more than ten years later, by Dale Miller. More recently, we have proposed, with Th{\'e}r{\`e}se Hardin and Claude Kirchner, a new way to prove analogs of Miller's theorem for different, but equivalent, formulations of Simple Type Theory. In this paper, which does not contain new technical results, I try to show that the history of the skolemization problem and of its various solutions is an illustration of a tension between two points of view on Simple Type Theory: the logical and the theoretical points of view.
[ { "created": "Fri, 5 May 2023 07:00:35 GMT", "version": "v1" } ]
2023-05-08
[ [ "Dowek", "Gilles", "", "DEDUCTEAM" ] ]
Peter Andrews proposed, in 1971, the problem of finding an analog of the Skolem theorem for Simple Type Theory. A first idea led to a naive rule that worked only for Simple Type Theory with the axiom of choice, and the general case was only solved, more than ten years later, by Dale Miller. More recently, we have proposed, with Th{\'e}r{\`e}se Hardin and Claude Kirchner, a new way to prove analogs of Miller's theorem for different, but equivalent, formulations of Simple Type Theory. In this paper, which does not contain new technical results, I try to show that the history of the skolemization problem and of its various solutions is an illustration of a tension between two points of view on Simple Type Theory: the logical and the theoretical points of view.
2405.13677
Dmitrii Usynin
Jack Fitzsimons, Agust\'in Freitas Pasqualini, Robert Pisarczyk, Dmitrii Usynin
Naturally Private Recommendations with Determinantal Point Processes
null
null
null
null
cs.LG cs.CR
http://creativecommons.org/licenses/by/4.0/
Often we consider machine learning models or statistical analysis methods which we endeavour to alter, by introducing a randomized mechanism, to make the model conform to a differential privacy constraint. However, certain models can often be implicitly differentially private or require significantly fewer alterations. In this work, we discuss Determinantal Point Processes (DPPs), which are dispersion models that balance recommendations based on both the popularity and the diversity of the content. We introduce DPPs, derive and discuss the alterations required for them to satisfy epsilon-Differential Privacy, and provide an analysis of their sensitivity. We conclude by proposing simple alternatives to DPPs which would make them more efficient with respect to their privacy-utility trade-off.
[ { "created": "Wed, 22 May 2024 14:20:56 GMT", "version": "v1" } ]
2024-05-24
[ [ "Fitzsimons", "Jack", "" ], [ "Pasqualini", "Agustín Freitas", "" ], [ "Pisarczyk", "Robert", "" ], [ "Usynin", "Dmitrii", "" ] ]
Often we consider machine learning models or statistical analysis methods which we endeavour to alter, by introducing a randomized mechanism, to make the model conform to a differential privacy constraint. However, certain models can often be implicitly differentially private or require significantly fewer alterations. In this work, we discuss Determinantal Point Processes (DPPs), which are dispersion models that balance recommendations based on both the popularity and the diversity of the content. We introduce DPPs, derive and discuss the alterations required for them to satisfy epsilon-Differential Privacy, and provide an analysis of their sensitivity. We conclude by proposing simple alternatives to DPPs which would make them more efficient with respect to their privacy-utility trade-off.
2107.03354
Tianbo Li
Tianbo Li, Tianze Luo, Yiping Ke, Sinno Jialin Pan
Mitigating Performance Saturation in Neural Marked Point Processes: Architectures and Loss Functions
9 pages, 4 figures, accepted by KDD-21 research track. The source code is available at https://github.com/ltz0120/Graph-Convolutional-Hawkes-Processes-GCHP
null
10.1145/3447548.3467436
null
cs.LG cs.AI stat.ML
http://creativecommons.org/licenses/by/4.0/
Attributed event sequences are commonly encountered in practice. A recent line of research focuses on combining neural networks with a statistical model -- marked point processes, the conventional tool for dealing with attributed event sequences. Neural marked point processes possess the good interpretability of probabilistic models as well as the representational power of neural networks. However, we find that the performance of neural marked point processes does not always increase as the network architecture becomes larger and more complicated, a phenomenon we call performance saturation. This is due to the fact that the generalization error of neural marked point processes is determined by both the representational ability of the network and the model specification at the same time. We can therefore draw two major conclusions: first, simple network structures can perform no worse than complicated ones in some cases; second, using a proper probabilistic assumption is as important as, if not more important than, improving the complexity of the network. Based on this observation, we propose a simple graph-based network structure called GCHP, which utilizes only graph convolutional layers and thus can be easily accelerated by parallel mechanisms. We directly consider the distribution of interarrival times instead of imposing a specific assumption on the conditional intensity function, and propose to use a likelihood ratio loss with a moment matching mechanism for optimization and model selection. Experimental results show that GCHP can significantly reduce training time and that the likelihood ratio loss with interarrival-time probability assumptions can greatly improve model performance.
[ { "created": "Wed, 7 Jul 2021 16:59:14 GMT", "version": "v1" } ]
2021-07-08
[ [ "Li", "Tianbo", "" ], [ "Luo", "Tianze", "" ], [ "Ke", "Yiping", "" ], [ "Pan", "Sinno Jialin", "" ] ]
Attributed event sequences are commonly encountered in practice. A recent line of research focuses on combining neural networks with a statistical model -- marked point processes, the conventional tool for dealing with attributed event sequences. Neural marked point processes possess the good interpretability of probabilistic models as well as the representational power of neural networks. However, we find that the performance of neural marked point processes does not always increase as the network architecture becomes larger and more complicated, a phenomenon we call performance saturation. This is due to the fact that the generalization error of neural marked point processes is determined by both the representational ability of the network and the model specification at the same time. We can therefore draw two major conclusions: first, simple network structures can perform no worse than complicated ones in some cases; second, using a proper probabilistic assumption is as important as, if not more important than, improving the complexity of the network. Based on this observation, we propose a simple graph-based network structure called GCHP, which utilizes only graph convolutional layers and thus can be easily accelerated by parallel mechanisms. We directly consider the distribution of interarrival times instead of imposing a specific assumption on the conditional intensity function, and propose to use a likelihood ratio loss with a moment matching mechanism for optimization and model selection. Experimental results show that GCHP can significantly reduce training time and that the likelihood ratio loss with interarrival-time probability assumptions can greatly improve model performance.
1801.04891
K Venkatesh Emani
K. Venkatesh Emani, S. Sudarshan
Cobra: A Framework for Cost Based Rewriting of Database Applications
null
null
null
null
cs.DB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Database applications are typically written using a mixture of imperative languages and declarative frameworks for data processing. Application logic gets distributed across the declarative and imperative parts of a program. Often, there is more than one way to implement the same program, whose efficiency may depend on a number of parameters. In this paper, we propose a framework that automatically generates all equivalent alternatives of a given program using a given set of program transformations, and chooses the least cost alternative. We use the concept of program regions as an algebraic abstraction of a program and extend the Volcano/Cascades framework for optimization of algebraic expressions, to optimize programs. We illustrate the use of our framework for optimizing database applications. We show through experimental results, that our framework has wide applicability in real world applications and provides significant performance benefits.
[ { "created": "Mon, 15 Jan 2018 17:58:18 GMT", "version": "v1" }, { "created": "Tue, 16 Jan 2018 07:49:58 GMT", "version": "v2" }, { "created": "Mon, 26 Feb 2018 11:26:39 GMT", "version": "v3" } ]
2018-02-27
[ [ "Emani", "K. Venkatesh", "" ], [ "Sudarshan", "S.", "" ] ]
Database applications are typically written using a mixture of imperative languages and declarative frameworks for data processing. Application logic gets distributed across the declarative and imperative parts of a program. Often, there is more than one way to implement the same program, whose efficiency may depend on a number of parameters. In this paper, we propose a framework that automatically generates all equivalent alternatives of a given program using a given set of program transformations, and chooses the least cost alternative. We use the concept of program regions as an algebraic abstraction of a program and extend the Volcano/Cascades framework for optimization of algebraic expressions, to optimize programs. We illustrate the use of our framework for optimizing database applications. We show through experimental results, that our framework has wide applicability in real world applications and provides significant performance benefits.
2405.09819
Penghao Liang
Penghao Liang, Bo Song, Xiaoan Zhan, Zhou Chen, Jiaqiang Yuan
Automating the Training and Deployment of Models in MLOps by Integrating Systems with Machine Learning
null
null
null
null
cs.SE cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This article introduces the importance of machine learning in real-world applications and explores the rise of MLOps (Machine Learning Operations) and its importance for solving challenges such as model deployment and performance monitoring. By reviewing the evolution of MLOps and its relationship to traditional software development methods, the paper proposes ways to integrate systems with machine learning to solve the problems faced by existing MLOps practices and improve productivity. The paper focuses on the importance of automated model training, and on methods to ensure the transparency and repeatability of the training process through a version control system. In addition, the challenges of integrating machine learning components into traditional CI/CD pipelines are discussed, and solutions such as versioned environments and containerization are proposed. Finally, the paper emphasizes the importance of continuous monitoring and feedback loops after model deployment to maintain model performance and reliability. Using case studies and best practices from Netflix, the article presents key strategies and lessons learned for the successful implementation of MLOps practices, providing valuable references for other organizations building and optimizing their own MLOps practices.
[ { "created": "Thu, 16 May 2024 05:36:28 GMT", "version": "v1" } ]
2024-05-17
[ [ "Liang", "Penghao", "" ], [ "Song", "Bo", "" ], [ "Zhan", "Xiaoan", "" ], [ "Chen", "Zhou", "" ], [ "Yuan", "Jiaqiang", "" ] ]
This article introduces the importance of machine learning in real-world applications and explores the rise of MLOps (Machine Learning Operations) and its importance for solving challenges such as model deployment and performance monitoring. By reviewing the evolution of MLOps and its relationship to traditional software development methods, the paper proposes ways to integrate the system into machine learning to solve the problems faced by existing MLOps and improve productivity. This paper focuses on the importance of automated model training, and on methods to ensure the transparency and repeatability of the training process through a version control system. In addition, the challenges of integrating machine learning components into traditional CI/CD pipelines are discussed, and solutions such as versioning environments and containerization are proposed. Finally, the paper emphasizes the importance of continuous monitoring and feedback loops after model deployment to maintain model performance and reliability. Using case studies and best practices from Netflix, the article presents key strategies and lessons learned for successful implementation of MLOps practices, providing valuable references for other organizations to build and optimize their own MLOps practices.
2205.15146
Quanshi Zhang
Zhanpeng Zhou, Wen Shen, Huixin Chen, Ling Tang, Quanshi Zhang
Batch Normalization Is Blind to the First and Second Derivatives of the Loss
null
null
null
null
cs.LG cs.AI cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we prove the effects of the BN operation on the back-propagation of the first and second derivatives of the loss. When we do the Taylor series expansion of the loss function, we prove that the BN operation will block the influence of the first-order term and most of the influence of the second-order term of the loss. We also find that this problem is caused by the standardization phase of the BN operation. Experimental results have verified our theoretical conclusions, and we have found that the BN operation significantly affects feature representations in specific tasks, where losses of different samples share similar analytic formulas.
[ { "created": "Mon, 30 May 2022 14:43:51 GMT", "version": "v1" }, { "created": "Thu, 2 Jun 2022 09:29:20 GMT", "version": "v2" } ]
2022-06-03
[ [ "Zhou", "Zhanpeng", "" ], [ "Shen", "Wen", "" ], [ "Chen", "Huixin", "" ], [ "Tang", "Ling", "" ], [ "Zhang", "Quanshi", "" ] ]
In this paper, we prove the effects of the BN operation on the back-propagation of the first and second derivatives of the loss. When we do the Taylor series expansion of the loss function, we prove that the BN operation will block the influence of the first-order term and most of the influence of the second-order term of the loss. We also find that this problem is caused by the standardization phase of the BN operation. Experimental results have verified our theoretical conclusions, and we have found that the BN operation significantly affects feature representations in specific tasks, where losses of different samples share similar analytic formulas.
1901.04008
Ami Paz
Keren Censor-Hillel, Neta Dafni, Victor I. Kolobov, Ami Paz, Gregory Schwartzman
Fast Deterministic Algorithms for Highly-Dynamic Networks
null
null
null
null
cs.DC cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper provides an algorithmic framework for obtaining fast distributed algorithms for a highly-dynamic setting, in which *arbitrarily many* edge changes may occur in each round. Our algorithm significantly improves upon prior work in its combination of (1) having an $O(1)$ amortized time complexity, (2) using only $O(\log{n})$-bit messages, (3) not posing any restrictions on the dynamic behavior of the environment, (4) being deterministic, (5) having strong guarantees for intermediate solutions, and (6) being applicable for a wide family of tasks. The tasks for which we deduce such an algorithm are maximal matching, $(degree+1)$-coloring, 2-approximation for minimum weight vertex cover, and maximal independent set (which is the most subtle case). For some of these tasks, node insertions can also be among the allowed topology changes, and for some of them also abrupt node deletions.
[ { "created": "Sun, 13 Jan 2019 16:11:22 GMT", "version": "v1" }, { "created": "Mon, 15 Jul 2019 20:09:12 GMT", "version": "v2" }, { "created": "Sat, 2 Nov 2019 16:16:11 GMT", "version": "v3" }, { "created": "Sun, 23 Feb 2020 21:46:40 GMT", "version": "v4" }, { "created": "Sun, 11 Oct 2020 14:00:04 GMT", "version": "v5" } ]
2020-10-13
[ [ "Censor-Hillel", "Keren", "" ], [ "Dafni", "Neta", "" ], [ "Kolobov", "Victor I.", "" ], [ "Paz", "Ami", "" ], [ "Schwartzman", "Gregory", "" ] ]
This paper provides an algorithmic framework for obtaining fast distributed algorithms for a highly-dynamic setting, in which *arbitrarily many* edge changes may occur in each round. Our algorithm significantly improves upon prior work in its combination of (1) having an $O(1)$ amortized time complexity, (2) using only $O(\log{n})$-bit messages, (3) not posing any restrictions on the dynamic behavior of the environment, (4) being deterministic, (5) having strong guarantees for intermediate solutions, and (6) being applicable for a wide family of tasks. The tasks for which we deduce such an algorithm are maximal matching, $(degree+1)$-coloring, 2-approximation for minimum weight vertex cover, and maximal independent set (which is the most subtle case). For some of these tasks, node insertions can also be among the allowed topology changes, and for some of them also abrupt node deletions.
2407.14047
Zekun Qian
Zekun Qian, Ruize Han, Wei Feng, Junhui Hou, Linqi Song, Song Wang
OCTrack: Benchmarking the Open-Corpus Multi-Object Tracking
null
null
null
null
cs.CV cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study a novel yet practical problem of open-corpus multi-object tracking (OCMOT), which extends MOT to localizing, associating, and recognizing generic-category objects of both seen (base) and unseen (novel) classes, but without the category text list as prompt. To study this problem, the top priority is to build a benchmark. In this work, we build OCTrackB, a large-scale and comprehensive benchmark, to provide a standard evaluation platform for the OCMOT problem. Compared to previous datasets, OCTrackB has more abundant and balanced base/novel classes and the corresponding samples for evaluation with less bias. We also propose a new multi-granularity recognition metric to better evaluate the generative object recognition in OCMOT. By conducting extensive benchmark evaluations, we report and analyze the results of various state-of-the-art methods, which demonstrate the rationale of OCMOT, as well as the usefulness and advantages of OCTrackB.
[ { "created": "Fri, 19 Jul 2024 05:58:01 GMT", "version": "v1" } ]
2024-07-22
[ [ "Qian", "Zekun", "" ], [ "Han", "Ruize", "" ], [ "Feng", "Wei", "" ], [ "Hou", "Junhui", "" ], [ "Song", "Linqi", "" ], [ "Wang", "Song", "" ] ]
We study a novel yet practical problem of open-corpus multi-object tracking (OCMOT), which extends MOT to localizing, associating, and recognizing generic-category objects of both seen (base) and unseen (novel) classes, but without the category text list as prompt. To study this problem, the top priority is to build a benchmark. In this work, we build OCTrackB, a large-scale and comprehensive benchmark, to provide a standard evaluation platform for the OCMOT problem. Compared to previous datasets, OCTrackB has more abundant and balanced base/novel classes and the corresponding samples for evaluation with less bias. We also propose a new multi-granularity recognition metric to better evaluate the generative object recognition in OCMOT. By conducting extensive benchmark evaluations, we report and analyze the results of various state-of-the-art methods, which demonstrate the rationale of OCMOT, as well as the usefulness and advantages of OCTrackB.
1112.2755
Kristina Lerman
Kristina Lerman, Suradej Intagorn, Jeon-Hyung Kang, Rumi Ghosh
Using Proximity to Predict Activity in Social Networks
submitted to WWW conference
null
null
null
cs.SI physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The structure of a social network contains information useful for predicting its evolution. Nodes that are "close" in some sense are more likely to become linked in the future than more distant nodes. We show that structural information can also help predict node activity. We use proximity to capture the degree to which two nodes are "close" to each other in the network. In addition to standard proximity metrics used in the link prediction task, such as neighborhood overlap, we introduce new metrics that model different types of interactions that can occur between network nodes. We argue that the "closer" nodes are in a social network, the more similar will be their activity. We study this claim using data about URL recommendation on social media sites Digg and Twitter. We show that structural proximity of two users in the follower graph is related to similarity of their activity, i.e., how many URLs they both recommend. We also show that given friends' activity, knowing their proximity to the user can help better predict which URLs the user will recommend. We compare the performance of different proximity metrics on the activity prediction task and find that some metrics lead to substantial performance improvements.
[ { "created": "Tue, 13 Dec 2011 00:19:17 GMT", "version": "v1" } ]
2011-12-14
[ [ "Lerman", "Kristina", "" ], [ "Intagorn", "Suradej", "" ], [ "Kang", "Jeon-Hyung", "" ], [ "Ghosh", "Rumi", "" ] ]
The structure of a social network contains information useful for predicting its evolution. Nodes that are "close" in some sense are more likely to become linked in the future than more distant nodes. We show that structural information can also help predict node activity. We use proximity to capture the degree to which two nodes are "close" to each other in the network. In addition to standard proximity metrics used in the link prediction task, such as neighborhood overlap, we introduce new metrics that model different types of interactions that can occur between network nodes. We argue that the "closer" nodes are in a social network, the more similar will be their activity. We study this claim using data about URL recommendation on social media sites Digg and Twitter. We show that structural proximity of two users in the follower graph is related to similarity of their activity, i.e., how many URLs they both recommend. We also show that given friends' activity, knowing their proximity to the user can help better predict which URLs the user will recommend. We compare the performance of different proximity metrics on the activity prediction task and find that some metrics lead to substantial performance improvements.
1610.07424
Xianwen Wang
Xianwen Wang, Zhichao Fang, Qingchun Li and Xinhui Guo
The poor altmetric performance of publications authored by researchers in mainland China
15 pages, 2 figures
Frontiers in Research Metrics and Analytics, 2016, 1:8
10.3389/frma.2016.00008
null
cs.DL cs.CY cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
China's scientific output has risen precipitously over the past decade; it is now the world's second-largest producer of scientific papers, behind only the United States. The quality of China's research is also on the rise (Van Noorden, 2016). The online visibility and impact of China's research are also important issues worth exploring. In this study, we investigate the altmetric performance of publications in the field of Biotechnology and Applied Microbiology and published by authors from Chinese affiliations. We find that papers published by those authors from Chinese affiliations have much lower visibility on the social web than articles from other countries, when there is no significant difference for the citations. Fewer of China's publications get tweeted, and those tweeted publications attract less social attention. A geographical analysis of tweeters shows that scholarly articles get most of their social attention from the authors' home countries, a finding that is also confirmed by correlation and regression analysis. This situation, which is unfavorable for researchers from Chinese affiliations, is caused, in part, by the inaccessibility of mainstream social networking platforms in mainland China.
[ { "created": "Mon, 24 Oct 2016 14:18:00 GMT", "version": "v1" } ]
2016-10-25
[ [ "Wang", "Xianwen", "" ], [ "Fang", "Zhichao", "" ], [ "Li", "Qingchun", "" ], [ "Guo", "Xinhui", "" ] ]
China's scientific output has risen precipitously over the past decade; it is now the world's second-largest producer of scientific papers, behind only the United States. The quality of China's research is also on the rise (Van Noorden, 2016). The online visibility and impact of China's research are also important issues worth exploring. In this study, we investigate the altmetric performance of publications in the field of Biotechnology and Applied Microbiology and published by authors from Chinese affiliations. We find that papers published by those authors from Chinese affiliations have much lower visibility on the social web than articles from other countries, when there is no significant difference for the citations. Fewer of China's publications get tweeted, and those tweeted publications attract less social attention. A geographical analysis of tweeters shows that scholarly articles get most of their social attention from the authors' home countries, a finding that is also confirmed by correlation and regression analysis. This situation, which is unfavorable for researchers from Chinese affiliations, is caused, in part, by the inaccessibility of mainstream social networking platforms in mainland China.
2205.14753
Nguyen Dang
Nguyen Dang, \"Ozg\"ur Akg\"un, Joan Espasa, Ian Miguel, Peter Nightingale
A Framework for Generating Informative Benchmark Instances
15 pages
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
Benchmarking is an important tool for assessing the relative performance of alternative solving approaches. However, the utility of benchmarking is limited by the quantity and quality of the available problem instances. Modern constraint programming languages typically allow the specification of a class-level model that is parameterised over instance data. This separation presents an opportunity for automated approaches to generate instance data that define instances that are graded (solvable at a certain difficulty level for a solver) or can discriminate between two solving approaches. In this paper, we introduce a framework that combines these two properties to generate a large number of benchmark instances, purposely generated for effective and informative benchmarking. We use five problems that were used in the MiniZinc competition to demonstrate the usage of our framework. In addition to producing a ranking among solvers, our framework gives a broader understanding of the behaviour of each solver for the whole instance space; for example by finding subsets of instances where the solver performance significantly varies from its average performance.
[ { "created": "Sun, 29 May 2022 19:56:08 GMT", "version": "v1" } ]
2022-05-31
[ [ "Dang", "Nguyen", "" ], [ "Akgün", "Özgür", "" ], [ "Espasa", "Joan", "" ], [ "Miguel", "Ian", "" ], [ "Nightingale", "Peter", "" ] ]
Benchmarking is an important tool for assessing the relative performance of alternative solving approaches. However, the utility of benchmarking is limited by the quantity and quality of the available problem instances. Modern constraint programming languages typically allow the specification of a class-level model that is parameterised over instance data. This separation presents an opportunity for automated approaches to generate instance data that define instances that are graded (solvable at a certain difficulty level for a solver) or can discriminate between two solving approaches. In this paper, we introduce a framework that combines these two properties to generate a large number of benchmark instances, purposely generated for effective and informative benchmarking. We use five problems that were used in the MiniZinc competition to demonstrate the usage of our framework. In addition to producing a ranking among solvers, our framework gives a broader understanding of the behaviour of each solver for the whole instance space; for example by finding subsets of instances where the solver performance significantly varies from its average performance.
2101.04645
Jan-Philipp Schulze
J.-P. Schulze, P. Sperl, K. B\"ottinger
Double-Adversarial Activation Anomaly Detection: Adversarial Autoencoders are Anomaly Generators
Accepted at IJCNN 2022
null
10.1109/IJCNN55064.2022.9892896
null
cs.LG cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Anomaly detection is a challenging task for machine learning algorithms due to the inherent class imbalance. It is costly and time-demanding to manually analyse the observed data, thus usually only few known anomalies if any are available. Inspired by generative models and the analysis of the hidden activations of neural networks, we introduce a novel unsupervised anomaly detection method called DA3D. Here, we use adversarial autoencoders to generate anomalous counterexamples based on the normal data only. These artificial anomalies used during training allow the detection of real, yet unseen anomalies. With our novel generative approach, we transform the unsupervised task of anomaly detection to a supervised one, which is more tractable by machine learning and especially deep learning methods. DA3D surpasses the performance of state-of-the-art anomaly detection methods in a purely data-driven way, where no domain knowledge is required.
[ { "created": "Tue, 12 Jan 2021 18:07:34 GMT", "version": "v1" }, { "created": "Sun, 4 Apr 2021 17:05:36 GMT", "version": "v2" }, { "created": "Sun, 6 Mar 2022 13:14:07 GMT", "version": "v3" }, { "created": "Mon, 23 May 2022 16:59:49 GMT", "version": "v4" }, { "created": "Sun, 14 Jan 2024 17:28:57 GMT", "version": "v5" } ]
2024-01-17
[ [ "Schulze", "J. -P.", "" ], [ "Sperl", "P.", "" ], [ "Böttinger", "K.", "" ] ]
Anomaly detection is a challenging task for machine learning algorithms due to the inherent class imbalance. It is costly and time-demanding to manually analyse the observed data, thus usually only few known anomalies if any are available. Inspired by generative models and the analysis of the hidden activations of neural networks, we introduce a novel unsupervised anomaly detection method called DA3D. Here, we use adversarial autoencoders to generate anomalous counterexamples based on the normal data only. These artificial anomalies used during training allow the detection of real, yet unseen anomalies. With our novel generative approach, we transform the unsupervised task of anomaly detection to a supervised one, which is more tractable by machine learning and especially deep learning methods. DA3D surpasses the performance of state-of-the-art anomaly detection methods in a purely data-driven way, where no domain knowledge is required.
2308.01684
Bolei Ma
Zheyu Zhang, Han Yang, Bolei Ma, David R\"ugamer, Ercong Nie
Baby's CoThought: Leveraging Large Language Models for Enhanced Reasoning in Compact Models
CoNLL 2023 BabyLM Challenge
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Large Language Models (LLMs) demonstrate remarkable performance on a variety of natural language understanding (NLU) tasks, primarily due to their in-context learning ability. This ability could be applied to building babylike models, i.e. models at small scales, improving training efficiency. In this paper, we propose a "CoThought" pipeline, which efficiently trains smaller "baby" language models (BabyLMs) by leveraging the Chain of Thought prompting of LLMs. Our pipeline restructures a dataset of less than 100M in size using GPT-3.5-turbo, transforming it into task-oriented, human-readable texts that are comparable to the school texts for language learners. The BabyLM is then pretrained on this restructured dataset in a RoBERTa fashion. In evaluations across 4 benchmarks, our BabyLM outperforms the vanilla RoBERTa in 10 linguistic, NLU, and question-answering tasks by more than 3 points, showing a superior ability to extract contextual information. These results suggest that compact LMs pretrained on small, LLM-restructured data can better understand tasks and achieve improved performance.
[ { "created": "Thu, 3 Aug 2023 10:52:52 GMT", "version": "v1" }, { "created": "Mon, 23 Oct 2023 12:05:52 GMT", "version": "v2" } ]
2023-10-24
[ [ "Zhang", "Zheyu", "" ], [ "Yang", "Han", "" ], [ "Ma", "Bolei", "" ], [ "Rügamer", "David", "" ], [ "Nie", "Ercong", "" ] ]
Large Language Models (LLMs) demonstrate remarkable performance on a variety of natural language understanding (NLU) tasks, primarily due to their in-context learning ability. This ability could be applied to building babylike models, i.e. models at small scales, improving training efficiency. In this paper, we propose a "CoThought" pipeline, which efficiently trains smaller "baby" language models (BabyLMs) by leveraging the Chain of Thought prompting of LLMs. Our pipeline restructures a dataset of less than 100M in size using GPT-3.5-turbo, transforming it into task-oriented, human-readable texts that are comparable to the school texts for language learners. The BabyLM is then pretrained on this restructured dataset in a RoBERTa fashion. In evaluations across 4 benchmarks, our BabyLM outperforms the vanilla RoBERTa in 10 linguistic, NLU, and question-answering tasks by more than 3 points, showing a superior ability to extract contextual information. These results suggest that compact LMs pretrained on small, LLM-restructured data can better understand tasks and achieve improved performance.
1911.06910
Bo Peng
Bo Peng, Renqiang Min, Xia Ning
CNN-based Dual-Chain Models for Knowledge Graph Learning
null
null
null
null
cs.CL cs.IR cs.LG
http://creativecommons.org/licenses/by/4.0/
Knowledge graph learning plays a critical role in integrating domain specific knowledge bases when deploying machine learning and data mining models in practice. Existing methods on knowledge graph learning primarily focus on modeling the relations among entities as translations among the relations and entities, and many of these methods are not able to handle zero-shot problems, when new entities emerge. In this paper, we present a new convolutional neural network (CNN)-based dual-chain model. Different from translation based methods, in our model, interactions among relations and entities are directly captured via CNN over their embeddings. Moreover, a secondary chain of learning is conducted simultaneously to incorporate additional information and to enable better performance. We also present an extension of this model, which incorporates descriptions of entities and learns a second set of entity embeddings from the descriptions. As a result, the extended model is able to effectively handle zero-shot problems. We conducted comprehensive experiments, comparing our methods with 15 methods on 8 benchmark datasets. Extensive experimental results demonstrate that our proposed methods achieve or outperform the state-of-the-art results on knowledge graph learning, and outperform other methods on zero-shot problems. In addition, our methods applied to real-world biomedical data are able to produce results that conform to expert domain knowledge.
[ { "created": "Fri, 15 Nov 2019 23:24:17 GMT", "version": "v1" }, { "created": "Tue, 26 Nov 2019 13:40:35 GMT", "version": "v2" } ]
2019-11-27
[ [ "Peng", "Bo", "" ], [ "Min", "Renqiang", "" ], [ "Ning", "Xia", "" ] ]
Knowledge graph learning plays a critical role in integrating domain specific knowledge bases when deploying machine learning and data mining models in practice. Existing methods on knowledge graph learning primarily focus on modeling the relations among entities as translations among the relations and entities, and many of these methods are not able to handle zero-shot problems, when new entities emerge. In this paper, we present a new convolutional neural network (CNN)-based dual-chain model. Different from translation based methods, in our model, interactions among relations and entities are directly captured via CNN over their embeddings. Moreover, a secondary chain of learning is conducted simultaneously to incorporate additional information and to enable better performance. We also present an extension of this model, which incorporates descriptions of entities and learns a second set of entity embeddings from the descriptions. As a result, the extended model is able to effectively handle zero-shot problems. We conducted comprehensive experiments, comparing our methods with 15 methods on 8 benchmark datasets. Extensive experimental results demonstrate that our proposed methods achieve or outperform the state-of-the-art results on knowledge graph learning, and outperform other methods on zero-shot problems. In addition, our methods applied to real-world biomedical data are able to produce results that conform to expert domain knowledge.
2005.11780
Chen Lv
Zhongxu Hu, Yang Xing, Chen Lv, Peng Hang, Jie Liu
Deep Convolutional Neural Network-based Bernoulli Heatmap for Head Pose Estimation
null
null
null
null
cs.CV cs.MM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Head pose estimation is a crucial problem for many tasks, such as driver attention, fatigue detection, and human behaviour analysis. It is well known that neural networks are better at handling classification problems than regression problems. It is an extremely nonlinear process to let the network output the angle value directly for optimization learning, and the weight constraint of the loss function will be relatively weak. This paper proposes a novel Bernoulli heatmap for head pose estimation from a single RGB image. Our method can achieve the positioning of the head area while estimating the angles of the head. The Bernoulli heatmap makes it possible to construct fully convolutional neural networks without fully connected layers and provides a new idea for the output form of head pose estimation. A deep convolutional neural network (CNN) structure with multiscale representations is adopted to maintain high-resolution information and low-resolution information in parallel. This kind of structure can maintain rich, high-resolution representations. In addition, channelwise fusion is adopted to make the fusion weights learnable instead of simple addition with equal weights. As a result, the estimation is spatially more precise and potentially more accurate. The effectiveness of the proposed method is empirically demonstrated by comparing it with other state-of-the-art methods on public datasets.
[ { "created": "Sun, 24 May 2020 15:36:29 GMT", "version": "v1" } ]
2020-05-26
[ [ "Hu", "Zhongxu", "" ], [ "Xing", "Yang", "" ], [ "Lv", "Chen", "" ], [ "Hang", "Peng", "" ], [ "Liu", "Jie", "" ] ]
Head pose estimation is a crucial problem for many tasks, such as driver attention, fatigue detection, and human behaviour analysis. It is well known that neural networks are better at handling classification problems than regression problems. It is an extremely nonlinear process to let the network output the angle value directly for optimization learning, and the weight constraint of the loss function will be relatively weak. This paper proposes a novel Bernoulli heatmap for head pose estimation from a single RGB image. Our method can achieve the positioning of the head area while estimating the angles of the head. The Bernoulli heatmap makes it possible to construct fully convolutional neural networks without fully connected layers and provides a new idea for the output form of head pose estimation. A deep convolutional neural network (CNN) structure with multiscale representations is adopted to maintain high-resolution information and low-resolution information in parallel. This kind of structure can maintain rich, high-resolution representations. In addition, channelwise fusion is adopted to make the fusion weights learnable instead of simple addition with equal weights. As a result, the estimation is spatially more precise and potentially more accurate. The effectiveness of the proposed method is empirically demonstrated by comparing it with other state-of-the-art methods on public datasets.
2402.00912
Jack Furby
Jack Furby, Daniel Cunnington, Dave Braines, Alun Preece
Can we Constrain Concept Bottleneck Models to Learn Semantically Meaningful Input Features?
Main paper: 8 pages, 9 figures, Appendix: 14 pages, 21 figures. This paper is a preprint
null
null
null
cs.LG cs.AI cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Concept Bottleneck Models (CBMs) are regarded as inherently interpretable because they first predict a set of human-defined concepts which are used to predict a task label. For inherent interpretability to be fully realised, and ensure trust in a model's output, it's desirable for concept predictions to use semantically meaningful input features. For instance, in an image, pixels representing a broken bone should contribute to predicting a fracture. However, current literature suggests that concept predictions often rely on irrelevant input features. We hypothesise that this occurs when dataset labels include inaccurate concept annotations, or the relationship between input features and concepts is unclear. In general, the effect of dataset labelling on concept representations remains an understudied area. In this paper, we demonstrate that CBMs can learn to map concepts to semantically meaningful input features, by utilising datasets with a clear link between the input features and the desired concept predictions. This is achieved, for instance, by ensuring multiple concepts do not always co-occur and, therefore provide a clear training signal for the CBM to distinguish the relevant input features for each concept. We validate our hypothesis on both synthetic and real-world image datasets, and demonstrate under the correct conditions, CBMs can learn to attribute semantically meaningful input features to the correct concept predictions.
[ { "created": "Thu, 1 Feb 2024 10:18:43 GMT", "version": "v1" }, { "created": "Tue, 30 Jul 2024 09:49:51 GMT", "version": "v2" } ]
2024-07-31
[ [ "Furby", "Jack", "" ], [ "Cunnington", "Daniel", "" ], [ "Braines", "Dave", "" ], [ "Preece", "Alun", "" ] ]
Concept Bottleneck Models (CBMs) are regarded as inherently interpretable because they first predict a set of human-defined concepts which are used to predict a task label. For inherent interpretability to be fully realised, and ensure trust in a model's output, it's desirable for concept predictions to use semantically meaningful input features. For instance, in an image, pixels representing a broken bone should contribute to predicting a fracture. However, current literature suggests that concept predictions often rely on irrelevant input features. We hypothesise that this occurs when dataset labels include inaccurate concept annotations, or the relationship between input features and concepts is unclear. In general, the effect of dataset labelling on concept representations remains an understudied area. In this paper, we demonstrate that CBMs can learn to map concepts to semantically meaningful input features, by utilising datasets with a clear link between the input features and the desired concept predictions. This is achieved, for instance, by ensuring multiple concepts do not always co-occur and, therefore provide a clear training signal for the CBM to distinguish the relevant input features for each concept. We validate our hypothesis on both synthetic and real-world image datasets, and demonstrate under the correct conditions, CBMs can learn to attribute semantically meaningful input features to the correct concept predictions.
1304.5197
Daniele Cono D'Elia
Daniele Cono D'Elia, Camil Demetrescu, Irene Finocchi
Ball-Larus Path Profiling Across Multiple Loop Iterations
13 pages, 14 figures
null
null
null
cs.PL cs.PF
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Identifying the hottest paths in the control flow graph of a routine can direct optimizations to portions of the code where most resources are consumed. This powerful methodology, called path profiling, was introduced by Ball and Larus in the mid-90s and has received considerable attention in the last 15 years for its practical relevance. A shortcoming of Ball-Larus path profiling was the inability to profile cyclic paths, making it difficult to mine interesting execution patterns that span multiple loop iterations. Previous results, based on rather complex algorithms, have attempted to circumvent this limitation at the price of significant performance losses even for a small number of iterations. In this paper, we present a new approach to multiple-iterations path profiling, based on data structures built on top of the original Ball-Larus numbering technique. Our approach makes it possible to profile all executed paths obtained as a concatenation of up to k Ball-Larus acyclic paths, where k is a user-defined parameter. An extensive experimental investigation on a large variety of Java benchmarks on the Jikes RVM shows that, surprisingly, our approach can be even faster than Ball-Larus due to fewer operations on smaller hash tables, producing compact representations of cyclic paths even for large values of k.
[ { "created": "Thu, 18 Apr 2013 17:34:38 GMT", "version": "v1" } ]
2013-04-30
[ [ "D'Elia", "Daniele Cono", "" ], [ "Demetrescu", "Camil", "" ], [ "Finocchi", "Irene", "" ] ]
Identifying the hottest paths in the control flow graph of a routine can direct optimizations to portions of the code where most resources are consumed. This powerful methodology, called path profiling, was introduced by Ball and Larus in the mid-90s and has received considerable attention in the last 15 years for its practical relevance. A shortcoming of Ball-Larus path profiling was the inability to profile cyclic paths, making it difficult to mine interesting execution patterns that span multiple loop iterations. Previous results, based on rather complex algorithms, have attempted to circumvent this limitation at the price of significant performance losses even for a small number of iterations. In this paper, we present a new approach to multiple-iterations path profiling, based on data structures built on top of the original Ball-Larus numbering technique. Our approach makes it possible to profile all executed paths obtained as a concatenation of up to k Ball-Larus acyclic paths, where k is a user-defined parameter. An extensive experimental investigation on a large variety of Java benchmarks on the Jikes RVM shows that, surprisingly, our approach can be even faster than Ball-Larus due to fewer operations on smaller hash tables, producing compact representations of cyclic paths even for large values of k.
2312.13729
Przemys{\l}aw Spurek
Dawid Malarz, Weronika Smolak, Jacek Tabor, S{\l}awomir Tadeja, Przemys{\l}aw Spurek
Gaussian Splatting with NeRF-based Color and Opacity
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Neural Radiance Fields (NeRFs) have demonstrated the remarkable potential of neural networks to capture the intricacies of 3D objects. By encoding the shape and color information within neural network weights, NeRFs excel at producing strikingly sharp novel views of 3D objects. Recently, numerous generalizations of NeRFs utilizing generative models have emerged, expanding their versatility. In contrast, Gaussian Splatting (GS) offers a similar render quality with faster training and inference, as it does not need neural networks to work. It encodes information about the 3D objects in a set of Gaussian distributions that can be rendered in 3D similarly to classical meshes. Unfortunately, GS models are difficult to condition since they usually require around a hundred thousand Gaussian components. To mitigate the caveats of both models, we propose a hybrid model, Viewing Direction Gaussian Splatting (VDGS), that uses a GS representation of the 3D object's shape and a NeRF-based encoding of color and opacity. Our model uses Gaussian distributions with trainable positions (i.e., means of the Gaussians), shape (i.e., covariances of the Gaussians), color, and opacity, together with a neural network that takes the Gaussian parameters and the viewing direction to produce changes in the said color and opacity. As a result, our model better describes shadows, light reflections, and the transparency of 3D objects without adding additional texture and light components.
[ { "created": "Thu, 21 Dec 2023 10:52:59 GMT", "version": "v1" }, { "created": "Fri, 22 Dec 2023 09:19:03 GMT", "version": "v2" }, { "created": "Sun, 18 Feb 2024 07:46:42 GMT", "version": "v3" }, { "created": "Tue, 11 Jun 2024 13:09:36 GMT", "version": "v4" }, { "created": "Wed, 12 Jun 2024 09:06:02 GMT", "version": "v5" } ]
2024-06-13
[ [ "Malarz", "Dawid", "" ], [ "Smolak", "Weronika", "" ], [ "Tabor", "Jacek", "" ], [ "Tadeja", "Sławomir", "" ], [ "Spurek", "Przemysław", "" ] ]
Neural Radiance Fields (NeRFs) have demonstrated the remarkable potential of neural networks to capture the intricacies of 3D objects. By encoding the shape and color information within neural network weights, NeRFs excel at producing strikingly sharp novel views of 3D objects. Recently, numerous generalizations of NeRFs utilizing generative models have emerged, expanding their versatility. In contrast, Gaussian Splatting (GS) offers a similar render quality with faster training and inference, as it does not need neural networks to work. It encodes information about the 3D objects in a set of Gaussian distributions that can be rendered in 3D similarly to classical meshes. Unfortunately, GS models are difficult to condition since they usually require around a hundred thousand Gaussian components. To mitigate the caveats of both models, we propose a hybrid model, Viewing Direction Gaussian Splatting (VDGS), that uses a GS representation of the 3D object's shape and a NeRF-based encoding of color and opacity. Our model uses Gaussian distributions with trainable positions (i.e., means of the Gaussians), shape (i.e., covariances of the Gaussians), color, and opacity, together with a neural network that takes the Gaussian parameters and the viewing direction to produce changes in the said color and opacity. As a result, our model better describes shadows, light reflections, and the transparency of 3D objects without adding additional texture and light components.
2205.06444
Daixuan Li
Daixuan Li, Benjamin Reidys, Jinghan Sun, Thomas Shull, Josep Torrellas, Jian Huang
UniHeap: Managing Persistent Objects Across Managed Runtimes for Non-Volatile Memory
A 2-page extended abstract for NVMW 2022
null
null
null
cs.PL
http://creativecommons.org/licenses/by-nc-sa/4.0/
Byte-addressable, non-volatile memory (NVM) is emerging as a promising technology. To facilitate its wide adoption, employing NVM in managed runtimes like the JVM has proven to be an effective approach (i.e., managed NVM). However, such an approach is runtime-specific and lacks a generic abstraction across different managed languages. Similar to the well-known filesystem primitives that allow diverse programs to access the same files via the block I/O interface, managed NVM deserves the same system-wide property for persistent objects across managed runtimes with low overhead. In this paper, we present UniHeap, a new NVM framework for managing persistent objects. It proposes a unified persistent object model that supports various managed languages, and manages NVM within a shared heap that enables cross-language persistent object sharing. UniHeap reduces the object persistence overhead by managing the shared heap in a log-structured manner and coalescing object updates during garbage collection. We implement UniHeap as a generic framework and extend it to different managed runtimes, including the HotSpot JVM, CPython, and the JavaScript engine SpiderMonkey. We evaluate UniHeap with a variety of applications, such as a key-value store and a transactional database. Our evaluation shows that UniHeap significantly outperforms state-of-the-art object sharing approaches, while introducing negligible overhead to the managed runtimes.
[ { "created": "Fri, 13 May 2022 04:23:08 GMT", "version": "v1" } ]
2022-05-16
[ [ "Li", "Daixuan", "" ], [ "Reidys", "Benjamin", "" ], [ "Sun", "Jinghan", "" ], [ "Shull", "Thomas", "" ], [ "Torrellas", "Josep", "" ], [ "Huang", "Jian", "" ] ]
Byte-addressable, non-volatile memory (NVM) is emerging as a promising technology. To facilitate its wide adoption, employing NVM in managed runtimes like the JVM has proven to be an effective approach (i.e., managed NVM). However, such an approach is runtime-specific and lacks a generic abstraction across different managed languages. Similar to the well-known filesystem primitives that allow diverse programs to access the same files via the block I/O interface, managed NVM deserves the same system-wide property for persistent objects across managed runtimes with low overhead. In this paper, we present UniHeap, a new NVM framework for managing persistent objects. It proposes a unified persistent object model that supports various managed languages, and manages NVM within a shared heap that enables cross-language persistent object sharing. UniHeap reduces the object persistence overhead by managing the shared heap in a log-structured manner and coalescing object updates during garbage collection. We implement UniHeap as a generic framework and extend it to different managed runtimes, including the HotSpot JVM, CPython, and the JavaScript engine SpiderMonkey. We evaluate UniHeap with a variety of applications, such as a key-value store and a transactional database. Our evaluation shows that UniHeap significantly outperforms state-of-the-art object sharing approaches, while introducing negligible overhead to the managed runtimes.
1509.03740
Mohsen Ghasempour
Mohsen Ghasempour, Aamer Jaleel, Jim Garside, and Mikel Luj\'an
HAPPY: Hybrid Address-based Page Policy in DRAMs
null
null
null
null
cs.AR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Memory controllers have used static page closure policies to decide whether a row should be left open (open-page policy) or closed immediately (close-page policy) after the row has been accessed. The appropriate choice for a particular access can reduce the average memory latency. However, since application access patterns change at run time, static page policies cannot guarantee to deliver optimum execution time. Hybrid page policies have been investigated as a means of covering these dynamic scenarios and are now implemented in state-of-the-art processors. Hybrid page policies switch between open-page and close-page policies while the application is running, by monitoring the access pattern of row hits/conflicts and predicting future behavior. Unfortunately, as the size of DRAM memory increases, fine-grained tracking and analysis of memory access patterns does not remain practical. We propose a compact memory address-based encoding technique which can improve or maintain the performance of DRAM page-closure predictors while reducing the hardware overhead in comparison with state-of-the-art techniques. As a case study, we integrate our technique, HAPPY, with a state-of-the-art monitor, the Intel-adaptive open-page policy predictor employed by the Intel Xeon X5650, and a traditional hybrid page policy. We evaluate them across 70 memory-intensive workload mixes consisting of single-thread and multi-thread applications. The experimental results show that using the HAPPY encoding applied to the Intel-adaptive page closure policy can reduce the hardware overhead by 5X for the evaluated 64 GB memory (up to 40X for a 512 GB memory) while maintaining prediction accuracy.
[ { "created": "Sat, 12 Sep 2015 13:03:04 GMT", "version": "v1" } ]
2015-09-15
[ [ "Ghasempour", "Mohsen", "" ], [ "Jaleel", "Aamer", "" ], [ "Garside", "Jim", "" ], [ "Luján", "Mikel", "" ] ]
Memory controllers have used static page closure policies to decide whether a row should be left open (open-page policy) or closed immediately (close-page policy) after the row has been accessed. The appropriate choice for a particular access can reduce the average memory latency. However, since application access patterns change at run time, static page policies cannot guarantee to deliver optimum execution time. Hybrid page policies have been investigated as a means of covering these dynamic scenarios and are now implemented in state-of-the-art processors. Hybrid page policies switch between open-page and close-page policies while the application is running, by monitoring the access pattern of row hits/conflicts and predicting future behavior. Unfortunately, as the size of DRAM memory increases, fine-grained tracking and analysis of memory access patterns does not remain practical. We propose a compact memory address-based encoding technique which can improve or maintain the performance of DRAM page-closure predictors while reducing the hardware overhead in comparison with state-of-the-art techniques. As a case study, we integrate our technique, HAPPY, with a state-of-the-art monitor, the Intel-adaptive open-page policy predictor employed by the Intel Xeon X5650, and a traditional hybrid page policy. We evaluate them across 70 memory-intensive workload mixes consisting of single-thread and multi-thread applications. The experimental results show that using the HAPPY encoding applied to the Intel-adaptive page closure policy can reduce the hardware overhead by 5X for the evaluated 64 GB memory (up to 40X for a 512 GB memory) while maintaining prediction accuracy.
2403.17212
Matias Valdenegro-Toro
Matias Valdenegro-Toro and Mihir Mulye
Sanity Checks for Explanation Uncertainty
15 pages, 7 figures, 3 tables
null
null
null
cs.LG cs.AI
http://creativecommons.org/publicdomain/zero/1.0/
Explanations for machine learning models can be hard to interpret or be wrong. Combining an explanation method with an uncertainty estimation method produces explanation uncertainty. Evaluating explanation uncertainty is difficult. In this paper we propose sanity checks for uncertainty explanation methods, where weight and data randomization tests are defined for explanations with uncertainty, allowing for quick tests of combinations of uncertainty and explanation methods. We experimentally show the validity and effectiveness of these tests on the CIFAR10 and California Housing datasets, noting that ensembles seem to consistently pass both tests with Guided Backpropagation, Integrated Gradients, and LIME explanations.
[ { "created": "Mon, 25 Mar 2024 21:39:33 GMT", "version": "v1" } ]
2024-03-27
[ [ "Valdenegro-Toro", "Matias", "" ], [ "Mulye", "Mihir", "" ] ]
Explanations for machine learning models can be hard to interpret or be wrong. Combining an explanation method with an uncertainty estimation method produces explanation uncertainty. Evaluating explanation uncertainty is difficult. In this paper we propose sanity checks for uncertainty explanation methods, where weight and data randomization tests are defined for explanations with uncertainty, allowing for quick tests of combinations of uncertainty and explanation methods. We experimentally show the validity and effectiveness of these tests on the CIFAR10 and California Housing datasets, noting that ensembles seem to consistently pass both tests with Guided Backpropagation, Integrated Gradients, and LIME explanations.
1606.05725
Amirhossein Akbarnejad
Amirhossein Akbarnejad, Mahdieh Soleymani Baghshah
An Efficient Large-scale Semi-supervised Multi-label Classifier Capable of Handling Missing labels
null
null
null
null
cs.LG cs.AI stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Multi-label classification has received considerable interest in recent years. Multi-label classifiers have to address many problems, including: handling large-scale datasets with many instances and a large set of labels, compensating for missing label assignments in the training set, considering correlations between labels, and exploiting unlabeled data to improve prediction performance. To tackle datasets with a large set of labels, embedding-based methods have been proposed which seek to represent the label assignments in a low-dimensional space. Many state-of-the-art embedding-based methods use a linear dimensionality reduction to represent the label assignments in a low-dimensional space. However, by doing so, these methods actually neglect the tail labels - labels that are infrequently assigned to instances. We propose an embedding-based method that non-linearly embeds the label vectors using a stochastic approach, thereby predicting the tail labels more accurately. Moreover, the proposed method has excellent mechanisms for handling missing labels, dealing with large-scale datasets, and exploiting unlabeled data. To the best of our knowledge, our proposed method is the first multi-label classifier that simultaneously addresses all of the mentioned challenges. Experiments on real-world datasets show that our method outperforms state-of-the-art multi-label classifiers by a large margin, in terms of prediction performance as well as training time.
[ { "created": "Sat, 18 Jun 2016 07:49:13 GMT", "version": "v1" } ]
2016-06-21
[ [ "Akbarnejad", "Amirhossein", "" ], [ "Baghshah", "Mahdieh Soleymani", "" ] ]
Multi-label classification has received considerable interest in recent years. Multi-label classifiers have to address many problems, including: handling large-scale datasets with many instances and a large set of labels, compensating for missing label assignments in the training set, considering correlations between labels, and exploiting unlabeled data to improve prediction performance. To tackle datasets with a large set of labels, embedding-based methods have been proposed which seek to represent the label assignments in a low-dimensional space. Many state-of-the-art embedding-based methods use a linear dimensionality reduction to represent the label assignments in a low-dimensional space. However, by doing so, these methods actually neglect the tail labels - labels that are infrequently assigned to instances. We propose an embedding-based method that non-linearly embeds the label vectors using a stochastic approach, thereby predicting the tail labels more accurately. Moreover, the proposed method has excellent mechanisms for handling missing labels, dealing with large-scale datasets, and exploiting unlabeled data. To the best of our knowledge, our proposed method is the first multi-label classifier that simultaneously addresses all of the mentioned challenges. Experiments on real-world datasets show that our method outperforms state-of-the-art multi-label classifiers by a large margin, in terms of prediction performance as well as training time.
2204.03393
Han Hao
Dandan Jiang, Han Hao, Lu Yang, Xiang Chen, Wei Han, Bo Bai
TOSE: A Fast Capacity Determination Algorithm Based on Random Matrix Theory
Another version of this paper has been uploaded by another author
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Wireless network capacity is one of the most important performance metrics for wireless communication networks. Future wireless networks will be composed of an extremely large number of base stations (BSs) and users, and organized in the form of multiple clusters. Unfortunately, determining the average cluster capacity for such future wireless networks is difficult, and lacks both analytical expressions and fast algorithms. In this paper, we propose a fast algorithm, TOSE, to estimate the average cluster capacity based on random matrix theory (RMT). It avoids the exact eigenvalue derivations of large-dimensional matrices, which are complicated and inevitable in conventional capacity determination methods. Instead, fast eigenvalue estimations can be realized based on RMT in our TOSE algorithm. In addition, we derive analytical upper and lower bounds on the average cluster capacity. Our numerical experiments show that TOSE is faster than the conventional Cholesky decomposition method by at least three orders of magnitude. Besides, TOSE has superior generality, since it is independent of the distributions of BSs and users, and of the shape of network areas.
[ { "created": "Thu, 7 Apr 2022 12:29:06 GMT", "version": "v1" }, { "created": "Thu, 10 Nov 2022 06:17:58 GMT", "version": "v2" } ]
2022-11-11
[ [ "Jiang", "Dandan", "" ], [ "Hao", "Han", "" ], [ "Yang", "Lu", "" ], [ "Chen", "Xiang", "" ], [ "Han", "Wei", "" ], [ "Bai", "Bo", "" ] ]
Wireless network capacity is one of the most important performance metrics for wireless communication networks. Future wireless networks will be composed of an extremely large number of base stations (BSs) and users, and organized in the form of multiple clusters. Unfortunately, determining the average cluster capacity for such future wireless networks is difficult, and lacks both analytical expressions and fast algorithms. In this paper, we propose a fast algorithm, TOSE, to estimate the average cluster capacity based on random matrix theory (RMT). It avoids the exact eigenvalue derivations of large-dimensional matrices, which are complicated and inevitable in conventional capacity determination methods. Instead, fast eigenvalue estimations can be realized based on RMT in our TOSE algorithm. In addition, we derive analytical upper and lower bounds on the average cluster capacity. Our numerical experiments show that TOSE is faster than the conventional Cholesky decomposition method by at least three orders of magnitude. Besides, TOSE has superior generality, since it is independent of the distributions of BSs and users, and of the shape of network areas.
1004.4917
Pablo Piantanida
Pablo Piantanida and Shlomo Shamai (Shitz)
On the Capacity of Compound State-Dependent Channels with States Known at the Transmitter
To appear in Proc. of IEEE International Symposium on Information Theory (ISIT 2010)
null
null
null
cs.IT math.IT math.PR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper investigates the capacity of compound state-dependent channels with non-causal state information available only at the transmitter. A new lower bound on the capacity of this class of channels is derived. This bound is shown to be tight for the special case of compound channels with stochastically degraded components, yielding the full characterization of the capacity. Specific results are derived for the compound Gaussian Dirty-Paper (GDP) channel. This model consists of an additive white Gaussian noise (AWGN) channel corrupted by an additive Gaussian interfering signal, known at the transmitter only, where the input and the state signals are affected by fading coefficients whose realizations are unknown at the transmitter. Our bounds are shown to be tight for specific cases. Applications of these results arise in a variety of wireless scenarios such as multicast channels, cognitive radio, and problems with interference cancellation.
[ { "created": "Tue, 27 Apr 2010 21:44:43 GMT", "version": "v1" } ]
2010-04-29
[ [ "Piantanida", "Pablo", "", "Shitz" ], [ "Shamai", "Shlomo", "", "Shitz" ] ]
This paper investigates the capacity of compound state-dependent channels with non-causal state information available only at the transmitter. A new lower bound on the capacity of this class of channels is derived. This bound is shown to be tight for the special case of compound channels with stochastically degraded components, yielding the full characterization of the capacity. Specific results are derived for the compound Gaussian Dirty-Paper (GDP) channel. This model consists of an additive white Gaussian noise (AWGN) channel corrupted by an additive Gaussian interfering signal, known at the transmitter only, where the input and the state signals are affected by fading coefficients whose realizations are unknown at the transmitter. Our bounds are shown to be tight for specific cases. Applications of these results arise in a variety of wireless scenarios such as multicast channels, cognitive radio, and problems with interference cancellation.
2303.13918
Aasa Feragen
Kamil Mikolaj, Manxi Lin, Zahra Bashir, Morten Bo S{\o}ndergaard Svendsen, Martin Tolsgaard, Anders Nymark and Aasa Feragen
Removing confounding information from fetal ultrasound images
Fetal ultrasound, confounders, shortcut learning
null
null
null
cs.CV cs.LG
http://creativecommons.org/licenses/by/4.0/
Confounding information in the form of text or markings embedded in medical images can severely affect the training of diagnostic deep learning algorithms. However, data collected for clinical purposes often have such markings embedded in them. In dermatology, known examples include drawings or rulers that are overrepresented in images of malignant lesions. In this paper, we encounter text and calipers placed on the images found in national databases containing fetal screening ultrasound scans, which correlate with standard planes to be predicted. In order to utilize the vast amounts of data available in these databases, we develop and validate a series of methods for minimizing the confounding effects of embedded text and calipers on deep learning algorithms designed for ultrasound, using standard plane classification as a test case.
[ { "created": "Fri, 24 Mar 2023 11:13:33 GMT", "version": "v1" } ]
2023-03-27
[ [ "Mikolaj", "Kamil", "" ], [ "Lin", "Manxi", "" ], [ "Bashir", "Zahra", "" ], [ "Svendsen", "Morten Bo Søndergaard", "" ], [ "Tolsgaard", "Martin", "" ], [ "Nymark", "Anders", "" ], [ "Feragen", "Aasa", "" ] ]
Confounding information in the form of text or markings embedded in medical images can severely affect the training of diagnostic deep learning algorithms. However, data collected for clinical purposes often have such markings embedded in them. In dermatology, known examples include drawings or rulers that are overrepresented in images of malignant lesions. In this paper, we encounter text and calipers placed on the images found in national databases containing fetal screening ultrasound scans, which correlate with standard planes to be predicted. In order to utilize the vast amounts of data available in these databases, we develop and validate a series of methods for minimizing the confounding effects of embedded text and calipers on deep learning algorithms designed for ultrasound, using standard plane classification as a test case.
2106.15355
Ana Valeria Gonzalez
Ana Valeria Gonzalez, Anna Rogers, Anders S{\o}gaard
On the Interaction of Belief Bias and Explanations
Accepted at Findings of ACL 2021
null
null
null
cs.CL cs.HC
http://creativecommons.org/licenses/by/4.0/
A myriad of explainability methods have been proposed in recent years, but there is little consensus on how to evaluate them. While automatic metrics allow for quick benchmarking, it isn't clear how such metrics reflect human interaction with explanations. Human evaluation is of paramount importance, but previous protocols fail to account for belief biases affecting human performance, which may lead to misleading conclusions. We provide an overview of belief bias, its role in human evaluation, and ideas for NLP practitioners on how to account for it. For two experimental paradigms, we present a case study of gradient-based explainability introducing simple ways to account for humans' prior beliefs: models of varying quality and adversarial examples. We show that conclusions about the highest performing methods change when introducing such controls, pointing to the importance of accounting for belief bias in evaluation.
[ { "created": "Tue, 29 Jun 2021 12:49:42 GMT", "version": "v1" } ]
2021-06-30
[ [ "Gonzalez", "Ana Valeria", "" ], [ "Rogers", "Anna", "" ], [ "Søgaard", "Anders", "" ] ]
A myriad of explainability methods have been proposed in recent years, but there is little consensus on how to evaluate them. While automatic metrics allow for quick benchmarking, it isn't clear how such metrics reflect human interaction with explanations. Human evaluation is of paramount importance, but previous protocols fail to account for belief biases affecting human performance, which may lead to misleading conclusions. We provide an overview of belief bias, its role in human evaluation, and ideas for NLP practitioners on how to account for it. For two experimental paradigms, we present a case study of gradient-based explainability introducing simple ways to account for humans' prior beliefs: models of varying quality and adversarial examples. We show that conclusions about the highest performing methods change when introducing such controls, pointing to the importance of accounting for belief bias in evaluation.
2301.01178
Stefano Salsano
Francesco Lombardo, Stefano Salsano, Ahmed Abdelsalam, Daniel Bernier, Clarence Filsfils
Extending Kubernetes Networking to make use of Segment Routing over IPv6 (SRv6)
Submitted paper under review
null
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Kubernetes is the leading platform for orchestrating containerized applications. In this paper, we extend Kubernetes networking to make use of SRv6, a feature-rich overlay networking mechanism. Integration with SRv6 can be very beneficial when Kubernetes is used in large-scale and distributed multi-datacenter scenarios. We have focused on the Calico CNI plugin, one of the most used Kubernetes networking plugins. In particular, we consider Calico-VPP, a version of the Calico plugin based on the VPP (Vector Packet Processing) data plane, which provides support for SRv6 operations with very high performance. The proposed SRv6 overlay networking solution for Kubernetes offers several advantages compared to a traditional overlay (e.g. IP in IP), in particular the possibility to use Traffic Engineering for the overlay tunnels. In the paper, we provide the architecture and the detailed design of the SRv6-based overlay and describe our open source implementation. We consider the research and technological question of how to extend Kubernetes networking to support large-scale and distributed multi-datacenter scenarios, which is an important goal for Cloud and Network providers. In this respect, we compare two different solutions for the control plane architecture of the SRv6-capable Kubernetes networking plugin, one based on the BGP routing protocol and another one based on extending the Kubernetes control plane. Finally, we report a performance evaluation of the data plane of the proposed SRv6 overlay networking, showing that it has comparable performance to existing overlay solutions (e.g. IP in IP), while offering a richer set of features.
[ { "created": "Tue, 3 Jan 2023 16:19:16 GMT", "version": "v1" } ]
2023-01-04
[ [ "Lombardo", "Francesco", "" ], [ "Salsano", "Stefano", "" ], [ "Abdelsalam", "Ahmed", "" ], [ "Bernier", "Daniel", "" ], [ "Filsfils", "Clarence", "" ] ]
Kubernetes is the leading platform for orchestrating containerized applications. In this paper, we extend Kubernetes networking to make use of SRv6, a feature-rich overlay networking mechanism. Integration with SRv6 can be very beneficial when Kubernetes is used in large-scale and distributed multi-datacenter scenarios. We have focused on the Calico CNI plugin, one of the most used Kubernetes networking plugins. In particular, we consider Calico-VPP, a version of the Calico plugin based on the VPP (Vector Packet Processing) data plane, which provides support for SRv6 operations with very high performance. The proposed SRv6 overlay networking solution for Kubernetes offers several advantages compared to a traditional overlay (e.g. IP in IP), in particular the possibility to use Traffic Engineering for the overlay tunnels. In the paper, we provide the architecture and the detailed design of the SRv6-based overlay and describe our open source implementation. We consider the research and technological question of how to extend Kubernetes networking to support large-scale and distributed multi-datacenter scenarios, which is an important goal for Cloud and Network providers. In this respect, we compare two different solutions for the control plane architecture of the SRv6-capable Kubernetes networking plugin, one based on the BGP routing protocol and another one based on extending the Kubernetes control plane. Finally, we report a performance evaluation of the data plane of the proposed SRv6 overlay networking, showing that it has comparable performance to existing overlay solutions (e.g. IP in IP), while offering a richer set of features.
2206.10397
Bingheng Wang
Bingheng Wang, Zhengtian Ma, Shupeng Lai, and Lin Zhao
Neural Moving Horizon Estimation for Robust Flight Control
This paper (not the final version) has been accepted for publication in the IEEE Transactions on Robotics
null
10.1109/TRO.2023.3331064
null
cs.RO cs.LG cs.SY eess.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Estimating and reacting to disturbances is crucial for robust flight control of quadrotors. Existing estimators typically require significant tuning for a specific flight scenario or training with extensive ground-truth disturbance data to achieve satisfactory performance. In this paper, we propose a neural moving horizon estimator (NeuroMHE) that can automatically tune its key parameters modeled by a neural network and adapt to different flight scenarios. We achieve this by deriving the analytical gradients of the MHE estimates with respect to the MHE weighting matrices, which enables a seamless embedding of the MHE as a learnable layer into the neural network for highly effective learning. Interestingly, we show that the gradients can be computed efficiently using a Kalman filter in a recursive form. Moreover, we develop a model-based policy gradient algorithm to train NeuroMHE directly from the quadrotor trajectory tracking error without needing the ground-truth disturbance data. The effectiveness of NeuroMHE is verified extensively via both simulations and physical experiments on quadrotors in various challenging flights. Notably, NeuroMHE outperforms a state-of-the-art neural network-based estimator, reducing force estimation errors by up to 76.7%, while using a portable neural network that has only 7.7% of the learnable parameters of the latter. The proposed method is general and can be applied to robust adaptive control of other robotic systems.
[ { "created": "Tue, 21 Jun 2022 13:43:24 GMT", "version": "v1" }, { "created": "Mon, 11 Sep 2023 16:15:00 GMT", "version": "v10" }, { "created": "Mon, 18 Sep 2023 13:10:07 GMT", "version": "v11" }, { "created": "Tue, 10 Oct 2023 00:38:54 GMT", "version": "v12" }, { "created": "Tue, 14 Nov 2023 13:04:02 GMT", "version": "v13" }, { "created": "Wed, 22 Jun 2022 14:37:56 GMT", "version": "v2" }, { "created": "Thu, 23 Jun 2022 04:31:33 GMT", "version": "v3" }, { "created": "Mon, 27 Jun 2022 13:10:36 GMT", "version": "v4" }, { "created": "Wed, 29 Jun 2022 08:01:17 GMT", "version": "v5" }, { "created": "Fri, 1 Jul 2022 14:23:56 GMT", "version": "v6" }, { "created": "Thu, 7 Jul 2022 09:06:28 GMT", "version": "v7" }, { "created": "Fri, 8 Jul 2022 10:06:57 GMT", "version": "v8" }, { "created": "Mon, 11 Jul 2022 17:15:41 GMT", "version": "v9" } ]
2023-11-15
[ [ "Wang", "Bingheng", "" ], [ "Ma", "Zhengtian", "" ], [ "Lai", "Shupeng", "" ], [ "Zhao", "Lin", "" ] ]
Estimating and reacting to disturbances is crucial for robust flight control of quadrotors. Existing estimators typically require significant tuning for a specific flight scenario or training with extensive ground-truth disturbance data to achieve satisfactory performance. In this paper, we propose a neural moving horizon estimator (NeuroMHE) that can automatically tune its key parameters modeled by a neural network and adapt to different flight scenarios. We achieve this by deriving the analytical gradients of the MHE estimates with respect to the MHE weighting matrices, which enables a seamless embedding of the MHE as a learnable layer into the neural network for highly effective learning. Interestingly, we show that the gradients can be computed efficiently using a Kalman filter in a recursive form. Moreover, we develop a model-based policy gradient algorithm to train NeuroMHE directly from the quadrotor trajectory tracking error without needing the ground-truth disturbance data. The effectiveness of NeuroMHE is verified extensively via both simulations and physical experiments on quadrotors in various challenging flights. Notably, NeuroMHE outperforms a state-of-the-art neural network-based estimator, reducing force estimation errors by up to 76.7%, while using a portable neural network that has only 7.7% of the learnable parameters of the latter. The proposed method is general and can be applied to robust adaptive control of other robotic systems.
1905.05381
Anh Duc Le Dr.
Anh Duc Le, Hung Tuan Nguyen and Masaki Nakagawa
End to End Recognition System for Recognizing Offline Unconstrained Vietnamese Handwriting
null
null
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Inspired by recent successes in neural machine translation and image caption generation, we present an attention-based encoder-decoder model (AED) to recognize Vietnamese handwritten text. The model is composed of two parts: a DenseNet for extracting invariant features, and a Long Short-Term Memory network (LSTM) with an incorporated attention model for generating output text (LSTM decoder), with the CNN part connected to the attention model. The input of the CNN part is a handwritten text image, and the target of the LSTM decoder is the corresponding text of the input image. Our model is trained end-to-end to predict the text from a given input image, since all the parts are differentiable components. In the experiment section, we evaluate our proposed AED model on the VNOnDB-Word and VNOnDB-Line datasets to verify its efficiency. The experimental results show that our model achieves a 12.30% word error rate without using any language model. This result is competitive with the handwriting recognition system provided by Google in the Vietnamese Online Handwritten Text Recognition competition.
[ { "created": "Tue, 14 May 2019 03:59:46 GMT", "version": "v1" } ]
2019-05-15
[ [ "Le", "Anh Duc", "" ], [ "Nguyen", "Hung Tuan", "" ], [ "Nakagawa", "Masaki", "" ] ]
Inspired by recent successes in neural machine translation and image caption generation, we present an attention-based encoder-decoder model (AED) to recognize Vietnamese handwritten text. The model is composed of two parts: a DenseNet for extracting invariant features, and a Long Short-Term Memory network (LSTM) with an incorporated attention model for generating output text (LSTM decoder), with the CNN part connected to the attention model. The input of the CNN part is a handwritten text image, and the target of the LSTM decoder is the corresponding text of the input image. Our model is trained end-to-end to predict the text from a given input image, since all the parts are differentiable components. In the experiment section, we evaluate our proposed AED model on the VNOnDB-Word and VNOnDB-Line datasets to verify its efficiency. The experimental results show that our model achieves a 12.30% word error rate without using any language model. This result is competitive with the handwriting recognition system provided by Google in the Vietnamese Online Handwritten Text Recognition competition.
2308.16400
Hao Lei
Hao Lei, Jiayi Zhang, Huahua Xiao, Xiaodan Zhang, Bo Ai, and Derrick Wing Kwan Ng
Channel Estimation for XL-MIMO Systems with Polar-Domain Multi-Scale Residual Dense Network
null
null
null
null
cs.IT eess.SP math.IT
http://creativecommons.org/licenses/by-nc-sa/4.0/
Extremely large-scale multiple-input multiple-output (XL-MIMO) is a promising technique to enable versatile applications for future wireless communications. To realize the huge potential performance gain, accurate channel state information is a fundamental technical prerequisite. In conventional massive MIMO, the channel is often modeled by the far-field planar wavefront with rich sparsity in the angular domain, which facilitates the design of low-complexity channel estimation. However, this sparsity is not conspicuous in XL-MIMO systems due to the non-negligible near-field spherical wavefront. To address the inherent performance loss of the angular-domain channel estimation schemes, we first propose the polar-domain multiple residual dense network (P-MRDN) for XL-MIMO systems based on the polar-domain sparsity of the near-field channel by improving the existing MRDN scheme. Furthermore, a polar-domain multi-scale residual dense network (P-MSRDN) is designed to improve the channel estimation accuracy. Finally, simulation results reveal the superior performance of the proposed schemes compared with existing benchmark schemes and the minimal influence of the channel sparsity on the proposed schemes.
[ { "created": "Thu, 31 Aug 2023 02:11:08 GMT", "version": "v1" }, { "created": "Sat, 2 Sep 2023 03:27:11 GMT", "version": "v2" } ]
2023-09-06
[ [ "Lei", "Hao", "" ], [ "Zhang", "Jiayi", "" ], [ "Xiao", "Huahua", "" ], [ "Zhang", "Xiaodan", "" ], [ "Ai", "Bo", "" ], [ "Ng", "Derrick Wing Kwan", "" ] ]
Extremely large-scale multiple-input multiple-output (XL-MIMO) is a promising technique to enable versatile applications for future wireless communications. To realize the huge potential performance gain, accurate channel state information is a fundamental technical prerequisite. In conventional massive MIMO, the channel is often modeled by the far-field planar wavefront with rich sparsity in the angular domain, which facilitates the design of low-complexity channel estimation. However, this sparsity is not conspicuous in XL-MIMO systems due to the non-negligible near-field spherical wavefront. To address the inherent performance loss of the angular-domain channel estimation schemes, we first propose the polar-domain multiple residual dense network (P-MRDN) for XL-MIMO systems based on the polar-domain sparsity of the near-field channel by improving the existing MRDN scheme. Furthermore, a polar-domain multi-scale residual dense network (P-MSRDN) is designed to improve the channel estimation accuracy. Finally, simulation results reveal the superior performance of the proposed schemes compared with existing benchmark schemes and the minimal influence of the channel sparsity on the proposed schemes.
2403.06938
Ryan Wong
Ryan Wong, Nikita Kim, Kevin Higgs, Sapan Agarwal, Engin Ipek, Saugata Ghose, Ben Feinberg
TCAM-SSD: A Framework for Search-Based Computing in Solid-State Drives
null
null
null
null
cs.AR
http://creativecommons.org/licenses/by/4.0/
As the amount of data produced in society continues to grow at an exponential rate, modern applications are incurring significant performance and energy penalties due to high data movement between the CPU and memory/storage. While processing in main memory can alleviate these penalties, it is becoming increasingly difficult to keep large datasets entirely in main memory. This has led to a recent push for in-storage computation, where processing is performed inside the storage device. We propose TCAM-SSD, a new framework for search-based computation inside the NAND flash memory arrays of a conventional solid-state drive (SSD), which requires lightweight modifications to only the array periphery and firmware. TCAM-SSD introduces a search manager and link table, which can logically partition the NAND flash memory's contents into search-enabled regions and standard storage regions. Together, these light firmware changes enable TCAM-SSD to seamlessly handle block I/O operations, in addition to new search operations, thereby reducing end-to-end execution time and total data movement. We provide an NVMe-compatible interface that provides programmers with the ability to dynamically allocate data on and make use of TCAM-SSD, allowing the system to be leveraged by a wide variety of applications. We evaluate three example use cases of TCAM-SSD to demonstrate its benefits. For transactional databases, TCAM-SSD can mitigate the performance penalties for applications with large datasets, achieving a 60.9% speedup over a conventional system that retrieves data from the SSD and computes using the CPU. For database analytics, TCAM-SSD provides an average speedup of 17.7x over a conventional system for a collection of analytical queries. For graph analytics, we combine TCAM-SSD's associative search with a sparse data structure, speeding up graph computing for larger-than-memory datasets by 14.5%.
[ { "created": "Mon, 11 Mar 2024 17:25:01 GMT", "version": "v1" } ]
2024-03-12
[ [ "Wong", "Ryan", "" ], [ "Kim", "Nikita", "" ], [ "Higgs", "Kevin", "" ], [ "Agarwal", "Sapan", "" ], [ "Ipek", "Engin", "" ], [ "Ghose", "Saugata", "" ], [ "Feinberg", "Ben", "" ] ]
As the amount of data produced in society continues to grow at an exponential rate, modern applications are incurring significant performance and energy penalties due to high data movement between the CPU and memory/storage. While processing in main memory can alleviate these penalties, it is becoming increasingly difficult to keep large datasets entirely in main memory. This has led to a recent push for in-storage computation, where processing is performed inside the storage device. We propose TCAM-SSD, a new framework for search-based computation inside the NAND flash memory arrays of a conventional solid-state drive (SSD), which requires lightweight modifications to only the array periphery and firmware. TCAM-SSD introduces a search manager and link table, which can logically partition the NAND flash memory's contents into search-enabled regions and standard storage regions. Together, these light firmware changes enable TCAM-SSD to seamlessly handle block I/O operations, in addition to new search operations, thereby reducing end-to-end execution time and total data movement. We provide an NVMe-compatible interface that provides programmers with the ability to dynamically allocate data on and make use of TCAM-SSD, allowing the system to be leveraged by a wide variety of applications. We evaluate three example use cases of TCAM-SSD to demonstrate its benefits. For transactional databases, TCAM-SSD can mitigate the performance penalties for applications with large datasets, achieving a 60.9% speedup over a conventional system that retrieves data from the SSD and computes using the CPU. For database analytics, TCAM-SSD provides an average speedup of 17.7x over a conventional system for a collection of analytical queries. For graph analytics, we combine TCAM-SSD's associative search with a sparse data structure, speeding up graph computing for larger-than-memory datasets by 14.5%.
1303.2933
Pedro Henrique Juliano Nardelli
Pedro H. J. Nardelli, Paulo Cardieri, William A. Kretzschmar Jr. and Matti Latva-aho
Interference Networks: A Complex System View
null
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents an unusual view of interference wireless networks based on complex-system thinking. To proceed with this analysis, a literature review of the different applications of complex systems is first presented to illustrate how such an approach can be used in a wide range of research topics, from economics to linguistics. Then the problem of quantifying the fundamental limits of wireless systems in which co-channel interference is the main limiting factor is described and contextualized from the perspective of complex systems. Specifically, some possible internal and external pressures that the network elements may experience are identified, for example, queue stability, maximum packet loss rate and transmit power constraints. Other important external factors, such as mobility and incoming traffic, are also pointed out. As a case study, a decentralized point-to-point interference network is described, and several claims about the optimal design setting for different network states and under two mobility conditions, namely quasi-static and highly mobile, are stated based on results found in the literature. Using these claims as a background, the design of a robust adaptive algorithm that each network element should run is investigated.
[ { "created": "Sat, 9 Mar 2013 05:55:46 GMT", "version": "v1" } ]
2013-03-13
[ [ "Nardelli", "Pedro H. J.", "" ], [ "Cardieri", "Paulo", "" ], [ "Kretzschmar", "William A.", "Jr." ], [ "Latva-aho", "Matti", "" ] ]
This paper presents an unusual view of interference wireless networks based on complex-system thinking. To proceed with this analysis, a literature review of the different applications of complex systems is first presented to illustrate how such an approach can be used in a wide range of research topics, from economics to linguistics. Then the problem of quantifying the fundamental limits of wireless systems in which co-channel interference is the main limiting factor is described and contextualized from the perspective of complex systems. Specifically, some possible internal and external pressures that the network elements may experience are identified, for example, queue stability, maximum packet loss rate and transmit power constraints. Other important external factors, such as mobility and incoming traffic, are also pointed out. As a case study, a decentralized point-to-point interference network is described, and several claims about the optimal design setting for different network states and under two mobility conditions, namely quasi-static and highly mobile, are stated based on results found in the literature. Using these claims as a background, the design of a robust adaptive algorithm that each network element should run is investigated.
2110.03578
Mohamed Afham
Mohamed Afham, Udith Haputhanthri, Jathurshan Pradeepkumar, Mithunjha Anandakumar, Ashwin De Silva, Chamira Edussooriya
Towards Accurate Cross-Domain In-Bed Human Pose Estimation
Code is available at https://github.com/MohamedAfham/CD_HPE
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Human behavioral monitoring during sleep is essential for various medical applications. The majority of contactless human pose estimation algorithms are based on the RGB modality, making them ineffective for in-bed pose estimation due to occlusions by blankets and varying illumination conditions. Long-wavelength infrared (LWIR) modality-based pose estimation algorithms overcome the aforementioned challenges; however, ground-truth pose generation by a human annotator under such conditions is not feasible. A feasible solution to address this issue is to transfer the knowledge learned from images with pose labels and no occlusions, and adapt it towards real-world conditions (occlusions due to blankets). In this paper, we propose a novel learning strategy comprising two-fold data augmentation to reduce the cross-domain discrepancy and knowledge distillation to learn the distribution of unlabeled images in real-world conditions. Our experiments and analysis show the effectiveness of our approach over multiple standard human pose estimation baselines.
[ { "created": "Thu, 7 Oct 2021 15:54:46 GMT", "version": "v1" } ]
2021-10-08
[ [ "Afham", "Mohamed", "" ], [ "Haputhanthri", "Udith", "" ], [ "Pradeepkumar", "Jathurshan", "" ], [ "Anandakumar", "Mithunjha", "" ], [ "De Silva", "Ashwin", "" ], [ "Edussooriya", "Chamira", "" ] ]
Human behavioral monitoring during sleep is essential for various medical applications. The majority of contactless human pose estimation algorithms are based on the RGB modality, making them ineffective for in-bed pose estimation due to occlusions by blankets and varying illumination conditions. Long-wavelength infrared (LWIR) modality-based pose estimation algorithms overcome the aforementioned challenges; however, ground-truth pose generation by a human annotator under such conditions is not feasible. A feasible solution to address this issue is to transfer the knowledge learned from images with pose labels and no occlusions, and adapt it towards real-world conditions (occlusions due to blankets). In this paper, we propose a novel learning strategy comprising two-fold data augmentation to reduce the cross-domain discrepancy and knowledge distillation to learn the distribution of unlabeled images in real-world conditions. Our experiments and analysis show the effectiveness of our approach over multiple standard human pose estimation baselines.
2002.01535
Shrey Desai
Shrey Desai, Geoffrey Goh, Arun Babu, Ahmed Aly
Lightweight Convolutional Representations for On-Device Natural Language Processing
Accepted to MLSys 2020
null
null
null
cs.CL cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The increasing computational and memory complexities of deep neural networks have made it difficult to deploy them on low-resource electronic devices (e.g., mobile phones, tablets, wearables). Practitioners have developed numerous model compression methods to address these concerns, but few have condensed input representations themselves. In this work, we propose a fast, accurate, and lightweight convolutional representation that can be swapped into any neural model and compressed significantly (up to 32x) with a negligible reduction in performance. In addition, we show gains over recurrent representations when considering resource-centric metrics (e.g., model file size, latency, memory usage) on a Samsung Galaxy S9.
[ { "created": "Tue, 4 Feb 2020 21:02:11 GMT", "version": "v1" } ]
2020-02-06
[ [ "Desai", "Shrey", "" ], [ "Goh", "Geoffrey", "" ], [ "Babu", "Arun", "" ], [ "Aly", "Ahmed", "" ] ]
The increasing computational and memory complexities of deep neural networks have made it difficult to deploy them on low-resource electronic devices (e.g., mobile phones, tablets, wearables). Practitioners have developed numerous model compression methods to address these concerns, but few have condensed input representations themselves. In this work, we propose a fast, accurate, and lightweight convolutional representation that can be swapped into any neural model and compressed significantly (up to 32x) with a negligible reduction in performance. In addition, we show gains over recurrent representations when considering resource-centric metrics (e.g., model file size, latency, memory usage) on a Samsung Galaxy S9.
0806.3787
Ted Pedersen
Ted Pedersen (University of Minnesota, Duluth)
Computational Approaches to Measuring the Similarity of Short Contexts : A Review of Applications and Methods
23 pages
University of Minnesota Supercomputing Institute Research Report UMSI 2010/118, October 2010
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Measuring the similarity of short written contexts is a fundamental problem in Natural Language Processing. This article provides a unifying framework by which short context problems can be categorized both by their intended application and proposed solution. The goal is to show that various problems and methodologies that appear quite different on the surface are in fact very closely related. The axes by which these categorizations are made include the format of the contexts (headed versus headless), the way in which the contexts are to be measured (first-order versus second-order similarity), and the information used to represent the features in the contexts (micro versus macro views). The unifying thread that binds together many short context applications and methods is the fact that similarity decisions must be made between contexts that share few (if any) words in common.
[ { "created": "Mon, 23 Jun 2008 23:27:20 GMT", "version": "v1" }, { "created": "Mon, 18 Oct 2010 19:08:13 GMT", "version": "v2" } ]
2010-10-19
[ [ "Pedersen", "Ted", "", "University of Minnesota, Duluth" ] ]
Measuring the similarity of short written contexts is a fundamental problem in Natural Language Processing. This article provides a unifying framework by which short context problems can be categorized both by their intended application and proposed solution. The goal is to show that various problems and methodologies that appear quite different on the surface are in fact very closely related. The axes by which these categorizations are made include the format of the contexts (headed versus headless), the way in which the contexts are to be measured (first-order versus second-order similarity), and the information used to represent the features in the contexts (micro versus macro views). The unifying thread that binds together many short context applications and methods is the fact that similarity decisions must be made between contexts that share few (if any) words in common.
cs/0602040
Pierre-Alain Masson
Samir Chouali (LIFC), Jacques Julliand (LIFC), Pierre-Alain Masson (LIFC), Fran\c{c}oise Bellegarde (LIFC)
PLTL Partitioned Model Checking for Reactive Systems under Fairness Assumptions
null
ACM Transactions on Embedded Computing Systems 4(2) (2005) 267-301
10.1145/1067915.1067918
null
cs.LO
null
We are interested in verifying dynamic properties of finite state reactive systems under fairness assumptions by model checking. The systems we want to verify are specified through a top-down refinement process. In order to deal with the state explosion problem, we have proposed in previous works to partition the reachability graph, and to perform the verification on each part separately. Moreover, we have defined a class, called Bmod, of dynamic properties that are verifiable by parts, whatever the partition. We decide if a property P belongs to Bmod by looking at the form of the Buchi automaton that accepts the negation of P. However, when a property P belongs to Bmod, the property f => P, where f is a fairness assumption, does not necessarily belong to Bmod. In this paper, we propose to use the refinement process in order to build the parts on which the verification has to be performed. We then show that with such a partition, if a property P is verifiable by parts and if f is the expression of the fairness assumptions on a system, then the property f => P is still verifiable by parts. This approach is illustrated by its application to the chip card protocol T=1 using the B engineering design language.
[ { "created": "Fri, 10 Feb 2006 14:48:29 GMT", "version": "v1" } ]
2011-11-10
[ [ "Chouali", "Samir", "", "LIFC" ], [ "Julliand", "Jacques", "", "LIFC" ], [ "Masson", "Pierre-Alain", "", "LIFC" ], [ "Bellegarde", "Françoise", "", "LIFC" ] ]
We are interested in verifying dynamic properties of finite state reactive systems under fairness assumptions by model checking. The systems we want to verify are specified through a top-down refinement process. In order to deal with the state explosion problem, we have proposed in previous works to partition the reachability graph, and to perform the verification on each part separately. Moreover, we have defined a class, called Bmod, of dynamic properties that are verifiable by parts, whatever the partition. We decide if a property P belongs to Bmod by looking at the form of the Buchi automaton that accepts the negation of P. However, when a property P belongs to Bmod, the property f => P, where f is a fairness assumption, does not necessarily belong to Bmod. In this paper, we propose to use the refinement process in order to build the parts on which the verification has to be performed. We then show that with such a partition, if a property P is verifiable by parts and if f is the expression of the fairness assumptions on a system, then the property f => P is still verifiable by parts. This approach is illustrated by its application to the chip card protocol T=1 using the B engineering design language.
0911.1508
Sana Ullah
Ahasanun Nessa, Qinghai Yang, Sana Ullah, Humaun Kabir, and Kyung Sup Kwak
Performance Analysis of Two-Hop Cooperative MIMO transmission with Relay Selection in Rayleigh Fading Channel
5 figures, 4th International Conference on Wireless Communications, Networking and Mobile Computing, 2008. WiCOM '08
null
10.1109/WiCom.2008.122
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Wireless relaying is one of the promising solutions to overcome channel impairments and provide the high-data-rate coverage envisioned for beyond-3G mobile communications. In this paper we present an end-to-end BER performance analysis of dual-hop wireless communication systems equipped with multiple decode-and-forward relays over the Rayleigh fading channel with relay selection. We select the best relay based on end-to-end channel conditions. We apply orthogonal space-time block coding (OSTBC) at the source, and also show how multiple antennas at the source terminal affect the end-to-end BER performance. This intermediate relaying technique covers long distances where the destination is out of reach of the source.
[ { "created": "Sun, 8 Nov 2009 10:30:32 GMT", "version": "v1" } ]
2009-11-10
[ [ "Nessa", "Ahasanun", "" ], [ "Yang", "Qinghai", "" ], [ "Ullah", "Sana", "" ], [ "Kabir", "Humaun", "" ], [ "Kwak", "Kyung Sup", "" ] ]
Wireless relaying is one of the promising solutions to overcome channel impairments and provide the high-data-rate coverage envisioned for beyond-3G mobile communications. In this paper we present an end-to-end BER performance analysis of dual-hop wireless communication systems equipped with multiple decode-and-forward relays over the Rayleigh fading channel with relay selection. We select the best relay based on end-to-end channel conditions. We apply orthogonal space-time block coding (OSTBC) at the source, and also show how multiple antennas at the source terminal affect the end-to-end BER performance. This intermediate relaying technique covers long distances where the destination is out of reach of the source.
1502.03248
Anna Harutyunyan
Anna Harutyunyan and Tim Brys and Peter Vrancx and Ann Nowe
Off-Policy Reward Shaping with Ensembles
To be presented at ALA-15. Short version to appear at AAMAS-15
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Potential-based reward shaping (PBRS) is an effective and popular technique to speed up reinforcement learning by leveraging domain knowledge. While PBRS is proven to always preserve optimal policies, its effect on learning speed is determined by the quality of its potential function, which, in turn, depends on both the underlying heuristic and the scale. Knowing which heuristic will prove effective requires testing the options beforehand, and determining the appropriate scale requires tuning, both of which introduce additional sample complexity. We formulate a PBRS framework that improves learning speed, but does not incur extra sample complexity. For this, we propose to simultaneously learn an ensemble of policies, shaped w.r.t. many heuristics and on a range of scales. The target policy is then obtained by voting. The ensemble needs to be able to efficiently and reliably learn off-policy: requirements fulfilled by the recent Horde architecture, which we take as our basis. We demonstrate empirically that (1) our ensemble policy outperforms both the base policy, and its single-heuristic components, and (2) an ensemble over a general range of scales performs at least as well as one with optimally tuned components.
[ { "created": "Wed, 11 Feb 2015 10:27:15 GMT", "version": "v1" }, { "created": "Mon, 23 Mar 2015 13:35:59 GMT", "version": "v2" } ]
2015-03-24
[ [ "Harutyunyan", "Anna", "" ], [ "Brys", "Tim", "" ], [ "Vrancx", "Peter", "" ], [ "Nowe", "Ann", "" ] ]
Potential-based reward shaping (PBRS) is an effective and popular technique to speed up reinforcement learning by leveraging domain knowledge. While PBRS is proven to always preserve optimal policies, its effect on learning speed is determined by the quality of its potential function, which, in turn, depends on both the underlying heuristic and the scale. Knowing which heuristic will prove effective requires testing the options beforehand, and determining the appropriate scale requires tuning, both of which introduce additional sample complexity. We formulate a PBRS framework that improves learning speed, but does not incur extra sample complexity. For this, we propose to simultaneously learn an ensemble of policies, shaped w.r.t. many heuristics and on a range of scales. The target policy is then obtained by voting. The ensemble needs to be able to efficiently and reliably learn off-policy: requirements fulfilled by the recent Horde architecture, which we take as our basis. We demonstrate empirically that (1) our ensemble policy outperforms both the base policy, and its single-heuristic components, and (2) an ensemble over a general range of scales performs at least as well as one with optimally tuned components.
2206.11495
Laura Kovacs
Andreas Humenberger and Daneshvar Amrollahi and Nikolaj Bj{\o}rner and Laura Kov\'acs
Algebra-Based Reasoning for Loop Synthesis
This paper is an extended version of the "Algebra-Based Loop Synthesis" manuscript published at iFM 2020. arXiv admin note: substantial text overlap with arXiv:2004.11787
null
10.1145/3527458
null
cs.LO
http://creativecommons.org/licenses/by/4.0/
Provably correct software is one of the key challenges of our software-driven society. Program synthesis -- the task of constructing a program satisfying a given specification -- is one strategy for achieving this. The result of this task is then a program which is correct by design. As in the domain of program verification, handling loops is one of the main ingredients of a successful synthesis procedure. We present an algorithm for synthesizing loops satisfying a given polynomial loop invariant. The class of loops we consider can be modeled by a system of algebraic recurrence equations with constant coefficients, thus encoding program loops with affine operations among program variables. We turn the task of loop synthesis into a polynomial constraint problem by precisely characterizing the set of all loops satisfying the given invariant. We prove soundness of our approach, as well as its completeness with respect to an a priori fixed upper bound on the number of program variables. Our work has applications towards synthesizing loops satisfying a given polynomial loop invariant, program verification, as well as generating number sequences from algebraic relations. To understand the viability of the methodology and heuristics for synthesizing loops, we implement and evaluate the method using the Absynth tool.
[ { "created": "Thu, 23 Jun 2022 06:39:13 GMT", "version": "v1" } ]
2022-06-24
[ [ "Humenberger", "Andreas", "" ], [ "Amrollahi", "Daneshvar", "" ], [ "Bjørner", "Nikolaj", "" ], [ "Kovács", "Laura", "" ] ]
Provably correct software is one of the key challenges of our software-driven society. Program synthesis -- the task of constructing a program satisfying a given specification -- is one strategy for achieving this. The result of this task is then a program which is correct by design. As in the domain of program verification, handling loops is one of the main ingredients of a successful synthesis procedure. We present an algorithm for synthesizing loops satisfying a given polynomial loop invariant. The class of loops we consider can be modeled by a system of algebraic recurrence equations with constant coefficients, thus encoding program loops with affine operations among program variables. We turn the task of loop synthesis into a polynomial constraint problem by precisely characterizing the set of all loops satisfying the given invariant. We prove soundness of our approach, as well as its completeness with respect to an a priori fixed upper bound on the number of program variables. Our work has applications towards synthesizing loops satisfying a given polynomial loop invariant, program verification, as well as generating number sequences from algebraic relations. To understand the viability of the methodology and heuristics for synthesizing loops, we implement and evaluate the method using the Absynth tool.
2305.11229
Tiantian Feng
Tiantian Feng and Rajat Hebbar and Shrikanth Narayanan
TrustSER: On the Trustworthiness of Fine-tuning Pre-trained Speech Embeddings For Speech Emotion Recognition
null
null
null
null
cs.SD eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent studies have explored the use of pre-trained embeddings for speech emotion recognition (SER), achieving comparable performance to conventional methods that rely on low-level knowledge-inspired acoustic features. These embeddings are often generated from models trained on large-scale speech datasets using self-supervised or weakly-supervised learning objectives. Despite the significant advancements made in SER through the use of pre-trained embeddings, there is a limited understanding of the trustworthiness of these methods, including privacy breaches, unfair performance, vulnerability to adversarial attacks, and computational cost, all of which may hinder the real-world deployment of these systems. In response, we introduce TrustSER, a general framework designed to evaluate the trustworthiness of SER systems using deep learning methods, with a focus on privacy, safety, fairness, and sustainability, offering unique insights into future research in the field of SER. Our code is publicly available under: https://github.com/usc-sail/trust-ser.
[ { "created": "Thu, 18 May 2023 18:00:36 GMT", "version": "v1" } ]
2023-05-22
[ [ "Feng", "Tiantian", "" ], [ "Hebbar", "Rajat", "" ], [ "Narayanan", "Shrikanth", "" ] ]
Recent studies have explored the use of pre-trained embeddings for speech emotion recognition (SER), achieving comparable performance to conventional methods that rely on low-level knowledge-inspired acoustic features. These embeddings are often generated from models trained on large-scale speech datasets using self-supervised or weakly-supervised learning objectives. Despite the significant advancements made in SER through the use of pre-trained embeddings, there is a limited understanding of the trustworthiness of these methods, including privacy breaches, unfair performance, vulnerability to adversarial attacks, and computational cost, all of which may hinder the real-world deployment of these systems. In response, we introduce TrustSER, a general framework designed to evaluate the trustworthiness of SER systems using deep learning methods, with a focus on privacy, safety, fairness, and sustainability, offering unique insights into future research in the field of SER. Our code is publicly available under: https://github.com/usc-sail/trust-ser.
1606.03982
Tobias Denkinger
Tobias Denkinger
A Chomsky-Sch\"utzenberger representation for weighted multiple context-free languages
This is an extended and corrected version of a paper with the same title presented at the 12th International Conference on Finite-State Methods and Natural Language Processing (FSMNLP 2015)
null
null
null
cs.FL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We prove a Chomsky-Sch\"utzenberger representation theorem for multiple context-free languages weighted over complete commutative strong bimonoids.
[ { "created": "Mon, 13 Jun 2016 14:55:53 GMT", "version": "v1" }, { "created": "Mon, 28 Nov 2016 14:02:00 GMT", "version": "v2" } ]
2016-11-29
[ [ "Denkinger", "Tobias", "" ] ]
We prove a Chomsky-Sch\"utzenberger representation theorem for multiple context-free languages weighted over complete commutative strong bimonoids.
1601.00275
Alexander Chepurnoy Mr.
Alexander Chepurnoy
Interactive Proof-of-stake
null
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The paper examines decentralized cryptocurrency protocols that are based on the use of internal tokens as identity tools. An analysis of security problems with popular Proof-of-stake consensus protocols is provided. A new protocol, Interactive Proof-of-stake, is proposed. The main ideas of the protocol are to reduce the number of variables a miner can iterate over to a minimum and to bring communication into block generation. The protocol is checked against known attacks. It is shown that Interactive Proof-of-stake is more secure than current pure Proof-of-stake protocols.
[ { "created": "Sun, 3 Jan 2016 10:42:55 GMT", "version": "v1" }, { "created": "Mon, 11 Jan 2016 20:21:53 GMT", "version": "v2" } ]
2016-01-12
[ [ "Chepurnoy", "Alexander", "" ] ]
The paper examines decentralized cryptocurrency protocols that are based on the use of internal tokens as identity tools. An analysis of security problems with popular Proof-of-stake consensus protocols is provided. A new protocol, Interactive Proof-of-stake, is proposed. The main ideas of the protocol are to reduce the number of variables a miner can iterate over to a minimum and to bring communication into block generation. The protocol is checked against known attacks. It is shown that Interactive Proof-of-stake is more secure than current pure Proof-of-stake protocols.
2106.11815
Kwan Hui Lim Dr
Prashant Solanki, Kwan Hui Lim and Aaron Harwood
User Identification across Social Networking Sites using User Profiles and Posting Patterns
Accepted at the 2021 International Joint Conference on Neural Networks (IJCNN'21)
null
null
null
cs.LG cs.CY cs.SI
http://creativecommons.org/licenses/by/4.0/
With the prevalence of online social networking sites (OSNs) and mobile devices, people are increasingly reliant on a variety of OSNs for keeping in touch with family and friends, and using them as a source of information. For example, a user might utilise multiple OSNs for different purposes, such as using Flickr to share holiday pictures with family and friends, and Twitter to post short messages about their thoughts. Identifying the same user across multiple OSNs is an important task as this allows us to understand the usage patterns of users among different OSNs, make recommendations when a user registers for a new OSN, and support various other useful applications. To address this problem, we propose an algorithm based on the multilayer perceptron using various types of features, namely: (i) user profile, such as name, location, description; (ii) temporal distribution of user generated content; and (iii) embedding based on user name, real name and description. Using a Twitter and Flickr dataset of users and their posting activities, we perform an empirical study on how these features affect the performance of user identification across the two OSNs and discuss our main findings based on the different features.
[ { "created": "Tue, 22 Jun 2021 14:28:19 GMT", "version": "v1" } ]
2021-06-23
[ [ "Solanki", "Prashant", "" ], [ "Lim", "Kwan Hui", "" ], [ "Harwood", "Aaron", "" ] ]
With the prevalence of online social networking sites (OSNs) and mobile devices, people are increasingly reliant on a variety of OSNs for keeping in touch with family and friends, and using them as a source of information. For example, a user might utilise multiple OSNs for different purposes, such as using Flickr to share holiday pictures with family and friends, and Twitter to post short messages about their thoughts. Identifying the same user across multiple OSNs is an important task as this allows us to understand the usage patterns of users among different OSNs, make recommendations when a user registers for a new OSN, and support various other useful applications. To address this problem, we propose an algorithm based on the multilayer perceptron using various types of features, namely: (i) user profile, such as name, location, description; (ii) temporal distribution of user generated content; and (iii) embedding based on user name, real name and description. Using a Twitter and Flickr dataset of users and their posting activities, we perform an empirical study on how these features affect the performance of user identification across the two OSNs and discuss our main findings based on the different features.
1810.12069
Timoth\'ee Lesort
Timoth\'ee Lesort, Alexander Gepperth, Andrei Stoian, David Filliat
Marginal Replay vs Conditional Replay for Continual Learning
null
null
null
null
cs.LG cs.AI stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a new replay-based method of continual classification learning that we term "conditional replay" which generates samples and labels together by sampling from a distribution conditioned on the class. We compare conditional replay to another replay-based continual learning paradigm (which we term "marginal replay") that generates samples independently of their class and assigns labels in a separate step. The main improvement in conditional replay is that labels for generated samples need not be inferred, which reduces the margin for error in complex continual classification learning tasks. We demonstrate the effectiveness of this approach using novel and standard benchmarks constructed from MNIST and FashionMNIST data, and compare to the regularization-based \textit{elastic weight consolidation} (EWC) method.
[ { "created": "Mon, 29 Oct 2018 12:17:48 GMT", "version": "v1" }, { "created": "Mon, 26 Nov 2018 13:23:15 GMT", "version": "v2" }, { "created": "Thu, 13 Dec 2018 20:22:15 GMT", "version": "v3" }, { "created": "Sat, 29 Dec 2018 10:35:42 GMT", "version": "v4" }, { "created": "Wed, 23 Jan 2019 14:49:00 GMT", "version": "v5" }, { "created": "Mon, 1 Jul 2019 14:12:50 GMT", "version": "v6" } ]
2019-07-02
[ [ "Lesort", "Timothée", "" ], [ "Gepperth", "Alexander", "" ], [ "Stoian", "Andrei", "" ], [ "Filliat", "David", "" ] ]
We present a new replay-based method of continual classification learning that we term "conditional replay" which generates samples and labels together by sampling from a distribution conditioned on the class. We compare conditional replay to another replay-based continual learning paradigm (which we term "marginal replay") that generates samples independently of their class and assigns labels in a separate step. The main improvement in conditional replay is that labels for generated samples need not be inferred, which reduces the margin for error in complex continual classification learning tasks. We demonstrate the effectiveness of this approach using novel and standard benchmarks constructed from MNIST and FashionMNIST data, and compare to the regularization-based \textit{elastic weight consolidation} (EWC) method.
2405.17132
Chunjing Gan
Chunjing Gan, Binbin Hu, Bo Huang, Ziqi Liu, Jian Ma, Zhiqiang Zhang, Wenliang Zhong, Jun Zhou
Your decision path does matter in pre-training industrial recommenders with multi-source behaviors
null
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Online service platforms offering a wide range of services through miniapps have become crucial for users who visit these platforms with clear intentions to find services they are interested in. Aiming at effective content delivery, cross-domain recommendation is introduced to learn high-quality representations by transferring behaviors from data-rich scenarios. However, these methods overlook the impact of the decision path that users take when conducting behaviors, that is, users ultimately exhibit different behaviors based on various intents. To this end, we propose HIER, a novel Hierarchical decIsion path Enhanced Representation learning for cross-domain recommendation. With the help of graph neural networks for high-order topological information of the knowledge graph between multi-source behaviors, we further adaptively learn decision paths through well-designed exemplar-level and information bottleneck based contrastive learning. Extensive experiments in online and offline environments show the superiority of HIER.
[ { "created": "Mon, 27 May 2024 12:49:07 GMT", "version": "v1" } ]
2024-05-28
[ [ "Gan", "Chunjing", "" ], [ "Hu", "Binbin", "" ], [ "Huang", "Bo", "" ], [ "Liu", "Ziqi", "" ], [ "Ma", "Jian", "" ], [ "Zhang", "Zhiqiang", "" ], [ "Zhong", "Wenliang", "" ], [ "Zhou", "Jun", "" ] ]
Online service platforms offering a wide range of services through miniapps have become crucial for users who visit these platforms with clear intentions to find services they are interested in. Aiming at effective content delivery, cross-domain recommendation is introduced to learn high-quality representations by transferring behaviors from data-rich scenarios. However, these methods overlook the impact of the decision path that users take when conducting behaviors, that is, users ultimately exhibit different behaviors based on various intents. To this end, we propose HIER, a novel Hierarchical decIsion path Enhanced Representation learning for cross-domain recommendation. With the help of graph neural networks for high-order topological information of the knowledge graph between multi-source behaviors, we further adaptively learn decision paths through well-designed exemplar-level and information bottleneck based contrastive learning. Extensive experiments in online and offline environments show the superiority of HIER.
1802.00939
Peisong Wang
Jian Cheng, Peisong Wang, Gang Li, Qinghao Hu, Hanqing Lu
Recent Advances in Efficient Computation of Deep Convolutional Neural Networks
14 pages, 3 figures
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep neural networks have evolved remarkably over the past few years and they are currently the fundamental tools of many intelligent systems. At the same time, the computational complexity and resource consumption of these networks also continue to increase. This will pose a significant challenge to the deployment of such networks, especially in real-time applications or on resource-limited devices. Thus, network acceleration has become a hot topic within the deep learning community. As for hardware implementations of deep neural networks, a number of FPGA/ASIC-based accelerators have been proposed in recent years. In this paper, we provide a comprehensive survey of recent advances in network acceleration, compression and accelerator design from both algorithm and hardware points of view. Specifically, we provide a thorough analysis of each of the following topics: network pruning, low-rank approximation, network quantization, teacher-student networks, compact network design and hardware accelerators. Finally, we will introduce and discuss a few possible future directions.
[ { "created": "Sat, 3 Feb 2018 08:52:12 GMT", "version": "v1" }, { "created": "Sun, 11 Feb 2018 10:22:38 GMT", "version": "v2" } ]
2018-02-13
[ [ "Cheng", "Jian", "" ], [ "Wang", "Peisong", "" ], [ "Li", "Gang", "" ], [ "Hu", "Qinghao", "" ], [ "Lu", "Hanqing", "" ] ]
Deep neural networks have evolved remarkably over the past few years and they are currently the fundamental tools of many intelligent systems. At the same time, the computational complexity and resource consumption of these networks also continue to increase. This will pose a significant challenge to the deployment of such networks, especially in real-time applications or on resource-limited devices. Thus, network acceleration has become a hot topic within the deep learning community. As for hardware implementations of deep neural networks, a number of FPGA/ASIC-based accelerators have been proposed in recent years. In this paper, we provide a comprehensive survey of recent advances in network acceleration, compression and accelerator design from both algorithm and hardware points of view. Specifically, we provide a thorough analysis of each of the following topics: network pruning, low-rank approximation, network quantization, teacher-student networks, compact network design and hardware accelerators. Finally, we will introduce and discuss a few possible future directions.
1309.7598
Tamir Hazan
Tamir Hazan, Subhransu Maji and Tommi Jaakkola
On Sampling from the Gibbs Distribution with Random Maximum A-Posteriori Perturbations
null
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we describe how MAP inference can be used to sample efficiently from Gibbs distributions. Specifically, we provide means for drawing either approximate or unbiased samples from Gibbs' distributions by introducing low dimensional perturbations and solving the corresponding MAP assignments. Our approach also leads to new ways to derive lower bounds on partition functions. We demonstrate empirically that our method excels in the typical "high signal - high coupling" regime. The setting results in ragged energy landscapes that are challenging for alternative approaches to sampling and/or lower bounds.
[ { "created": "Sun, 29 Sep 2013 13:48:52 GMT", "version": "v1" } ]
2013-10-01
[ [ "Hazan", "Tamir", "" ], [ "Maji", "Subhransu", "" ], [ "Jaakkola", "Tommi", "" ] ]
In this paper we describe how MAP inference can be used to sample efficiently from Gibbs distributions. Specifically, we provide means for drawing either approximate or unbiased samples from Gibbs' distributions by introducing low dimensional perturbations and solving the corresponding MAP assignments. Our approach also leads to new ways to derive lower bounds on partition functions. We demonstrate empirically that our method excels in the typical "high signal - high coupling" regime. The setting results in ragged energy landscapes that are challenging for alternative approaches to sampling and/or lower bounds.
2003.03256
Pavly Salah
Pavly Salah Zaki, Marco Magdy William, Bolis Karam Soliman, Kerolos Gamal Alexsan, Keroles Khalil, and Magdy El-Moursy
Traffic Signs Detection and Recognition System using Deep Learning
7 pages, 14 figures, 10 tables
null
null
null
cs.CV cs.LG eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With the rapid development of technology, automobiles have become an essential asset in our day-to-day lives. One of the more important research areas is Traffic Sign Recognition (TSR) systems. This paper describes an approach for efficiently detecting and recognizing traffic signs in real-time, taking into account the various weather, illumination and visibility challenges, through the means of transfer learning. We tackle the traffic sign detection problem using state-of-the-art multi-object detection systems such as Faster Region-based Convolutional Neural Networks (F-RCNN) and the Single Shot MultiBox Detector (SSD) combined with various feature extractors such as MobileNet v1 and Inception v2, and also Tiny-YOLOv2. However, the focus of this paper is on F-RCNN Inception v2 and Tiny YOLO v2, as they achieved the best results. The aforementioned models were fine-tuned on the German Traffic Signs Detection Benchmark (GTSDB) dataset. These models were tested on the host PC as well as a Raspberry Pi 3 Model B+ and the TASS PreScan simulation. We discuss the results of all the models in the conclusion section.
[ { "created": "Fri, 6 Mar 2020 14:54:40 GMT", "version": "v1" } ]
2020-03-09
[ [ "Zaki", "Pavly Salah", "" ], [ "William", "Marco Magdy", "" ], [ "Soliman", "Bolis Karam", "" ], [ "Alexsan", "Kerolos Gamal", "" ], [ "Khalil", "Keroles", "" ], [ "El-Moursy", "Magdy", "" ] ]
With the rapid development of technology, automobiles have become an essential asset in our day-to-day lives. One of the more important research areas is Traffic Sign Recognition (TSR) systems. This paper describes an approach for efficiently detecting and recognizing traffic signs in real-time, taking into account the various weather, illumination and visibility challenges, through the means of transfer learning. We tackle the traffic sign detection problem using state-of-the-art multi-object detection systems such as Faster Region-based Convolutional Neural Networks (F-RCNN) and the Single Shot MultiBox Detector (SSD) combined with various feature extractors such as MobileNet v1 and Inception v2, and also Tiny-YOLOv2. However, the focus of this paper is on F-RCNN Inception v2 and Tiny YOLO v2, as they achieved the best results. The aforementioned models were fine-tuned on the German Traffic Signs Detection Benchmark (GTSDB) dataset. These models were tested on the host PC as well as a Raspberry Pi 3 Model B+ and the TASS PreScan simulation. We discuss the results of all the models in the conclusion section.
1212.5095
Tamal Ghosh Tamal Ghosh
T. Ghosh, P.K. Dan
Modelling of Optimal Design of Manufacturing Cell Layout Considering Material Flow and Closeness Rating Factors
Proceedings of 4th International & 25th AIMTDR Conference, December 2012
null
null
null
cs.CE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Developing a group of machine cells and their corresponding part families to minimize the inter-cell and intra-cell material flow is the basic objective of designing a cellular manufacturing system (CMS). Afterwards, achieving a competent cell layout is essential in order to minimize the total inter-cell part travel, which is principally noteworthy. There are plentiful articles in the CMS literature which consider cell formation problems; however, the cell layout topic has rarely been addressed. Therefore this research focuses on an adapted mathematical model of the layout design problem considering material handling cost and closeness ratings of manufacturing cells. Owing to the NP-hard combinatorial nature of the said problem, an efficient technique based on the Simulated Annealing metaheuristic is proposed. Some test problems are solved using the proposed technique. Computational results show that the proposed metaheuristic approach is extremely effective and efficient in terms of solution quality and computational complexity.
[ { "created": "Thu, 20 Dec 2012 15:47:50 GMT", "version": "v1" } ]
2012-12-21
[ [ "Ghosh", "T.", "" ], [ "Dan", "P. K.", "" ] ]
Developing a group of machine cells and their corresponding part families to minimize the inter-cell and intra-cell material flow is the basic objective of designing a cellular manufacturing system (CMS). Afterwards, achieving a competent cell layout is essential in order to minimize the total inter-cell part travel, which is principally noteworthy. There are plentiful articles in the CMS literature which consider cell formation problems; however, the cell layout topic has rarely been addressed. Therefore this research focuses on an adapted mathematical model of the layout design problem considering material handling cost and closeness ratings of manufacturing cells. Owing to the NP-hard combinatorial nature of the said problem, an efficient technique based on the Simulated Annealing metaheuristic is proposed. Some test problems are solved using the proposed technique. Computational results show that the proposed metaheuristic approach is extremely effective and efficient in terms of solution quality and computational complexity.
2110.01023
Daniel Rudmark
Daniel Rudmark, Magnus Andersson
Feedback Loops in Open Data Ecosystems
null
null
10.1109/MS.2021.3116874
null
cs.SE
http://creativecommons.org/licenses/by-nc-nd/4.0/
Public agencies are increasingly publishing open data to increase transparency and fuel data-driven innovation. For these organizations, maintaining sufficient data quality is key to continuous re-use but also heavily dependent on feedback loops being initiated between data publishers and users. This paper reports from a longitudinal engagement with Scandinavian transportation agencies, where such feedback loops have been successfully established. Based on these experiences, we propose four distinct types of data feedback loops in which both data publishers and re-users play critical roles.
[ { "created": "Sun, 3 Oct 2021 15:23:14 GMT", "version": "v1" } ]
2021-10-05
[ [ "Rudmark", "Daniel", "" ], [ "Andersson", "Magnus", "" ] ]
Public agencies are increasingly publishing open data to increase transparency and fuel data-driven innovation. For these organizations, maintaining sufficient data quality is key to continuous re-use but also heavily dependent on feedback loops being initiated between data publishers and users. This paper reports from a longitudinal engagement with Scandinavian transportation agencies, where such feedback loops have been successfully established. Based on these experiences, we propose four distinct types of data feedback loops in which both data publishers and re-users play critical roles.
2407.02081
Fangyu Wang
Zhengxian Lu, Fangyu Wang, Zhiwei Xu, Fei Yang, Tao Li
On the Performance and Memory Footprint of Distributed Training: An Empirical Study on Transformers
null
null
null
null
cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Transformer models have emerged as potent solutions to a wide array of multidisciplinary challenges. The deployment of Transformer architectures is significantly hindered by their extensive computational and memory requirements, necessitating the reliance on advanced efficient distributed training methodologies. Prior research has delved into the performance bottlenecks associated with distributed training, aiming to unravel these bottlenecks and suggest optimization directions. However, such analyses often overlook three aspects unique to Transformer models: the specialized architecture, the dependency on various distributed strategies, and the requirement to balance computational and memory overhead. This paper aims to bridge this gap by offering a comprehensive examination of the performance bottlenecks inherent in distributed training of Transformer models, leveraging both theoretical analysis and empirical investigation. We propose an analytical framework tailored to these unique aspects of Transformers, facilitating a holistic evaluation of model architectures, distributed strategies, and resource consumption. Based on this analytical framework, we conduct a comparative analysis of theoretical performances and further systematically explore how various distributed training strategies fare in real-world scenarios. Most of the experimental results can be well explained by the analytical outcomes derived from the analytical framework. Notably, our findings suggest an advantage of pipeline parallelism over data parallelism for Transformer models. Moreover, we shed light on some unexpected outcomes, such as the potential for increased total memory overhead due to suboptimal model partitioning within pipeline parallelism. Additionally, we underscore the significance of communication block size and waiting time to further enhance performance.
[ { "created": "Tue, 2 Jul 2024 09:17:19 GMT", "version": "v1" } ]
2024-07-03
[ [ "Lu", "Zhengxian", "" ], [ "Wang", "Fangyu", "" ], [ "Xu", "Zhiwei", "" ], [ "Yang", "Fei", "" ], [ "Li", "Tao", "" ] ]
Transformer models have emerged as potent solutions to a wide array of multidisciplinary challenges. The deployment of Transformer architectures is significantly hindered by their extensive computational and memory requirements, necessitating the reliance on advanced efficient distributed training methodologies. Prior research has delved into the performance bottlenecks associated with distributed training, aiming to unravel these bottlenecks and suggest optimization directions. However, such analyses often overlook three aspects unique to Transformer models: the specialized architecture, the dependency on various distributed strategies, and the requirement to balance computational and memory overhead. This paper aims to bridge this gap by offering a comprehensive examination of the performance bottlenecks inherent in distributed training of Transformer models, leveraging both theoretical analysis and empirical investigation. We propose an analytical framework tailored to these unique aspects of Transformers, facilitating a holistic evaluation of model architectures, distributed strategies, and resource consumption. Based on this analytical framework, we conduct a comparative analysis of theoretical performances and further systematically explore how various distributed training strategies fare in real-world scenarios. Most of the experimental results can be well explained by the analytical outcomes derived from the analytical framework. Notably, our findings suggest an advantage of pipeline parallelism over data parallelism for Transformer models. Moreover, we shed light on some unexpected outcomes, such as the potential for increased total memory overhead due to suboptimal model partitioning within pipeline parallelism. Additionally, we underscore the significance of communication block size and waiting time to further enhance performance.
1811.06446
Cuixian Chen
Benjamin Yip, Garrett Bingham, Katherine Kempfert, Jonathan Fabish, Troy Kling, Cuixian Chen, Yishi Wang
Preliminary Studies on a Large Face Database
It has been accepted in the 5th National Symposium for NSF REU Research in Data Science, Systems, and Security. G. Bingham and K. Kempfert contributed equally
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
We perform preliminary studies on a large longitudinal face database MORPH-II, which is a benchmark dataset in the field of computer vision and pattern recognition. First, we summarize the inconsistencies in the dataset and introduce the steps and strategy taken for cleaning. The potential implications of these inconsistencies on prior research are discussed. Next, we propose a new automatic subsetting scheme for the evaluation protocol. It is intended to overcome the unbalanced racial and gender distributions of MORPH-II, while ensuring independence between training and testing sets. Finally, we contribute a novel global framework for age estimation that utilizes posterior probabilities from the race classification step to compute a race-composite age estimate. Preliminary experimental results on MORPH-II are presented.
[ { "created": "Thu, 15 Nov 2018 16:07:49 GMT", "version": "v1" } ]
2018-11-16
[ [ "Yip", "Benjamin", "" ], [ "Bingham", "Garrett", "" ], [ "Kempfert", "Katherine", "" ], [ "Fabish", "Jonathan", "" ], [ "Kling", "Troy", "" ], [ "Chen", "Cuixian", "" ], [ "Wang", "Yishi", "" ] ]
We perform preliminary studies on a large longitudinal face database MORPH-II, which is a benchmark dataset in the field of computer vision and pattern recognition. First, we summarize the inconsistencies in the dataset and introduce the steps and strategy taken for cleaning. The potential implications of these inconsistencies on prior research are discussed. Next, we propose a new automatic subsetting scheme for the evaluation protocol. It is intended to overcome the unbalanced racial and gender distributions of MORPH-II, while ensuring independence between training and testing sets. Finally, we contribute a novel global framework for age estimation that utilizes posterior probabilities from the race classification step to compute a race-composite age estimate. Preliminary experimental results on MORPH-II are presented.
1206.5941
Stefan Kratsch
Hans L. Bodlaender and Bart M. P. Jansen and Stefan Kratsch
Kernelization Lower Bounds By Cross-Composition
A preliminary version appeared in the proceedings of the 28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011) under the title "Cross-Composition: A New Technique for Kernelization Lower Bounds". Several results have been strengthened compared to the preliminary version (http://arxiv.org/abs/1011.4224). 29 pages, 2 figures
null
null
null
cs.CC cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce the cross-composition framework for proving kernelization lower bounds. A classical problem L AND/OR-cross-composes into a parameterized problem Q if it is possible to efficiently construct an instance of Q with polynomially bounded parameter value that expresses the logical AND or OR of a sequence of instances of L. Building on work by Bodlaender et al. (ICALP 2008) and using a result by Fortnow and Santhanam (STOC 2008) with a refinement by Dell and van Melkebeek (STOC 2010), we show that if an NP-hard problem OR-cross-composes into a parameterized problem Q then Q does not admit a polynomial kernel unless NP \subseteq coNP/poly and the polynomial hierarchy collapses. Similarly, an AND-cross-composition for Q rules out polynomial kernels for Q under Bodlaender et al.'s AND-distillation conjecture. Our technique generalizes and strengthens the recent techniques of using composition algorithms and of transferring the lower bounds via polynomial parameter transformations. We show its applicability by proving kernelization lower bounds for a number of important graph problems with structural (non-standard) parameterizations, e.g., Clique, Chromatic Number, Weighted Feedback Vertex Set, and Weighted Odd Cycle Transversal do not admit polynomial kernels with respect to the vertex cover number of the input graphs unless the polynomial hierarchy collapses, contrasting the fact that these problems are trivially fixed-parameter tractable for this parameter. After learning of our results, several teams of authors have successfully applied the cross-composition framework to different parameterized problems. For completeness, our presentation of the framework includes several extensions based on this follow-up work. For example, we show how a relaxed version of OR-cross-compositions may be used to give lower bounds on the degree of the polynomial in the kernel size.
[ { "created": "Tue, 26 Jun 2012 10:06:37 GMT", "version": "v1" } ]
2015-03-20
[ [ "Bodlaender", "Hans L.", "" ], [ "Jansen", "Bart M. P.", "" ], [ "Kratsch", "Stefan", "" ] ]
We introduce the cross-composition framework for proving kernelization lower bounds. A classical problem L AND/OR-cross-composes into a parameterized problem Q if it is possible to efficiently construct an instance of Q with polynomially bounded parameter value that expresses the logical AND or OR of a sequence of instances of L. Building on work by Bodlaender et al. (ICALP 2008) and using a result by Fortnow and Santhanam (STOC 2008) with a refinement by Dell and van Melkebeek (STOC 2010), we show that if an NP-hard problem OR-cross-composes into a parameterized problem Q then Q does not admit a polynomial kernel unless NP \subseteq coNP/poly and the polynomial hierarchy collapses. Similarly, an AND-cross-composition for Q rules out polynomial kernels for Q under Bodlaender et al.'s AND-distillation conjecture. Our technique generalizes and strengthens the recent techniques of using composition algorithms and of transferring the lower bounds via polynomial parameter transformations. We show its applicability by proving kernelization lower bounds for a number of important graph problems with structural (non-standard) parameterizations, e.g., Clique, Chromatic Number, Weighted Feedback Vertex Set, and Weighted Odd Cycle Transversal do not admit polynomial kernels with respect to the vertex cover number of the input graphs unless the polynomial hierarchy collapses, contrasting the fact that these problems are trivially fixed-parameter tractable for this parameter. After learning of our results, several teams of authors have successfully applied the cross-composition framework to different parameterized problems. For completeness, our presentation of the framework includes several extensions based on this follow-up work. For example, we show how a relaxed version of OR-cross-compositions may be used to give lower bounds on the degree of the polynomial in the kernel size.
2303.03553
Qingsong Wen
Qingsong Wen, Linxiao Yang, Liang Sun
Robust Dominant Periodicity Detection for Time Series with Missing Data
Accepted by 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2023)
IEEE ICASSP 2023
null
null
cs.LG eess.SP stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Periodicity detection is an important task in time series analysis, but still a challenging problem due to the diverse characteristics of time series data like abrupt trend change, outlier, noise, and especially block missing data. In this paper, we propose a robust and effective periodicity detection algorithm for time series with block missing data. We first design a robust trend filter to remove the interference of complicated trend patterns under missing data. Then, we propose a robust autocorrelation function (ACF) that can handle missing values and outliers effectively. We rigorously prove that the proposed robust ACF can still work well when the length of the missing block is less than $1/3$ of the period length. Last, by combining the time-frequency information, our algorithm can generate the period length accurately. The experimental results demonstrate that our algorithm outperforms existing periodicity detection algorithms on real-world time series datasets.
[ { "created": "Mon, 6 Mar 2023 23:37:58 GMT", "version": "v1" } ]
2023-03-08
[ [ "Wen", "Qingsong", "" ], [ "Yang", "Linxiao", "" ], [ "Sun", "Liang", "" ] ]
Periodicity detection is an important task in time series analysis, but still a challenging problem due to the diverse characteristics of time series data like abrupt trend change, outlier, noise, and especially block missing data. In this paper, we propose a robust and effective periodicity detection algorithm for time series with block missing data. We first design a robust trend filter to remove the interference of complicated trend patterns under missing data. Then, we propose a robust autocorrelation function (ACF) that can handle missing values and outliers effectively. We rigorously prove that the proposed robust ACF can still work well when the length of the missing block is less than $1/3$ of the period length. Last, by combining the time-frequency information, our algorithm can generate the period length accurately. The experimental results demonstrate that our algorithm outperforms existing periodicity detection algorithms on real-world time series datasets.
0803.1090
Valentin Savin
Valentin Savin
Self-Corrected Min-Sum decoding of LDPC codes
5 pages, ISIT08 second version: acknowledgement footnote added (no content modification)
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we propose a very simple but powerful self-correction method for the Min-Sum decoding of LDPC codes. Unlike other correction methods known in the literature, our method does not try to correct the check node processing approximation, but it modifies the variable node processing by erasing unreliable messages. However, this positively affects check node messages, which become symmetric Gaussian distributed, and we show that this is sufficient to ensure a quasi-optimal decoding performance. Monte-Carlo simulations show that the proposed Self-Corrected Min-Sum decoding performs very close to the Sum-Product decoding, while preserving the main features of the Min-Sum decoding, that is low complexity and independence with respect to noise variance estimation errors.
[ { "created": "Fri, 7 Mar 2008 13:46:28 GMT", "version": "v1" }, { "created": "Tue, 13 Jan 2009 17:13:48 GMT", "version": "v2" } ]
2009-01-13
[ [ "Savin", "Valentin", "" ] ]
In this paper we propose a very simple but powerful self-correction method for the Min-Sum decoding of LDPC codes. Unlike other correction methods known in the literature, our method does not try to correct the check node processing approximation, but it modifies the variable node processing by erasing unreliable messages. However, this positively affects check node messages, which become symmetric Gaussian distributed, and we show that this is sufficient to ensure a quasi-optimal decoding performance. Monte-Carlo simulations show that the proposed Self-Corrected Min-Sum decoding performs very close to the Sum-Product decoding, while preserving the main features of the Min-Sum decoding, that is low complexity and independence with respect to noise variance estimation errors.
1912.12927
Lei Feng
Lei Feng, Takuo Kaneko, Bo Han, Gang Niu, Bo An, Masashi Sugiyama
Learning with Multiple Complementary Labels
Corrected typos in Lemma 2, accepted by ICML 2020
null
null
null
cs.LG stat.ML
http://creativecommons.org/licenses/by/4.0/
A complementary label (CL) simply indicates an incorrect class of an example, but learning with CLs results in multi-class classifiers that can predict the correct class. Unfortunately, the problem setting only allows a single CL for each example, which notably limits its potential since our labelers may easily identify multiple CLs (MCLs) for one example. In this paper, we propose a novel problem setting to allow MCLs for each example and two ways for learning with MCLs. In the first way, we design two wrappers that decompose MCLs into many single CLs, so that we could use any method for learning with CLs. However, the supervision information that MCLs hold is conceptually diluted after decomposition. Thus, in the second way, we derive an unbiased risk estimator; minimizing it processes each set of MCLs as a whole and possesses an estimation error bound. We further improve the second way into minimizing properly chosen upper bounds. Experiments show that the former way works well for learning with MCLs but the latter is even better.
[ { "created": "Mon, 30 Dec 2019 13:50:51 GMT", "version": "v1" }, { "created": "Wed, 22 Apr 2020 04:45:52 GMT", "version": "v2" }, { "created": "Tue, 7 Jul 2020 08:50:50 GMT", "version": "v3" }, { "created": "Sat, 6 Aug 2022 10:47:03 GMT", "version": "v4" } ]
2022-08-09
[ [ "Feng", "Lei", "" ], [ "Kaneko", "Takuo", "" ], [ "Han", "Bo", "" ], [ "Niu", "Gang", "" ], [ "An", "Bo", "" ], [ "Sugiyama", "Masashi", "" ] ]
A complementary label (CL) simply indicates an incorrect class of an example, but learning with CLs results in multi-class classifiers that can predict the correct class. Unfortunately, the problem setting only allows a single CL for each example, which notably limits its potential since our labelers may easily identify multiple CLs (MCLs) for one example. In this paper, we propose a novel problem setting to allow MCLs for each example and two ways for learning with MCLs. In the first way, we design two wrappers that decompose MCLs into many single CLs, so that we could use any method for learning with CLs. However, the supervision information that MCLs hold is conceptually diluted after decomposition. Thus, in the second way, we derive an unbiased risk estimator; minimizing it processes each set of MCLs as a whole and possesses an estimation error bound. We further improve the second way into minimizing properly chosen upper bounds. Experiments show that the former way works well for learning with MCLs but the latter is even better.
1901.07469
Nilavra Pathak
Nilavra Pathak, James Foulds, Nirmalya Roy, Nilanjan Banerjee, Ryan Robucci
Estimating Buildings' Parameters over Time Including Prior Knowledge
11 pages with reference
null
null
null
cs.LG cs.AI stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Modeling buildings' heat dynamics is a complex process which depends on various factors including weather, building thermal capacity, insulation preservation, and residents' behavior. Gray-box models offer a causal inference of those dynamics expressed in a few parameters specific to built environments. These parameters can provide compelling insights into the characteristics of building artifacts and have various applications such as forecasting HVAC usage, indoor temperature control, and monitoring of built environments. In this paper, we present a systematic study of modeling buildings' thermal characteristics and thus derive the parameters of built conditions with a Bayesian approach. We build a Bayesian state-space model that can adapt and incorporate buildings' thermal equations and propose a generalized solution that can easily incorporate prior knowledge regarding the parameters. We show that a faster approximate approach using variational inference for parameter estimation can provide similar parameters as that of a more time-consuming Markov Chain Monte Carlo (MCMC) approach. We perform extensive evaluations on two datasets to understand the generative process and show that the Bayesian approach is more interpretable. We further study the effects of prior selection for the model parameters and transfer learning, where we learn parameters from one season and use them to fit the model in the other. We perform extensive evaluations on controlled and real data traces to estimate buildings' parameters within a 95% credible interval.
[ { "created": "Wed, 9 Jan 2019 03:37:32 GMT", "version": "v1" }, { "created": "Mon, 4 Feb 2019 22:47:47 GMT", "version": "v2" }, { "created": "Mon, 18 Feb 2019 23:53:36 GMT", "version": "v3" } ]
2019-02-20
[ [ "Pathak", "Nilavra", "" ], [ "Foulds", "James", "" ], [ "Roy", "Nirmalya", "" ], [ "Banerjee", "Nilanjan", "" ], [ "Robucci", "Ryan", "" ] ]
Modeling buildings' heat dynamics is a complex process which depends on various factors including weather, building thermal capacity, insulation preservation, and residents' behavior. Gray-box models offer a causal inference of those dynamics expressed in a few parameters specific to built environments. These parameters can provide compelling insights into the characteristics of building artifacts and have various applications such as forecasting HVAC usage, indoor temperature control, and monitoring of built environments. In this paper, we present a systematic study of modeling buildings' thermal characteristics and thus derive the parameters of built conditions with a Bayesian approach. We build a Bayesian state-space model that can adapt and incorporate buildings' thermal equations and propose a generalized solution that can easily incorporate prior knowledge regarding the parameters. We show that a faster approximate approach using variational inference for parameter estimation can provide similar parameters as that of a more time-consuming Markov Chain Monte Carlo (MCMC) approach. We perform extensive evaluations on two datasets to understand the generative process and show that the Bayesian approach is more interpretable. We further study the effects of prior selection for the model parameters and transfer learning, where we learn parameters from one season and use them to fit the model in the other. We perform extensive evaluations on controlled and real data traces to estimate buildings' parameters within a 95% credible interval.
1403.1120
Carlos Luis Gonz\'alez-Valiente
C. L. Gonz\'alez-Valiente, Y. S\'anchez-Rodr\'iguez, Y. Lezcano-P\'erez
Estudio exploratorio sobre las competencias informacionales de los estudiantes de la Universidad de La Habana
http://www.redalyc.org/pdf/1814/181423798009.pdf
Ciencias de la Informaci\'on; 2012, 43 (2): 61-68
null
null
cs.CY
http://creativecommons.org/licenses/by/3.0/
The present article shows the results of a survey aimed at identifying the informational abilities of Havana University students. Several methods, such as surveys, expert interviews, and content and document analysis, are used. The questionnaire has been structured based on three basic variables: information search, information analysis and release, and self-evaluation elements. The identification of these abilities was a key element for guiding libraries in the development of actions focused on their communities.
[ { "created": "Wed, 5 Mar 2014 13:34:55 GMT", "version": "v1" } ]
2014-03-06
[ [ "González-Valiente", "C. L.", "" ], [ "Sánchez-Rodríguez", "Y.", "" ], [ "Lezcano-Pérez", "Y.", "" ] ]
The present article shows the results of a survey aimed at identifying the informational abilities of Havana University students. Several methods, such as surveys, expert interviews, and content and document analysis, are used. The questionnaire has been structured based on three basic variables: information search, information analysis and release, and self-evaluation elements. The identification of these abilities was a key element for guiding libraries in the development of actions focused on their communities.
2204.09036
Oleg Sychev
Oleg Sychev
Write a Line: Tests with Answer Templates and String Completion Hints for Self-Learning in a CS1 Course
9 pages, 5 tables, 3 figures, ICSE-SEET 2022
Proceedings - 44th ACM/IEEE International Conference on Software Engineering: Software Engineering Education and Training, ICSE-SEET 2022. Pages 265 - 276
10.1109/ICSE-SEET55299.2022.9794157
null
cs.CY
http://creativecommons.org/licenses/by-sa/4.0/
One of the important scaffolding tasks in programming learning is writing a line of code performing the necessary action. This allows students to practice skills in a playground with instant feedback before writing more complex programs and increases their proficiency when solving programming problems. However, answers in the form of program code have high variability. Among the possible approaches to grading and providing feedback, we chose template matching. This paper reports the results of using regular-expression-based questions with string completion hints in a CS1 course for 4 years with 497 students. The evaluation results show that Perl-compatible regular expressions provide good precision and recall (more than 99\%) when used for questions requiring writing a single line of code while being able to provide string-completion feedback regardless of how wrong the initial student's answer is. After introducing formative quizzes with string-completion hints to the course, the number of questions that teachers and teaching assistants received about questions in the formative quizzes dropped considerably: most of the training question attempts resulted in finding the correct answer without help from the teaching staff. However, some of the students use formative quizzes just to learn correct answers without actually trying to answer the questions.
[ { "created": "Tue, 19 Apr 2022 17:53:35 GMT", "version": "v1" } ]
2022-08-11
[ [ "Sychev", "Oleg", "" ] ]
One of the important scaffolding tasks in programming learning is writing a line of code performing the necessary action. This allows students to practice skills in a playground with instant feedback before writing more complex programs and increases their proficiency when solving programming problems. However, answers in the form of program code have high variability. Among the possible approaches to grading and providing feedback, we chose template matching. This paper reports the results of using regular-expression-based questions with string completion hints in a CS1 course for 4 years with 497 students. The evaluation results show that Perl-compatible regular expressions provide good precision and recall (more than 99\%) when used for questions requiring writing a single line of code while being able to provide string-completion feedback regardless of how wrong the initial student's answer is. After introducing formative quizzes with string-completion hints to the course, the number of questions that teachers and teaching assistants received about questions in the formative quizzes dropped considerably: most of the training question attempts resulted in finding the correct answer without help from the teaching staff. However, some of the students use formative quizzes just to learn correct answers without actually trying to answer the questions.
2307.03385
Angel Felipe Magnoss\~ao de Paula
Angel Felipe Magnoss\~ao de Paula, Giulia Rizzi, Elisabetta Fersini, Damiano Spina
AI-UPV at EXIST 2023 -- Sexism Characterization Using Large Language Models Under The Learning with Disagreements Regime
15 pages, 9 tables, 1 figures, conference
null
null
null
cs.CL cs.CY cs.LG
http://creativecommons.org/licenses/by/4.0/
With the increasing influence of social media platforms, it has become crucial to develop automated systems capable of detecting instances of sexism and other disrespectful and hateful behaviors to promote a more inclusive and respectful online environment. Nevertheless, these tasks are considerably challenging considering different hate categories and the author's intentions, especially under the learning with disagreements regime. This paper describes AI-UPV team's participation in the EXIST (sEXism Identification in Social neTworks) Lab at CLEF 2023. The proposed approach aims at addressing the task of sexism identification and characterization under the learning with disagreements paradigm by training directly from the data with disagreements, without using any aggregated label. Yet, performances considering both soft and hard evaluations are reported. The proposed system uses large language models (i.e., mBERT and XLM-RoBERTa) and ensemble strategies for sexism identification and classification in English and Spanish. In particular, our system is articulated in three different pipelines. The ensemble approach outperformed the individual large language models, obtaining the best performances under both soft and hard label evaluations. This work describes the participation in all three EXIST tasks; considering the soft evaluation, our system obtained fourth place in Task 2 and first place in Task 3, with the highest ICM-Soft of -2.32 and a normalized ICM-Soft of 0.79. The source code of our approaches is publicly available at https://github.com/AngelFelipeMP/Sexism-LLM-Learning-With-Disagreement.
[ { "created": "Fri, 7 Jul 2023 04:49:26 GMT", "version": "v1" } ]
2023-07-10
[ [ "de Paula", "Angel Felipe Magnossão", "" ], [ "Rizzi", "Giulia", "" ], [ "Fersini", "Elisabetta", "" ], [ "Spina", "Damiano", "" ] ]
With the increasing influence of social media platforms, it has become crucial to develop automated systems capable of detecting instances of sexism and other disrespectful and hateful behaviors to promote a more inclusive and respectful online environment. Nevertheless, these tasks are considerably challenging considering different hate categories and the author's intentions, especially under the learning with disagreements regime. This paper describes AI-UPV team's participation in the EXIST (sEXism Identification in Social neTworks) Lab at CLEF 2023. The proposed approach aims at addressing the task of sexism identification and characterization under the learning with disagreements paradigm by training directly from the data with disagreements, without using any aggregated label. Yet, performances considering both soft and hard evaluations are reported. The proposed system uses large language models (i.e., mBERT and XLM-RoBERTa) and ensemble strategies for sexism identification and classification in English and Spanish. In particular, our system is articulated in three different pipelines. The ensemble approach outperformed the individual large language models, obtaining the best performances under both soft and hard label evaluations. This work describes the participation in all three EXIST tasks; considering the soft evaluation, our system obtained fourth place in Task 2 and first place in Task 3, with the highest ICM-Soft of -2.32 and a normalized ICM-Soft of 0.79. The source code of our approaches is publicly available at https://github.com/AngelFelipeMP/Sexism-LLM-Learning-With-Disagreement.
2105.13765
Yasir K{\i}l{\i}\c{c}
Yasir Kilic
Exploiting Transductive Property of Graph Convolutional Neural Networks with Less Labeling Effort
null
null
null
null
cs.LG cs.AI
http://creativecommons.org/licenses/by/4.0/
Recently, machine learning approaches on graph data have become very popular. Significant results have been obtained by incorporating the implicit or explicit logical connections between data samples into the model. In this context, the GCN model has made significant experimental contributions by applying convolution filters to graph data. This model follows a transductive, semi-supervised learning approach. Due to its transductive property, all of the data samples, only some of which are labeled, are given as input to the model. Since labeling is costly, this study tries to answer the following research question: what is the minimum number of labeled samples needed to achieve optimal model accuracy? In addition, experimental contributions are made on how the model's accuracy varies with the sampling approach used under a fixed labeling effort. According to the experiments, the accuracy of the model can be increased by using the local centrality metric.
[ { "created": "Sat, 1 May 2021 05:33:31 GMT", "version": "v1" } ]
2021-05-31
[ [ "Kilic", "Yasir", "" ] ]
Recently, machine learning approaches on graph data have become very popular. Significant results have been obtained by incorporating the implicit or explicit logical connections between data samples into the model. In this context, the GCN model has made significant experimental contributions by applying convolution filters to graph data. This model follows a transductive, semi-supervised learning approach. Due to its transductive property, all of the data samples, only some of which are labeled, are given as input to the model. Since labeling is costly, this study tries to answer the following research question: what is the minimum number of labeled samples needed to achieve optimal model accuracy? In addition, experimental contributions are made on how the model's accuracy varies with the sampling approach used under a fixed labeling effort. According to the experiments, the accuracy of the model can be increased by using the local centrality metric.
2407.12512
Fengyu Cai
Fengyu Cai, Xinran Zhao, Hongming Zhang, Iryna Gurevych, Heinz Koeppl
$\textit{GeoHard}$: Towards Measuring Class-wise Hardness through Modelling Class Semantics
Findings of ACL 2024
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Recent advances in measuring hardness-wise properties of data guide language models in sample selection within low-resource scenarios. However, class-specific properties are overlooked for task setup and learning. How do these properties influence model learning, and are they generalizable across datasets? To answer this question, this work formally initiates the concept of $\textit{class-wise hardness}$. Experiments across eight natural language understanding (NLU) datasets demonstrate a consistent hardness distribution across learning paradigms, models, and human judgment. Subsequent experiments unveil a notable challenge in measuring such class-wise hardness with instance-level metrics in previous works. To address this, we propose $\textit{GeoHard}$ for class-wise hardness measurement by modeling class geometry in the semantic embedding space. $\textit{GeoHard}$ surpasses instance-level metrics by over 59 percent in $\textit{Pearson}$'s correlation when measuring class-wise hardness. Our analysis theoretically and empirically underscores the generality of $\textit{GeoHard}$ as a fresh perspective on data diagnosis. Additionally, we showcase how understanding class-wise hardness can practically aid in improving task learning.
[ { "created": "Wed, 17 Jul 2024 11:53:39 GMT", "version": "v1" } ]
2024-07-18
[ [ "Cai", "Fengyu", "" ], [ "Zhao", "Xinran", "" ], [ "Zhang", "Hongming", "" ], [ "Gurevych", "Iryna", "" ], [ "Koeppl", "Heinz", "" ] ]
Recent advances in measuring hardness-wise properties of data guide language models in sample selection within low-resource scenarios. However, class-specific properties are overlooked for task setup and learning. How do these properties influence model learning, and are they generalizable across datasets? To answer this question, this work formally initiates the concept of $\textit{class-wise hardness}$. Experiments across eight natural language understanding (NLU) datasets demonstrate a consistent hardness distribution across learning paradigms, models, and human judgment. Subsequent experiments unveil a notable challenge in measuring such class-wise hardness with instance-level metrics in previous works. To address this, we propose $\textit{GeoHard}$ for class-wise hardness measurement by modeling class geometry in the semantic embedding space. $\textit{GeoHard}$ surpasses instance-level metrics by over 59 percent in $\textit{Pearson}$'s correlation when measuring class-wise hardness. Our analysis theoretically and empirically underscores the generality of $\textit{GeoHard}$ as a fresh perspective on data diagnosis. Additionally, we showcase how understanding class-wise hardness can practically aid in improving task learning.
1710.06993
Hanjiang Lai
Hanjiang Lai and Yan Pan
Improved Search in Hamming Space using Deep Multi-Index Hashing
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Similarity-preserving hashing is a widely-used method for nearest neighbour search in large-scale image retrieval tasks. There has been considerable research on generating efficient image representation via the deep-network-based hashing methods. However, the issue of efficient searching in the deep representation space remains largely unsolved. To this end, we propose a simple yet efficient deep-network-based multi-index hashing method for simultaneously learning a powerful image representation and efficient searching. To achieve these two goals, we introduce the multi-index hashing (MIH) mechanism into the proposed deep architecture, which divides the binary codes into multiple substrings. Since non-uniformly distributed codes result in inefficient searching, we add two balance constraints at the feature level and the instance level, respectively. Extensive evaluations on several benchmark image retrieval datasets show that the learned balanced binary codes bring dramatic speedups and achieve comparable performance over the existing baselines.
[ { "created": "Thu, 19 Oct 2017 02:51:12 GMT", "version": "v1" } ]
2017-10-20
[ [ "Lai", "Hanjiang", "" ], [ "Pan", "Yan", "" ] ]
Similarity-preserving hashing is a widely used method for nearest neighbour search in large-scale image retrieval tasks. There has been considerable research on generating efficient image representations via deep-network-based hashing methods. However, the issue of efficient searching in the deep representation space remains largely unsolved. To this end, we propose a simple yet efficient deep-network-based multi-index hashing method that simultaneously learns powerful image representations and enables efficient searching. To achieve these two goals, we introduce the multi-index hashing (MIH) mechanism into the proposed deep architecture, which divides the binary codes into multiple substrings. Since non-uniformly distributed codes result in inefficient searching, we add two balance constraints at the feature level and the instance level, respectively. Extensive evaluations on several benchmark image retrieval datasets show that the learned balanced binary codes bring dramatic speedups and achieve performance comparable to existing baselines.
2005.07289
Soren Pirk
Yao Lu, S\"oren Pirk, Jan Dlabal, Anthony Brohan, Ankita Pasad, Zhao Chen, Vincent Casser, Anelia Angelova, Ariel Gordon
Taskology: Utilizing Task Relations at Scale
IEEE Conference on Computer Vision and Pattern Recognition, 2021
null
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Many computer vision tasks address the problem of scene understanding and are naturally interrelated, e.g., object classification, detection, scene segmentation, depth estimation, etc. We show that we can leverage the inherent relationships among collections of tasks, as they are trained jointly, supervising each other through their known relationships via consistency losses. Furthermore, explicitly utilizing the relationships between tasks allows improving their performance while dramatically reducing the need for labeled data, and allows training with additional unsupervised or simulated data. We demonstrate a distributed joint training algorithm with task-level parallelism, which affords a high degree of asynchronicity and robustness. This allows learning across multiple tasks, or with large amounts of input data, at scale. We demonstrate our framework on subsets of the following collection of tasks: depth and normal prediction, semantic segmentation, 3D motion and ego-motion estimation, and object tracking and 3D detection in point clouds. We observe improved performance across these tasks, especially in the low-label regime.
[ { "created": "Thu, 14 May 2020 22:53:46 GMT", "version": "v1" }, { "created": "Wed, 17 Mar 2021 04:10:16 GMT", "version": "v2" } ]
2021-03-18
[ [ "Lu", "Yao", "" ], [ "Pirk", "Sören", "" ], [ "Dlabal", "Jan", "" ], [ "Brohan", "Anthony", "" ], [ "Pasad", "Ankita", "" ], [ "Chen", "Zhao", "" ], [ "Casser", "Vincent", "" ], [ "Angelova", "Anelia", "" ], [ "Gordon", "Ariel", "" ] ]
Many computer vision tasks address the problem of scene understanding and are naturally interrelated, e.g., object classification, detection, scene segmentation, depth estimation, etc. We show that we can leverage the inherent relationships among collections of tasks, as they are trained jointly, supervising each other through their known relationships via consistency losses. Furthermore, explicitly utilizing the relationships between tasks allows improving their performance while dramatically reducing the need for labeled data, and allows training with additional unsupervised or simulated data. We demonstrate a distributed joint training algorithm with task-level parallelism, which affords a high degree of asynchronicity and robustness. This allows learning across multiple tasks, or with large amounts of input data, at scale. We demonstrate our framework on subsets of the following collection of tasks: depth and normal prediction, semantic segmentation, 3D motion and ego-motion estimation, and object tracking and 3D detection in point clouds. We observe improved performance across these tasks, especially in the low-label regime.
2211.01939
Divyat Mahajan
Divyat Mahajan, Ioannis Mitliagkas, Brady Neal, Vasilis Syrgkanis
Empirical Analysis of Model Selection for Heterogeneous Causal Effect Estimation
Proceedings of the 12th International Conference on Learning Representations (ICLR), 2024. (Spotlight)
null
null
null
cs.LG cs.AI stat.ME
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the problem of model selection in causal inference, specifically for conditional average treatment effect (CATE) estimation. Unlike machine learning, there is no perfect analogue of cross-validation for model selection as we do not observe the counterfactual potential outcomes. Towards this, a variety of surrogate metrics have been proposed for CATE model selection that use only observed data. However, we do not have a good understanding regarding their effectiveness due to limited comparisons in prior studies. We conduct an extensive empirical analysis to benchmark the surrogate model selection metrics introduced in the literature, as well as the novel ones introduced in this work. We ensure a fair comparison by tuning the hyperparameters associated with these metrics via AutoML, and provide more detailed trends by incorporating realistic datasets via generative modeling. Our analysis suggests novel model selection strategies based on careful hyperparameter selection of CATE estimators and causal ensembling.
[ { "created": "Thu, 3 Nov 2022 16:26:06 GMT", "version": "v1" }, { "created": "Tue, 13 Jun 2023 02:05:24 GMT", "version": "v2" }, { "created": "Mon, 29 Apr 2024 15:34:44 GMT", "version": "v3" } ]
2024-04-30
[ [ "Mahajan", "Divyat", "" ], [ "Mitliagkas", "Ioannis", "" ], [ "Neal", "Brady", "" ], [ "Syrgkanis", "Vasilis", "" ] ]
We study the problem of model selection in causal inference, specifically for conditional average treatment effect (CATE) estimation. Unlike machine learning, there is no perfect analogue of cross-validation for model selection as we do not observe the counterfactual potential outcomes. Towards this, a variety of surrogate metrics have been proposed for CATE model selection that use only observed data. However, we do not have a good understanding regarding their effectiveness due to limited comparisons in prior studies. We conduct an extensive empirical analysis to benchmark the surrogate model selection metrics introduced in the literature, as well as the novel ones introduced in this work. We ensure a fair comparison by tuning the hyperparameters associated with these metrics via AutoML, and provide more detailed trends by incorporating realistic datasets via generative modeling. Our analysis suggests novel model selection strategies based on careful hyperparameter selection of CATE estimators and causal ensembling.
1505.04364
Kai-Fu Yang
Kai-Fu Yang, Hui Li, Chao-Yi Li, and Yong-Jie Li
Salient Structure Detection by Context-Guided Visual Search
13 pages, 15 figures
IEEE Transactions on Image Processing (TIP), 2016
10.1109/TIP.2016.2572600
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We define the task of salient structure (SS) detection to unify saliency-related tasks such as fixation prediction, salient object detection, and other detection of structures of interest. In this study, we propose a unified framework for SS detection by modeling the two-pathway-based guided search strategy of biological vision. First, a context-based spatial prior (CBSP) is extracted based on the layout of edges in the given scene along a fast visual pathway, called the non-selective pathway. This is a rough and non-selective estimate of the locations where potential SSs are present. Second, another flow of local feature extraction is executed in parallel along the selective pathway. Finally, Bayesian inference is used to integrate local cues guided by the CBSP, and to predict the exact locations of SSs in the input scene. The proposed model is invariant to the size and features of objects. Experimental results on four datasets (two fixation prediction datasets and two salient object datasets) demonstrate that our system achieves competitive performance for SS detection (i.e., both fixation prediction and salient object detection) compared to state-of-the-art methods.
[ { "created": "Sun, 17 May 2015 07:15:25 GMT", "version": "v1" } ]
2016-06-08
[ [ "Yang", "Kai-Fu", "" ], [ "Li", "Hui", "" ], [ "Li", "Chao-Yi", "" ], [ "Li", "Yong-Jie", "" ] ]
We define the task of salient structure (SS) detection to unify saliency-related tasks such as fixation prediction, salient object detection, and other detection of structures of interest. In this study, we propose a unified framework for SS detection by modeling the two-pathway-based guided search strategy of biological vision. First, a context-based spatial prior (CBSP) is extracted based on the layout of edges in the given scene along a fast visual pathway, called the non-selective pathway. This is a rough and non-selective estimate of the locations where potential SSs are present. Second, another flow of local feature extraction is executed in parallel along the selective pathway. Finally, Bayesian inference is used to integrate local cues guided by the CBSP, and to predict the exact locations of SSs in the input scene. The proposed model is invariant to the size and features of objects. Experimental results on four datasets (two fixation prediction datasets and two salient object datasets) demonstrate that our system achieves competitive performance for SS detection (i.e., both fixation prediction and salient object detection) compared to state-of-the-art methods.
2312.00462
Kerui Gu
Kerui Gu, Zhihao Li, Shiyong Liu, Jianzhuang Liu, Songcen Xu, Youliang Yan, Michael Bi Mi, Kenji Kawaguchi, Angela Yao
Learning Unorthogonalized Matrices for Rotation Estimation
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Estimating 3D rotations is a common procedure for 3D computer vision. The accuracy depends heavily on the rotation representation. One form of representation -- rotation matrices -- is popular due to its continuity, especially for pose estimation tasks. The learning process usually incorporates orthogonalization to ensure orthonormal matrices. Our work reveals, through gradient analysis, that common orthogonalization procedures based on the Gram-Schmidt process and singular value decomposition will slow down training efficiency. To this end, we advocate removing orthogonalization from the learning process and learning unorthogonalized `Pseudo' Rotation Matrices (PRoM). An optimization analysis shows that PRoM converges faster and to a better solution. By replacing the orthogonalization incorporated representation with our proposed PRoM in various rotation-related tasks, we achieve state-of-the-art results on large-scale benchmarks for human pose estimation.
[ { "created": "Fri, 1 Dec 2023 09:56:29 GMT", "version": "v1" } ]
2023-12-04
[ [ "Gu", "Kerui", "" ], [ "Li", "Zhihao", "" ], [ "Liu", "Shiyong", "" ], [ "Liu", "Jianzhuang", "" ], [ "Xu", "Songcen", "" ], [ "Yan", "Youliang", "" ], [ "Mi", "Michael Bi", "" ], [ "Kawaguchi", "Kenji", "" ], [ "Yao", "Angela", "" ] ]
Estimating 3D rotations is a common procedure for 3D computer vision. The accuracy depends heavily on the rotation representation. One form of representation -- rotation matrices -- is popular due to its continuity, especially for pose estimation tasks. The learning process usually incorporates orthogonalization to ensure orthonormal matrices. Our work reveals, through gradient analysis, that common orthogonalization procedures based on the Gram-Schmidt process and singular value decomposition will slow down training efficiency. To this end, we advocate removing orthogonalization from the learning process and learning unorthogonalized `Pseudo' Rotation Matrices (PRoM). An optimization analysis shows that PRoM converges faster and to a better solution. By replacing the orthogonalization incorporated representation with our proposed PRoM in various rotation-related tasks, we achieve state-of-the-art results on large-scale benchmarks for human pose estimation.
1805.06875
Alexander Richard
Alexander Richard, Hilde Kuehne, Ahsan Iqbal, Juergen Gall
NeuralNetwork-Viterbi: A Framework for Weakly Supervised Video Learning
CVPR 2018
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Video learning is an important task in computer vision and has experienced increasing interest in recent years. Since even a small number of videos easily comprises several million frames, methods that do not rely on frame-level annotation are of special importance. In this work, we propose a novel learning algorithm with a Viterbi-based loss that allows for online and incremental learning of weakly annotated video data. We moreover show that explicit context and length modeling leads to huge improvements in video segmentation and labeling tasks, and we include these models in our framework. On several action segmentation benchmarks, we obtain an improvement of up to 10% compared to current state-of-the-art methods.
[ { "created": "Thu, 17 May 2018 17:36:42 GMT", "version": "v1" } ]
2018-05-18
[ [ "Richard", "Alexander", "" ], [ "Kuehne", "Hilde", "" ], [ "Iqbal", "Ahsan", "" ], [ "Gall", "Juergen", "" ] ]
Video learning is an important task in computer vision and has experienced increasing interest in recent years. Since even a small number of videos easily comprises several million frames, methods that do not rely on frame-level annotation are of special importance. In this work, we propose a novel learning algorithm with a Viterbi-based loss that allows for online and incremental learning of weakly annotated video data. We moreover show that explicit context and length modeling leads to huge improvements in video segmentation and labeling tasks, and we include these models in our framework. On several action segmentation benchmarks, we obtain an improvement of up to 10% compared to current state-of-the-art methods.
2404.00560
Bing Liu
Changnan Xiao and Bing Liu
A Theory for Length Generalization in Learning to Reason
arXiv admin note: text overlap with arXiv:2311.16173
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
Length generalization (LG) is a challenging problem in learning to reason. It refers to the phenomenon that when trained on reasoning problems of smaller lengths or sizes, the resulting model struggles with problems of larger sizes or lengths. Although LG has been studied by many researchers, the challenge remains. This paper proposes a theoretical study of LG for problems whose reasoning processes can be modeled as DAGs (directed acyclic graphs). The paper first identifies and proves the conditions under which LG can be achieved in learning to reason. It then designs problem representations based on the theory to learn to solve challenging reasoning problems like parity, addition, and multiplication, using a Transformer to achieve perfect LG.
[ { "created": "Sun, 31 Mar 2024 04:44:22 GMT", "version": "v1" } ]
2024-04-02
[ [ "Xiao", "Changnan", "" ], [ "Liu", "Bing", "" ] ]
Length generalization (LG) is a challenging problem in learning to reason. It refers to the phenomenon that when trained on reasoning problems of smaller lengths or sizes, the resulting model struggles with problems of larger sizes or lengths. Although LG has been studied by many researchers, the challenge remains. This paper proposes a theoretical study of LG for problems whose reasoning processes can be modeled as DAGs (directed acyclic graphs). The paper first identifies and proves the conditions under which LG can be achieved in learning to reason. It then designs problem representations based on the theory to learn to solve challenging reasoning problems like parity, addition, and multiplication, using a Transformer to achieve perfect LG.
1904.03241
Markus N Rabe
Kshitij Bansal, Sarah M. Loos, Markus N. Rabe, Christian Szegedy, and Stewart Wilcox
HOList: An Environment for Machine Learning of Higher-Order Theorem Proving
Accepted at ICML 2019
null
null
null
cs.LO cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present an environment, benchmark, and deep learning driven automated theorem prover for higher-order logic. Higher-order interactive theorem provers enable the formalization of arbitrary mathematical theories and thereby present an interesting, open-ended challenge for deep learning. We provide an open-source framework based on the HOL Light theorem prover that can be used as a reinforcement learning environment. HOL Light comes with a broad coverage of basic mathematical theorems on calculus and the formal proof of the Kepler conjecture, from which we derive a challenging benchmark for automated reasoning. We also present a deep reinforcement learning driven automated theorem prover, DeepHOL, with strong initial results on this benchmark.
[ { "created": "Fri, 5 Apr 2019 19:04:33 GMT", "version": "v1" }, { "created": "Fri, 24 May 2019 00:08:20 GMT", "version": "v2" }, { "created": "Fri, 1 Nov 2019 20:16:27 GMT", "version": "v3" } ]
2019-11-05
[ [ "Bansal", "Kshitij", "" ], [ "Loos", "Sarah M.", "" ], [ "Rabe", "Markus N.", "" ], [ "Szegedy", "Christian", "" ], [ "Wilcox", "Stewart", "" ] ]
We present an environment, benchmark, and deep learning driven automated theorem prover for higher-order logic. Higher-order interactive theorem provers enable the formalization of arbitrary mathematical theories and thereby present an interesting, open-ended challenge for deep learning. We provide an open-source framework based on the HOL Light theorem prover that can be used as a reinforcement learning environment. HOL Light comes with a broad coverage of basic mathematical theorems on calculus and the formal proof of the Kepler conjecture, from which we derive a challenging benchmark for automated reasoning. We also present a deep reinforcement learning driven automated theorem prover, DeepHOL, with strong initial results on this benchmark.
2110.07575
Andrew Rouditchenko
Ian Palmer, Andrew Rouditchenko, Andrei Barbu, Boris Katz, James Glass
Spoken ObjectNet: A Bias-Controlled Spoken Caption Dataset
Presented at Interspeech 2021. This version contains additional experiments on the Spoken ObjectNet test set
null
null
null
cs.CL cs.CV cs.MM eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Visually-grounded spoken language datasets can enable models to learn cross-modal correspondences with very weak supervision. However, modern audio-visual datasets contain biases that undermine the real-world performance of models trained on that data. We introduce Spoken ObjectNet, which is designed to remove some of these biases and provide a way to better evaluate how effectively models will perform in real-world scenarios. This dataset expands upon ObjectNet, which is a bias-controlled image dataset that features similar image classes to those present in ImageNet. We detail our data collection pipeline, which features several methods to improve caption quality, including automated language model checks. Lastly, we show baseline results on image retrieval and audio retrieval tasks. These results show that models trained on other datasets and then evaluated on Spoken ObjectNet tend to perform poorly due to biases in other datasets that the models have learned. We also show evidence that the performance decrease is due to the dataset controls, and not the transfer setting.
[ { "created": "Thu, 14 Oct 2021 17:38:20 GMT", "version": "v1" } ]
2021-10-15
[ [ "Palmer", "Ian", "" ], [ "Rouditchenko", "Andrew", "" ], [ "Barbu", "Andrei", "" ], [ "Katz", "Boris", "" ], [ "Glass", "James", "" ] ]
Visually-grounded spoken language datasets can enable models to learn cross-modal correspondences with very weak supervision. However, modern audio-visual datasets contain biases that undermine the real-world performance of models trained on that data. We introduce Spoken ObjectNet, which is designed to remove some of these biases and provide a way to better evaluate how effectively models will perform in real-world scenarios. This dataset expands upon ObjectNet, which is a bias-controlled image dataset that features similar image classes to those present in ImageNet. We detail our data collection pipeline, which features several methods to improve caption quality, including automated language model checks. Lastly, we show baseline results on image retrieval and audio retrieval tasks. These results show that models trained on other datasets and then evaluated on Spoken ObjectNet tend to perform poorly due to biases in other datasets that the models have learned. We also show evidence that the performance decrease is due to the dataset controls, and not the transfer setting.
2212.05638
Sangwon Kim
Sangwon Kim and Dasom Ahn and Byoung Chul Ko
Cross-Modal Learning with 3D Deformable Attention for Action Recognition
Accepted by ICCV2023
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
An important challenge in vision-based action recognition is the embedding of spatiotemporal features with two or more heterogeneous modalities into a single feature. In this study, we propose a new 3D deformable transformer for action recognition with adaptive spatiotemporal receptive fields and a cross-modal learning scheme. The 3D deformable transformer consists of three attention modules: 3D deformability, local joint stride, and temporal stride attention. The two cross-modal tokens are input into the 3D deformable attention module to create a cross-attention token with a reflected spatiotemporal correlation. Local joint stride attention is applied to spatially combine attention and pose tokens. Temporal stride attention temporally reduces the number of input tokens in the attention module and supports temporal expression learning without the simultaneous use of all tokens. The deformable transformer iterates L times and combines the last cross-modal token for classification. The proposed 3D deformable transformer was tested on the NTU60, NTU120, FineGYM, and PennAction datasets, and showed results better than or similar to pre-trained state-of-the-art methods even without a pre-training process. In addition, by visualizing important joints and correlations during action recognition through spatial joint and temporal stride attention, the potential for explainable action recognition is demonstrated.
[ { "created": "Mon, 12 Dec 2022 00:31:08 GMT", "version": "v1" }, { "created": "Wed, 8 Mar 2023 00:45:48 GMT", "version": "v2" }, { "created": "Thu, 17 Aug 2023 07:23:45 GMT", "version": "v3" } ]
2023-08-21
[ [ "Kim", "Sangwon", "" ], [ "Ahn", "Dasom", "" ], [ "Ko", "Byoung Chul", "" ] ]
An important challenge in vision-based action recognition is the embedding of spatiotemporal features with two or more heterogeneous modalities into a single feature. In this study, we propose a new 3D deformable transformer for action recognition with adaptive spatiotemporal receptive fields and a cross-modal learning scheme. The 3D deformable transformer consists of three attention modules: 3D deformability, local joint stride, and temporal stride attention. The two cross-modal tokens are input into the 3D deformable attention module to create a cross-attention token with a reflected spatiotemporal correlation. Local joint stride attention is applied to spatially combine attention and pose tokens. Temporal stride attention temporally reduces the number of input tokens in the attention module and supports temporal expression learning without the simultaneous use of all tokens. The deformable transformer iterates L times and combines the last cross-modal token for classification. The proposed 3D deformable transformer was tested on the NTU60, NTU120, FineGYM, and PennAction datasets, and showed results better than or similar to pre-trained state-of-the-art methods even without a pre-training process. In addition, by visualizing important joints and correlations during action recognition through spatial joint and temporal stride attention, the potential for explainable action recognition is demonstrated.
1912.09581
Kai-Fu Yang
Kai-Fu Yang, Wen-Wen Jiang, Teng-Fei Zhan, and Yong-Jie Li
Line Drawings of Natural Scenes Guide Visual Attention
12 pages, 10 figures
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Visual search is an important strategy of the human visual system for fast scene perception. The guided search theory suggests that the global layout or other top-down sources of scenes play a crucial role in guiding object searching. In order to verify the specific roles of scene layout and regional cues in guiding visual attention, in this work we executed a psychophysical experiment to record human fixations on line drawings of natural scenes with an eye-tracking system. We collected the fixations of ten subjects on 498 natural images and of another ten subjects on the corresponding 996 human-marked line drawings of boundaries (two boundary maps per image) under free-viewing conditions. The experimental results show that, even in the absence of basic features like color and luminance, the distribution of fixations on the line drawings has a high correlation with that on the natural images. Moreover, compared to the basic cues of regions, subjects pay more attention to the closed regions of line drawings, which are usually related to the dominant objects of the scenes. Finally, we built a computational model to demonstrate that the fixation information on the line drawings can be used to significantly improve the performance of classical bottom-up models for fixation prediction in natural scenes. These results support that Gestalt features and scene layout are important cues for guiding fast visual object searching.
[ { "created": "Thu, 19 Dec 2019 22:41:43 GMT", "version": "v1" } ]
2019-12-23
[ [ "Yang", "Kai-Fu", "" ], [ "Jiang", "Wen-Wen", "" ], [ "Zhan", "Teng-Fei", "" ], [ "Li", "Yong-Jie", "" ] ]
Visual search is an important strategy of the human visual system for fast scene perception. The guided search theory suggests that the global layout or other top-down sources of scenes play a crucial role in guiding object searching. In order to verify the specific roles of scene layout and regional cues in guiding visual attention, in this work we executed a psychophysical experiment to record human fixations on line drawings of natural scenes with an eye-tracking system. We collected the fixations of ten subjects on 498 natural images and of another ten subjects on the corresponding 996 human-marked line drawings of boundaries (two boundary maps per image) under free-viewing conditions. The experimental results show that, even in the absence of basic features like color and luminance, the distribution of fixations on the line drawings has a high correlation with that on the natural images. Moreover, compared to the basic cues of regions, subjects pay more attention to the closed regions of line drawings, which are usually related to the dominant objects of the scenes. Finally, we built a computational model to demonstrate that the fixation information on the line drawings can be used to significantly improve the performance of classical bottom-up models for fixation prediction in natural scenes. These results support that Gestalt features and scene layout are important cues for guiding fast visual object searching.
1902.10272
Ali Cheraghian
Ali Cheraghian, Shafin Rahman, Lars Petersson
Zero-shot Learning of 3D Point Cloud Objects
null
International Conference on Machine Vision Applications (MVA) (2019)
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent deep learning architectures can recognize instances of 3D point cloud objects of previously seen classes quite well. At the same time, current 3D depth camera technology allows generating/segmenting a large number of 3D point cloud objects from an arbitrary scene, for which there is no previously seen training data. A challenge for a 3D point cloud recognition system is, then, to classify objects from new, unseen, classes. This issue can be resolved by adopting a zero-shot learning (ZSL) approach for 3D data, similar to the 2D image version of the same problem. ZSL attempts to classify unseen objects by comparing semantic information (attribute/word vector) of seen and unseen classes. Here, we adapt several recent 3D point cloud recognition systems to the ZSL setting with some changes to their architectures. To the best of our knowledge, this is the first attempt to classify unseen 3D point cloud objects in the ZSL setting. A standard protocol (which includes the choice of datasets and the seen/unseen split) to evaluate such systems is also proposed. Baseline performances are reported using the new protocol on the investigated models. This investigation throws a new challenge to the 3D point cloud recognition community that may instigate numerous future works.
[ { "created": "Wed, 27 Feb 2019 00:15:31 GMT", "version": "v1" } ]
2019-02-28
[ [ "Cheraghian", "Ali", "" ], [ "Rahman", "Shafin", "" ], [ "Petersson", "Lars", "" ] ]
Recent deep learning architectures can recognize instances of 3D point cloud objects of previously seen classes quite well. At the same time, current 3D depth camera technology allows generating/segmenting a large number of 3D point cloud objects from an arbitrary scene, for which there is no previously seen training data. A challenge for a 3D point cloud recognition system is, then, to classify objects from new, unseen, classes. This issue can be resolved by adopting a zero-shot learning (ZSL) approach for 3D data, similar to the 2D image version of the same problem. ZSL attempts to classify unseen objects by comparing semantic information (attribute/word vector) of seen and unseen classes. Here, we adapt several recent 3D point cloud recognition systems to the ZSL setting with some changes to their architectures. To the best of our knowledge, this is the first attempt to classify unseen 3D point cloud objects in the ZSL setting. A standard protocol (which includes the choice of datasets and the seen/unseen split) to evaluate such systems is also proposed. Baseline performances are reported using the new protocol on the investigated models. This investigation throws a new challenge to the 3D point cloud recognition community that may instigate numerous future works.
2404.01652
Zixuan Zhang
Zixuan Zhang, Revanth Gangi Reddy, Kevin Small, Tong Zhang, Heng Ji
Towards Better Generalization in Open-Domain Question Answering by Mitigating Context Memorization
Accepted to NAACL 2024 Findings
null
null
null
cs.CL cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Open-domain Question Answering (OpenQA) aims at answering factual questions with an external large-scale knowledge corpus. However, real-world knowledge is not static; it updates and evolves continually. Such a dynamic characteristic of knowledge poses a vital challenge for these models, as the trained models need to constantly adapt to the latest information to make sure that the answers remain accurate. In addition, it is still unclear how well an OpenQA model can transfer to completely new knowledge domains. In this paper, we investigate the generalization performance of a retrieval-augmented QA model in two specific scenarios: 1) adapting to updated versions of the same knowledge corpus; 2) switching to completely different knowledge domains. We observe that the generalization challenges of OpenQA models stem from the reader's over-reliance on memorizing the knowledge from the external corpus, which hinders the model from generalizing to a new knowledge corpus. We introduce Corpus-Invariant Tuning (CIT), a simple but effective training strategy, to mitigate the knowledge over-memorization by controlling the likelihood of retrieved contexts during training. Extensive experimental results on multiple OpenQA benchmarks show that CIT achieves significantly better generalizability without compromising the model's performance in its original corpus and domain.
[ { "created": "Tue, 2 Apr 2024 05:44:50 GMT", "version": "v1" } ]
2024-04-03
[ [ "Zhang", "Zixuan", "" ], [ "Reddy", "Revanth Gangi", "" ], [ "Small", "Kevin", "" ], [ "Zhang", "Tong", "" ], [ "Ji", "Heng", "" ] ]
Open-domain Question Answering (OpenQA) aims at answering factual questions with an external large-scale knowledge corpus. However, real-world knowledge is not static; it updates and evolves continually. Such a dynamic characteristic of knowledge poses a vital challenge for these models, as the trained models need to constantly adapt to the latest information to make sure that the answers remain accurate. In addition, it is still unclear how well an OpenQA model can transfer to completely new knowledge domains. In this paper, we investigate the generalization performance of a retrieval-augmented QA model in two specific scenarios: 1) adapting to updated versions of the same knowledge corpus; 2) switching to completely different knowledge domains. We observe that the generalization challenges of OpenQA models stem from the reader's over-reliance on memorizing the knowledge from the external corpus, which hinders the model from generalizing to a new knowledge corpus. We introduce Corpus-Invariant Tuning (CIT), a simple but effective training strategy, to mitigate the knowledge over-memorization by controlling the likelihood of retrieved contexts during training. Extensive experimental results on multiple OpenQA benchmarks show that CIT achieves significantly better generalizability without compromising the model's performance in its original corpus and domain.
2106.10901
Quim Motger
Quim Motger, Xavier Franch and Jordi Marco
Software-Based Dialogue Systems: Survey, Taxonomy and Challenges
null
null
10.1145/3527450
null
cs.CL cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The use of natural language interfaces in the field of human-computer interaction is undergoing intense study through dedicated scientific and industrial research. The latest contributions in the field, including deep learning approaches like recurrent neural networks, the potential of context-aware strategies and user-centred design approaches, have brought back the attention of the community to software-based dialogue systems, generally known as conversational agents or chatbots. Nonetheless, and given the novelty of the field, a generic, context-independent overview of the current state of research on conversational agents covering all research perspectives involved is missing. Motivated by this context, this paper reports a survey of the current state of research on conversational agents through a systematic literature review of secondary studies. The conducted research is designed to develop an exhaustive perspective through a clear presentation of the aggregated knowledge published by recent literature within a variety of domains, research focuses and contexts. As a result, this research proposes a holistic taxonomy of the different dimensions involved in the conversational agents' field, which is expected to help researchers and to lay the groundwork for future research in the field of natural language interfaces.
[ { "created": "Mon, 21 Jun 2021 07:41:44 GMT", "version": "v1" }, { "created": "Tue, 6 Feb 2024 10:22:52 GMT", "version": "v2" } ]
2024-02-07
[ [ "Motger", "Quim", "" ], [ "Franch", "Xavier", "" ], [ "Marco", "Jordi", "" ] ]
The use of natural language interfaces in the field of human-computer interaction is undergoing intense study through dedicated scientific and industrial research. The latest contributions in the field, including deep learning approaches like recurrent neural networks, the potential of context-aware strategies and user-centred design approaches, have brought back the attention of the community to software-based dialogue systems, generally known as conversational agents or chatbots. Nonetheless, and given the novelty of the field, a generic, context-independent overview of the current state of research on conversational agents covering all research perspectives involved is missing. Motivated by this context, this paper reports a survey of the current state of research on conversational agents through a systematic literature review of secondary studies. The conducted research is designed to develop an exhaustive perspective through a clear presentation of the aggregated knowledge published by recent literature within a variety of domains, research focuses and contexts. As a result, this research proposes a holistic taxonomy of the different dimensions involved in the conversational agents' field, which is expected to help researchers and to lay the groundwork for future research in the field of natural language interfaces.
2401.08964
Yixin Cheng
Yixin Cheng, Kayley Lyons, Guanliang Chen, Dragan Gasevic and Zachari Swiecki
Evidence-centered Assessment for Writing with Generative AI
null
null
null
null
cs.HC
http://creativecommons.org/licenses/by/4.0/
We propose a learning analytics-based methodology for assessing the collaborative writing of humans and generative artificial intelligence. Framed by the evidence-centered design, we used elements of knowledge-telling, knowledge transformation, and cognitive presence to identify assessment claims; we used data collected from the CoAuthor writing tool as potential evidence for these claims; and we used epistemic network analysis to make inferences from the data about the claims. Our findings revealed significant differences in the writing processes of different groups of CoAuthor users, suggesting that our method is a plausible approach to assessing human-AI collaborative writing.
[ { "created": "Wed, 17 Jan 2024 04:36:06 GMT", "version": "v1" } ]
2024-01-18
[ [ "Cheng", "Yixin", "" ], [ "Lyons", "Kayley", "" ], [ "Chen", "Guanliang", "" ], [ "Gasevic", "Dragan", "" ], [ "Swiecki", "Zachari", "" ] ]
We propose a learning analytics-based methodology for assessing the collaborative writing of humans and generative artificial intelligence. Framed by the evidence-centered design, we used elements of knowledge-telling, knowledge transformation, and cognitive presence to identify assessment claims; we used data collected from the CoAuthor writing tool as potential evidence for these claims; and we used epistemic network analysis to make inferences from the data about the claims. Our findings revealed significant differences in the writing processes of different groups of CoAuthor users, suggesting that our method is a plausible approach to assessing human-AI collaborative writing.
1907.11322
Nasour Bagheri
Masoumeh Safkhani, Nasour Bagheri
Cryptanalysis of two recently proposed ultralightweight authentication protocol for IoT
Early Results
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
By expanding the connection of objects to the Internet and their entry into human life, the issue of security and privacy has become important. In order to enhance security and privacy on the Internet, many security protocols have been developed. Unfortunately, the security analyses that have been carried out on these protocols show that they are vulnerable to one or more attacks, which rules out the use of these protocols. Therefore, the need for a security protocol for the Internet of Things (IoT) has not yet been met. Recently, Khor and Sidorov cryptanalyzed the Wang et al. protocol and presented an improved version of it. In this paper, we first show that this protocol also does not provide sufficient security, and so it is not recommended for use in any application. More precisely, we present a full secret disclosure attack against this protocol, which extracts all of the protocol's secrets through just two communications with the target tag. In addition, Sidorov et al. recently proposed an ultralightweight mutual authentication RFID protocol for blockchain-enabled supply chains, supported by formal and informal security proofs. However, we present a full secret disclosure attack against this protocol as well.
[ { "created": "Thu, 25 Jul 2019 22:17:36 GMT", "version": "v1" } ]
2019-07-29
[ [ "Safkhani", "Masoumeh", "" ], [ "Bagheri", "Nasour", "" ] ]
By expanding the connection of objects to the Internet and their entry into human life, the issue of security and privacy has become important. In order to enhance security and privacy on the Internet, many security protocols have been developed. Unfortunately, the security analyses that have been carried out on these protocols show that they are vulnerable to one or more attacks, which rules out the use of these protocols. Therefore, the need for a security protocol for the Internet of Things (IoT) has not yet been met. Recently, Khor and Sidorov cryptanalyzed the Wang et al. protocol and presented an improved version of it. In this paper, we first show that this protocol also does not provide sufficient security, and so it is not recommended for use in any application. More precisely, we present a full secret disclosure attack against this protocol, which extracts all of the protocol's secrets through just two communications with the target tag. In addition, Sidorov et al. recently proposed an ultralightweight mutual authentication RFID protocol for blockchain-enabled supply chains, supported by formal and informal security proofs. However, we present a full secret disclosure attack against this protocol as well.
2310.09706
Yang Yu
Yang Yu, Qi Liu, Kai Zhang, Yuren Zhang, Chao Song, Min Hou, Yuqing Yuan, Zhihao Ye, Zaixi Zhang, Sanshi Lei Yu
AdaptSSR: Pre-training User Model with Augmentation-Adaptive Self-Supervised Ranking
Accepted by NeurIPS 2023
null
null
null
cs.IR cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
User modeling, which aims to capture users' characteristics or interests, heavily relies on task-specific labeled data and suffers from the data sparsity issue. Several recent studies tackled this problem by pre-training the user model on massive user behavior sequences with a contrastive learning task. Generally, these methods assume different views of the same behavior sequence constructed via data augmentation are semantically consistent, i.e., reflecting similar characteristics or interests of the user, and thus maximizing their agreement in the feature space. However, due to the diverse interests and heavy noise in user behaviors, existing augmentation methods tend to lose certain characteristics of the user or introduce noisy behaviors. Thus, forcing the user model to directly maximize the similarity between the augmented views may result in negative transfer. To this end, we propose to replace the contrastive learning task with a new pretext task: Augmentation-Adaptive Self-Supervised Ranking (AdaptSSR), which alleviates the requirement of semantic consistency between the augmented views while pre-training a discriminative user model. Specifically, we adopt a multiple pairwise ranking loss which trains the user model to capture the similarity orders between the implicitly augmented view, the explicitly augmented view, and views from other users. We further employ an in-batch hard negative sampling strategy to facilitate model training. Moreover, considering the distinct impacts of data augmentation on different behavior sequences, we design an augmentation-adaptive fusion mechanism to automatically adjust the similarity order constraint applied to each sample based on the estimated similarity between the augmented views. Extensive experiments on both public and industrial datasets with six downstream tasks verify the effectiveness of AdaptSSR.
[ { "created": "Sun, 15 Oct 2023 02:19:28 GMT", "version": "v1" }, { "created": "Tue, 24 Oct 2023 08:48:06 GMT", "version": "v2" } ]
2023-10-25
[ [ "Yu", "Yang", "" ], [ "Liu", "Qi", "" ], [ "Zhang", "Kai", "" ], [ "Zhang", "Yuren", "" ], [ "Song", "Chao", "" ], [ "Hou", "Min", "" ], [ "Yuan", "Yuqing", "" ], [ "Ye", "Zhihao", "" ], [ "Zhang", "Zaixi", "" ], [ "Yu", "Sanshi Lei", "" ] ]
User modeling, which aims to capture users' characteristics or interests, heavily relies on task-specific labeled data and suffers from the data sparsity issue. Several recent studies tackled this problem by pre-training the user model on massive user behavior sequences with a contrastive learning task. Generally, these methods assume different views of the same behavior sequence constructed via data augmentation are semantically consistent, i.e., reflecting similar characteristics or interests of the user, and thus maximizing their agreement in the feature space. However, due to the diverse interests and heavy noise in user behaviors, existing augmentation methods tend to lose certain characteristics of the user or introduce noisy behaviors. Thus, forcing the user model to directly maximize the similarity between the augmented views may result in negative transfer. To this end, we propose to replace the contrastive learning task with a new pretext task: Augmentation-Adaptive Self-Supervised Ranking (AdaptSSR), which alleviates the requirement of semantic consistency between the augmented views while pre-training a discriminative user model. Specifically, we adopt a multiple pairwise ranking loss which trains the user model to capture the similarity orders between the implicitly augmented view, the explicitly augmented view, and views from other users. We further employ an in-batch hard negative sampling strategy to facilitate model training. Moreover, considering the distinct impacts of data augmentation on different behavior sequences, we design an augmentation-adaptive fusion mechanism to automatically adjust the similarity order constraint applied to each sample based on the estimated similarity between the augmented views. Extensive experiments on both public and industrial datasets with six downstream tasks verify the effectiveness of AdaptSSR.
2307.10718
Mahsa Forouzesh
Mahsa Forouzesh and Patrick Thiran
Differences Between Hard and Noisy-labeled Samples: An Empirical Study
null
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
Extracting noisy or incorrectly labeled samples from a labeled dataset with hard/difficult samples is an important yet under-explored topic. Two general and often independent lines of work exist: one focuses on addressing noisy labels, and the other deals with hard samples. However, when both types of data are present, most existing methods treat them equally, which results in a decline in the overall performance of the model. In this paper, we first design various synthetic datasets with custom hardness and noisiness levels for different samples. Our proposed systematic empirical study enables us to better understand the similarities and more importantly the differences between hard-to-learn samples and incorrectly-labeled samples. These controlled experiments pave the way for the development of methods that distinguish between hard and noisy samples. Through our study, we introduce a simple yet effective metric that filters out noisy-labeled samples while keeping the hard samples. We study various data partitioning methods in the presence of label noise and observe that filtering out noisy samples from hard samples with this proposed metric results in the best datasets as evidenced by the high test accuracy achieved after models are trained on the filtered datasets. We demonstrate this for both our created synthetic datasets and for datasets with real-world label noise. Furthermore, our proposed data partitioning method significantly outperforms other methods when employed within a semi-supervised learning framework.
[ { "created": "Thu, 20 Jul 2023 09:24:23 GMT", "version": "v1" } ]
2023-07-21
[ [ "Forouzesh", "Mahsa", "" ], [ "Thiran", "Patrick", "" ] ]
Extracting noisy or incorrectly labeled samples from a labeled dataset with hard/difficult samples is an important yet under-explored topic. Two general and often independent lines of work exist: one focuses on addressing noisy labels, and the other deals with hard samples. However, when both types of data are present, most existing methods treat them equally, which results in a decline in the overall performance of the model. In this paper, we first design various synthetic datasets with custom hardness and noisiness levels for different samples. Our proposed systematic empirical study enables us to better understand the similarities and more importantly the differences between hard-to-learn samples and incorrectly-labeled samples. These controlled experiments pave the way for the development of methods that distinguish between hard and noisy samples. Through our study, we introduce a simple yet effective metric that filters out noisy-labeled samples while keeping the hard samples. We study various data partitioning methods in the presence of label noise and observe that filtering out noisy samples from hard samples with this proposed metric results in the best datasets as evidenced by the high test accuracy achieved after models are trained on the filtered datasets. We demonstrate this for both our created synthetic datasets and for datasets with real-world label noise. Furthermore, our proposed data partitioning method significantly outperforms other methods when employed within a semi-supervised learning framework.
2111.02018
Sia Huat Tan
Sia Huat Tan, Runpei Dong, Kaisheng Ma
Multi-Glimpse Network: A Robust and Efficient Classification Architecture based on Recurrent Downsampled Attention
Accepted at BMVC 2021
The British Machine Vision Conference (BMVC) 2021
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Most feedforward convolutional neural networks spend roughly the same effort on each pixel. Yet human visual recognition is an interaction between eye movements and spatial attention, in which we take several glimpses of an object in different regions. Inspired by this observation, we propose an end-to-end trainable Multi-Glimpse Network (MGNet), which aims to tackle the challenges of high computation and the lack of robustness based on a recurrent downsampled attention mechanism. Specifically, MGNet sequentially selects task-relevant regions of an image to focus on and then adaptively combines all collected information for the final prediction. MGNet exhibits strong resistance against adversarial attacks and common corruptions with less computation. Also, MGNet is inherently more interpretable, as it explicitly informs us where it focuses during each iteration. Our experiments on ImageNet100 demonstrate the potential of recurrent downsampled attention mechanisms to improve over a single feedforward pass. For example, MGNet improves accuracy by 4.76% on average under common corruptions with only 36.9% of the computational cost. Moreover, while the baseline's accuracy drops to 7.6%, MGNet maintains 44.2% accuracy under the same PGD attack strength with a ResNet-50 backbone. Our code is available at https://github.com/siahuat0727/MGNet.
[ { "created": "Wed, 3 Nov 2021 04:46:26 GMT", "version": "v1" }, { "created": "Wed, 12 Apr 2023 14:36:38 GMT", "version": "v2" } ]
2023-04-13
[ [ "Tan", "Sia Huat", "" ], [ "Dong", "Runpei", "" ], [ "Ma", "Kaisheng", "" ] ]
Most feedforward convolutional neural networks spend roughly the same effort on each pixel. Yet human visual recognition is an interaction between eye movements and spatial attention, in which we take several glimpses of an object in different regions. Inspired by this observation, we propose an end-to-end trainable Multi-Glimpse Network (MGNet), which aims to tackle the challenges of high computation and the lack of robustness based on a recurrent downsampled attention mechanism. Specifically, MGNet sequentially selects task-relevant regions of an image to focus on and then adaptively combines all collected information for the final prediction. MGNet exhibits strong resistance against adversarial attacks and common corruptions with less computation. Also, MGNet is inherently more interpretable, as it explicitly informs us where it focuses during each iteration. Our experiments on ImageNet100 demonstrate the potential of recurrent downsampled attention mechanisms to improve over a single feedforward pass. For example, MGNet improves accuracy by 4.76% on average under common corruptions with only 36.9% of the computational cost. Moreover, while the baseline's accuracy drops to 7.6%, MGNet maintains 44.2% accuracy under the same PGD attack strength with a ResNet-50 backbone. Our code is available at https://github.com/siahuat0727/MGNet.
2008.05088
Julie Iskander Dr
Julie Iskander and Mohammed Hossny
An ocular biomechanics environment for reinforcement learning
5 figures, 2 tables
null
null
null
cs.LG cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Reinforcement learning has been applied to human movement through physiologically-based biomechanical models to add insights into the neural control of these movements; it is also useful in the design of prosthetics and robotics. In this paper, we extend the use of reinforcement learning into controlling an ocular biomechanical system to perform saccades, which is one of the fastest eye movement systems. We describe an ocular environment and an agent trained using Deep Deterministic Policy Gradients method to perform saccades. The agent was able to match the desired eye position with a mean deviation angle of 3.5 +/- 1.25 degrees. The proposed framework is a first step towards using the capabilities of deep reinforcement learning to enhance our understanding of ocular biomechanics.
[ { "created": "Wed, 12 Aug 2020 03:39:37 GMT", "version": "v1" } ]
2020-08-13
[ [ "Iskander", "Julie", "" ], [ "Hossny", "Mohammed", "" ] ]
Reinforcement learning has been applied to human movement through physiologically-based biomechanical models to add insights into the neural control of these movements; it is also useful in the design of prosthetics and robotics. In this paper, we extend the use of reinforcement learning into controlling an ocular biomechanical system to perform saccades, which is one of the fastest eye movement systems. We describe an ocular environment and an agent trained using Deep Deterministic Policy Gradients method to perform saccades. The agent was able to match the desired eye position with a mean deviation angle of 3.5 +/- 1.25 degrees. The proposed framework is a first step towards using the capabilities of deep reinforcement learning to enhance our understanding of ocular biomechanics.
2211.09940
Hamed Jalali
Hamed Jalali and Gjergji Kasneci
Entry Dependent Expert Selection in Distributed Gaussian Processes Using Multilabel Classification
A condensed version of this work has been accepted at the Gaussian Processes, Spatiotemporal Modeling, and Decision-making Systems workshop during NeurIPS 2022
null
null
null
cs.LG cs.DC
http://creativecommons.org/licenses/by/4.0/
By distributing the training process, local approximation reduces the cost of the standard Gaussian Process. An ensemble technique combines local predictions from Gaussian experts trained on different partitions of the data. Ensemble methods aggregate models' predictions by assuming a perfect diversity of local predictors. Although it keeps the aggregation tractable, this assumption is often violated in practice. Even though ensemble methods provide consistent results by assuming dependencies between experts, they have a high computational cost, which is cubic in the number of experts involved. By implementing an expert selection strategy, the final aggregation step uses fewer experts and is more efficient. However, a selection approach that assigns a fixed set of experts to each new data point cannot encode the specific properties of each unique data point. This paper proposes a flexible expert selection approach based on the characteristics of entry data points. To this end, we investigate the selection task as a multi-label classification problem where the experts define labels, and each entry point is assigned to some experts. The proposed solution's prediction quality, efficiency, and asymptotic properties are discussed in detail. We demonstrate the efficacy of our method through extensive numerical experiments using synthetic and real-world data sets.
[ { "created": "Thu, 17 Nov 2022 23:23:26 GMT", "version": "v1" }, { "created": "Mon, 8 Jan 2024 15:33:20 GMT", "version": "v2" } ]
2024-01-09
[ [ "Jalali", "Hamed", "" ], [ "Kasneci", "Gjergji", "" ] ]
By distributing the training process, local approximation reduces the cost of the standard Gaussian Process. An ensemble technique combines local predictions from Gaussian experts trained on different partitions of the data. Ensemble methods aggregate models' predictions by assuming a perfect diversity of local predictors. Although it keeps the aggregation tractable, this assumption is often violated in practice. Even though ensemble methods provide consistent results by assuming dependencies between experts, they have a high computational cost, which is cubic in the number of experts involved. By implementing an expert selection strategy, the final aggregation step uses fewer experts and is more efficient. However, a selection approach that assigns a fixed set of experts to each new data point cannot encode the specific properties of each unique data point. This paper proposes a flexible expert selection approach based on the characteristics of entry data points. To this end, we investigate the selection task as a multi-label classification problem where the experts define labels, and each entry point is assigned to some experts. The proposed solution's prediction quality, efficiency, and asymptotic properties are discussed in detail. We demonstrate the efficacy of our method through extensive numerical experiments using synthetic and real-world data sets.