Dataset schema (one entry per column: name, type, and observed minimum/maximum length or count):

id: stringlengths (min 9, max 10)
submitter: stringlengths (min 1, max 64)
authors: stringlengths (min 4, max 20.7k)
title: stringlengths (min 4, max 246)
comments: stringlengths (min 1, max 523)
journal-ref: stringlengths (min 4, max 404)
doi: stringlengths (min 11, max 153)
report-no: stringlengths (min 2, max 254)
categories: stringlengths (min 5, max 98)
license: stringclasses (9 values)
orig_abstract: stringlengths (min 14, max 3.35k)
versions: listlengths (min 1, max 60)
update_date: stringlengths (min 10, max 10)
authors_parsed: listlengths (min 1, max 1.35k)
abstract: stringlengths (min 11, max 3.34k)
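The schema above describes one record per paper. A minimal sketch of how such a record might be represented, and how its JSON-encoded list fields (`versions`, `authors_parsed`) can be parsed: the field names come from the schema itself, but the class name, helper name, and loading logic below are illustrative assumptions, not part of the dataset.

```python
import json
from dataclasses import dataclass
from typing import Optional

@dataclass
class ArxivRecord:
    # Field names follow the schema above; "null" in the dump
    # marks a missing value, represented here as None.
    id: str
    submitter: str
    authors: str
    title: str
    comments: Optional[str]
    journal_ref: Optional[str]
    doi: Optional[str]
    report_no: Optional[str]
    categories: str       # space-separated arXiv categories, e.g. "cs.IT math.IT"
    license: str
    orig_abstract: str
    versions: list        # list of {"created": ..., "version": ...} dicts
    update_date: str      # "YYYY-MM-DD"
    authors_parsed: list  # list of [last, first, suffix] triples
    abstract: str

def parse_list_field(raw: str) -> list:
    """Parse a JSON-encoded list field such as `versions` or `authors_parsed`."""
    return json.loads(raw)

# The `versions` value of the first record parses directly as JSON:
versions = parse_list_field(
    '[ { "created": "Thu, 5 Oct 2017 18:02:46 GMT", "version": "v1" } ]'
)
# versions[0]["version"] == "v1"
```

The `versions` and `authors_parsed` values in the records below are stored as JSON text, so a plain `json.loads` is sufficient; the scalar columns need no parsing.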
id: 1710.02160
submitter: Diego Ruano
authors: Carlos Galindo, Fernando Hernando and Diego Ruano
title: Classical and Quantum Evaluation Codes at the Trace Roots
comments: null
journal-ref: IEEE Transactions on Information Theory. Volume 65, Issue 4, pages 2593-2602 (2019)
doi: 10.1109/TIT.2018.2868442
report-no: null
categories: cs.IT math.IT
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: We introduce a new class of evaluation linear codes by evaluating polynomials at the roots of a suitable trace function. We give conditions for self-orthogonality of these codes and their subfield-subcodes with respect to the Hermitian inner product. They allow us to construct stabilizer quantum codes over several finite fields which substantially improve the codes in the literature and that are records at [http://www.codetables.de] for the binary case. Moreover, we obtain several classical linear codes over the field $\mathbb{F}_4$ which are records at [http://www.codetables.de].
versions: [ { "created": "Thu, 5 Oct 2017 18:02:46 GMT", "version": "v1" }, { "created": "Thu, 2 Nov 2017 22:00:39 GMT", "version": "v2" } ]
update_date: 2019-11-25
authors_parsed: [ [ "Galindo", "Carlos", "" ], [ "Hernando", "Fernando", "" ], [ "Ruano", "Diego", "" ] ]
abstract: We introduce a new class of evaluation linear codes by evaluating polynomials at the roots of a suitable trace function. We give conditions for self-orthogonality of these codes and their subfield-subcodes with respect to the Hermitian inner product. They allow us to construct stabilizer quantum codes over several finite fields which substantially improve the codes in the literature and that are records at [http://www.codetables.de] for the binary case. Moreover, we obtain several classical linear codes over the field $\mathbb{F}_4$ which are records at [http://www.codetables.de].
id: 2403.04121
submitter: Subbarao Kambhampati
authors: Subbarao Kambhampati
title: Can Large Language Models Reason and Plan?
comments: arXiv admin note: text overlap with arXiv:2402.01817 (v2 add creative commons attribution to Figure 2 graphic)
journal-ref: Annals of The New York Academy of Sciences; March 2024
doi: 10.1111/nyas.15125
report-no: null
categories: cs.AI cs.CL cs.LG
license: http://creativecommons.org/licenses/by/4.0/
orig_abstract: While humans sometimes do show the capability of correcting their own erroneous guesses with self-critiquing, there seems to be no basis for that assumption in the case of LLMs.
versions: [ { "created": "Thu, 7 Mar 2024 00:36:32 GMT", "version": "v1" }, { "created": "Fri, 8 Mar 2024 19:51:14 GMT", "version": "v2" } ]
update_date: 2024-03-12
authors_parsed: [ [ "Kambhampati", "Subbarao", "" ] ]
abstract: While humans sometimes do show the capability of correcting their own erroneous guesses with self-critiquing, there seems to be no basis for that assumption in the case of LLMs.
id: 1810.08452
submitter: Rodrigo Caye Daudt
authors: Rodrigo Caye Daudt, Bertrand Le Saux, Alexandre Boulch, Yann Gousseau
title: Multitask Learning for Large-scale Semantic Change Detection
comments: Preprint submitted to Computer Vision and Image Understanding
journal-ref: null
doi: null
report-no: null
categories: cs.CV cs.LG
license: http://creativecommons.org/licenses/by-nc-sa/4.0/
orig_abstract: Change detection is one of the main problems in remote sensing, and is essential to the accurate processing and understanding of the large scale Earth observation data available through programs such as Sentinel and Landsat. Most of the recently proposed change detection methods bring deep learning to this context, but openly available change detection datasets are still very scarce, which limits the methods that can be proposed and tested. In this paper we present the first large scale high resolution semantic change detection (HRSCD) dataset, which enables the usage of deep learning methods for semantic change detection. The dataset contains coregistered RGB image pairs, pixel-wise change information and land cover information. We then propose several methods using fully convolutional neural networks to perform semantic change detection. Most notably, we present a network architecture that performs change detection and land cover mapping simultaneously, while using the predicted land cover information to help to predict changes. We also describe a sequential training scheme that allows this network to be trained without setting a hyperparameter that balances different loss functions and achieves the best overall results.
versions: [ { "created": "Fri, 19 Oct 2018 12:01:51 GMT", "version": "v1" }, { "created": "Wed, 28 Aug 2019 15:29:38 GMT", "version": "v2" } ]
update_date: 2019-08-29
authors_parsed: [ [ "Daudt", "Rodrigo Caye", "" ], [ "Saux", "Bertrand Le", "" ], [ "Boulch", "Alexandre", "" ], [ "Gousseau", "Yann", "" ] ]
abstract: Change detection is one of the main problems in remote sensing, and is essential to the accurate processing and understanding of the large scale Earth observation data available through programs such as Sentinel and Landsat. Most of the recently proposed change detection methods bring deep learning to this context, but openly available change detection datasets are still very scarce, which limits the methods that can be proposed and tested. In this paper we present the first large scale high resolution semantic change detection (HRSCD) dataset, which enables the usage of deep learning methods for semantic change detection. The dataset contains coregistered RGB image pairs, pixel-wise change information and land cover information. We then propose several methods using fully convolutional neural networks to perform semantic change detection. Most notably, we present a network architecture that performs change detection and land cover mapping simultaneously, while using the predicted land cover information to help to predict changes. We also describe a sequential training scheme that allows this network to be trained without setting a hyperparameter that balances different loss functions and achieves the best overall results.
id: 1607.00494
submitter: Mehdi Korki
authors: Hadi Zayyani, Farzan Haddadi, and Mehdi Korki
title: Double-detector for Sparse Signal Detection from One Bit Compressed Sensing Measurements
comments: 5 pages, 4 figures
journal-ref: null
doi: 10.1109/LSP.2016.2613898
report-no: null
categories: cs.IT math.IT stat.ML
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: This letter presents the sparse vector signal detection from one bit compressed sensing measurements, in contrast to the previous works which deal with scalar signal detection. In this letter, available results are extended to the vector case and the GLRT detector and the optimal quantizer design are obtained. Also, a double-detector scheme is introduced in which a sensor level threshold detector is integrated into network level GLRT to improve the performance. The detection criteria of oracle and clairvoyant detectors are also derived. Simulation results show that with careful design of the threshold detector, the overall detection performance of double-detector scheme would be better than the sign-GLRT proposed in [1] and close to oracle and clairvoyant detectors. Also, the proposed detector is applied to spectrum sensing and the results are near the well known energy detector which uses the real valued data while the proposed detector only uses the sign of the data.
versions: [ { "created": "Sat, 2 Jul 2016 11:51:25 GMT", "version": "v1" } ]
update_date: 2016-11-03
authors_parsed: [ [ "Zayyani", "Hadi", "" ], [ "Haddadi", "Farzan", "" ], [ "Korki", "Mehdi", "" ] ]
abstract: This letter presents sparse vector signal detection from one-bit compressed sensing measurements, in contrast to previous works, which deal with scalar signal detection. Available results are extended to the vector case, and the GLRT detector and the optimal quantizer design are obtained. A double-detector scheme is also introduced, in which a sensor-level threshold detector is integrated into the network-level GLRT to improve performance. The detection criteria of oracle and clairvoyant detectors are also derived. Simulation results show that, with careful design of the threshold detector, the overall detection performance of the double-detector scheme is better than the sign-GLRT proposed in [1] and close to the oracle and clairvoyant detectors. The proposed detector is also applied to spectrum sensing, where its performance is near that of the well-known energy detector, which uses real-valued data, whereas the proposed detector uses only the sign of the data.
id: 1506.01573
submitter: Lance Williams
authors: Lance R. Williams
title: Programs as Polypeptides
comments: in European Conference on Artificial Life (ECAL '15), York, UK, 2015
journal-ref: null
doi: null
report-no: null
categories: cs.NE cs.ET cs.PL
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: We describe a visual programming language for defining behaviors manifested by reified actors in a 2D virtual world that can be compiled into programs comprised of sequences of combinators that are themselves reified as actors. This makes it possible to build programs that build programs from components of a few fixed types delivered by diffusion using processes that resemble chemistry as much as computation.
versions: [ { "created": "Thu, 4 Jun 2015 13:08:04 GMT", "version": "v1" } ]
update_date: 2015-06-05
authors_parsed: [ [ "Williams", "Lance R.", "" ] ]
abstract: We describe a visual programming language for defining behaviors manifested by reified actors in a 2D virtual world that can be compiled into programs comprised of sequences of combinators that are themselves reified as actors. This makes it possible to build programs that build programs from components of a few fixed types delivered by diffusion using processes that resemble chemistry as much as computation.
id: 1404.5367
submitter: Alexandre Passos
authors: Alexandre Passos, Vineet Kumar, Andrew McCallum
title: Lexicon Infused Phrase Embeddings for Named Entity Resolution
comments: Accepted in CoNLL 2014
journal-ref: null
doi: null
report-no: null
categories: cs.CL
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: Most state-of-the-art approaches for named-entity recognition (NER) use semi supervised information in the form of word clusters and lexicons. Recently neural network-based language models have been explored, as they as a byproduct generate highly informative vector representations for words, known as word embeddings. In this paper we present two contributions: a new form of learning word embeddings that can leverage information from relevant lexicons to improve the representations, and the first system to use neural word embeddings to achieve state-of-the-art results on named-entity recognition in both CoNLL and Ontonotes NER. Our system achieves an F1 score of 90.90 on the test set for CoNLL 2003---significantly better than any previous system trained on public data, and matching a system employing massive private industrial query-log data.
versions: [ { "created": "Tue, 22 Apr 2014 02:12:06 GMT", "version": "v1" } ]
update_date: 2014-04-23
authors_parsed: [ [ "Passos", "Alexandre", "" ], [ "Kumar", "Vineet", "" ], [ "McCallum", "Andrew", "" ] ]
abstract: Most state-of-the-art approaches for named-entity recognition (NER) use semi-supervised information in the form of word clusters and lexicons. Recently, neural network-based language models have been explored, as they generate, as a byproduct, highly informative vector representations for words, known as word embeddings. In this paper we present two contributions: a new form of learning word embeddings that can leverage information from relevant lexicons to improve the representations, and the first system to use neural word embeddings to achieve state-of-the-art results on named-entity recognition in both CoNLL and Ontonotes NER. Our system achieves an F1 score of 90.90 on the test set for CoNLL 2003---significantly better than any previous system trained on public data, and matching a system employing massive private industrial query-log data.
id: 1910.08732
submitter: Kranti Kumar Parida
authors: Kranti Kumar Parida, Neeraj Matiyali, Tanaya Guha, Gaurav Sharma
title: Coordinated Joint Multimodal Embeddings for Generalized Audio-Visual Zeroshot Classification and Retrieval of Videos
comments: To appear in WACV 2020, Project Page: https://cse.iitk.ac.in/users/kranti/avzsl.html
journal-ref: null
doi: null
report-no: null
categories: cs.CV cs.MM cs.SD eess.AS
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: We present an audio-visual multimodal approach for the task of zeroshot learning (ZSL) for classification and retrieval of videos. ZSL has been studied extensively in the recent past but has primarily been limited to visual modality and to images. We demonstrate that both audio and visual modalities are important for ZSL for videos. Since a dataset to study the task is currently not available, we also construct an appropriate multimodal dataset with 33 classes containing 156,416 videos, from an existing large scale audio event dataset. We empirically show that the performance improves by adding audio modality for both tasks of zeroshot classification and retrieval, when using multimodal extensions of embedding learning methods. We also propose a novel method to predict the `dominant' modality using a jointly learned modality attention network. We learn the attention in a semi-supervised setting and thus do not require any additional explicit labelling for the modalities. We provide qualitative validation of the modality specific attention, which also successfully generalizes to unseen test classes.
versions: [ { "created": "Sat, 19 Oct 2019 09:39:28 GMT", "version": "v1" } ]
update_date: 2019-10-22
authors_parsed: [ [ "Parida", "Kranti Kumar", "" ], [ "Matiyali", "Neeraj", "" ], [ "Guha", "Tanaya", "" ], [ "Sharma", "Gaurav", "" ] ]
abstract: We present an audio-visual multimodal approach for the task of zero-shot learning (ZSL) for classification and retrieval of videos. ZSL has been studied extensively in the recent past but has primarily been limited to the visual modality and to images. We demonstrate that both audio and visual modalities are important for ZSL for videos. Since a dataset to study the task is currently not available, we also construct an appropriate multimodal dataset with 33 classes containing 156,416 videos, from an existing large-scale audio event dataset. We empirically show that performance improves by adding the audio modality for both tasks of zero-shot classification and retrieval, when using multimodal extensions of embedding learning methods. We also propose a novel method to predict the `dominant' modality using a jointly learned modality attention network. We learn the attention in a semi-supervised setting and thus do not require any additional explicit labelling for the modalities. We provide qualitative validation of the modality-specific attention, which also successfully generalizes to unseen test classes.
id: 1312.7187
submitter: Hiroshi Saito
authors: Hiroshi Saito
title: Analysis of Geometric Disaster Evaluation Model for Physical Networks
comments: 12 pages
journal-ref: null
doi: null
report-no: null
categories: cs.NI
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: A geometric model of a physical network affected by a disaster is proposed and analyzed using integral geometry (geometric probability). This analysis provides a theoretical method of evaluating performance metrics, such as the probability of maintaining connectivity, and a network design rule that can make the network robust against disasters. The proposed model is of when the disaster area is much larger than the part of the network in which we are interested. Performance metrics, such as the probability of maintaining connectivity, are explicitly given by linear functions of the perimeter length of convex hulls determined by physical routes. The derived network design rule includes the following. (1) Reducing the convex hull of the physical route reduces the expected number of nodes that cannot connect to the destination. (2) The probability of maintaining the connectivity of two nodes on a loop cannot be changed by changing the physical route of that loop. (3) The effect of introducing a loop is identical to that of a single physical route implemented by the straight-line route.
versions: [ { "created": "Fri, 27 Dec 2013 04:34:31 GMT", "version": "v1" } ]
update_date: 2013-12-30
authors_parsed: [ [ "Saito", "Hiroshi", "" ] ]
abstract: A geometric model of a physical network affected by a disaster is proposed and analyzed using integral geometry (geometric probability). This analysis provides a theoretical method of evaluating performance metrics, such as the probability of maintaining connectivity, and a network design rule that can make the network robust against disasters. The proposed model applies when the disaster area is much larger than the part of the network in which we are interested. Performance metrics, such as the probability of maintaining connectivity, are explicitly given by linear functions of the perimeter length of convex hulls determined by physical routes. The derived network design rule includes the following. (1) Reducing the convex hull of the physical route reduces the expected number of nodes that cannot connect to the destination. (2) The probability of maintaining the connectivity of two nodes on a loop cannot be changed by changing the physical route of that loop. (3) The effect of introducing a loop is identical to that of a single physical route implemented by the straight-line route.
id: 1811.04354
submitter: Xinsong Zhang
authors: Xinsong Zhang, Pengshuai Li, Weijia Jia and Hai Zhao
title: Multi-labeled Relation Extraction with Attentive Capsule Network
comments: To be published in AAAI 2019
journal-ref: null
doi: null
report-no: null
categories: cs.CL
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: To disclose overlapped multiple relations from a sentence still keeps challenging. Most current works in terms of neural models inconveniently assuming that each sentence is explicitly mapped to a relation label, cannot handle multiple relations properly as the overlapped features of the relations are either ignored or very difficult to identify. To tackle with the new issue, we propose a novel approach for multi-labeled relation extraction with capsule network which acts considerably better than current convolutional or recurrent net in identifying the highly overlapped relations within an individual sentence. To better cluster the features and precisely extract the relations, we further devise attention-based routing algorithm and sliding-margin loss function, and embed them into our capsule network. The experimental results show that the proposed approach can indeed extract the highly overlapped features and achieve significant performance improvement for relation extraction comparing to the state-of-the-art works.
versions: [ { "created": "Sun, 11 Nov 2018 05:29:17 GMT", "version": "v1" } ]
update_date: 2018-11-13
authors_parsed: [ [ "Zhang", "Xinsong", "" ], [ "Li", "Pengshuai", "" ], [ "Jia", "Weijia", "" ], [ "Zhao", "Hai", "" ] ]
abstract: Disclosing multiple overlapping relations in a sentence remains challenging. Most current neural models assume that each sentence is explicitly mapped to a single relation label and cannot handle multiple relations properly, as the overlapping features of the relations are either ignored or very difficult to identify. To tackle this issue, we propose a novel approach for multi-labeled relation extraction with a capsule network, which performs considerably better than current convolutional or recurrent networks in identifying highly overlapping relations within an individual sentence. To better cluster the features and precisely extract the relations, we further devise an attention-based routing algorithm and a sliding-margin loss function, and embed them into our capsule network. The experimental results show that the proposed approach can indeed extract highly overlapping features and achieves significant performance improvement for relation extraction compared to state-of-the-art works.
id: 2102.08449
submitter: Juan Tapia Dr.
authors: Juan Tapia, Andres Valenzuela, Rodrigo Lara, Marta Gomez-Barrero, Christoph Busch
title: Selfie Periocular Verification using an Efficient Super-Resolution Approach
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.CV eess.IV
license: http://creativecommons.org/licenses/by-nc-nd/4.0/
orig_abstract: Selfie-based biometrics has great potential for a wide range of applications since, e.g. periocular verification is contactless and is safe to use in pandemics such as COVID-19, when a major portion of a face is covered by a facial mask. Despite its advantages, selfie-based biometrics presents challenges since there is limited control over data acquisition at different distances. Therefore, Super-Resolution (SR) has to be used to increase the quality of the eye images and to keep or improve the recognition performance. We propose an Efficient Single Image Super-Resolution algorithm, which takes into account a trade-off between the efficiency and the size of its filters. To that end, the method implements a loss function based on the Sharpness metric used to evaluate iris images quality. Our method drastically reduces the number of parameters compared to the state-of-the-art: from 2,170,142 to 28,654. Our best results on remote verification systems with no redimensioning reached an EER of 8.89\% for FaceNet, 12.14% for VGGFace, and 12.81% for ArcFace. Then, embedding vectors were extracted from SR images, the FaceNet-based system yielded an EER of 8.92% for a resizing of x2, 8.85% for x3, and 9.32% for x4.
versions: [ { "created": "Tue, 16 Feb 2021 21:01:12 GMT", "version": "v1" }, { "created": "Fri, 18 Mar 2022 20:51:12 GMT", "version": "v2" } ]
update_date: 2022-03-22
authors_parsed: [ [ "Tapia", "Juan", "" ], [ "Valenzuela", "Andres", "" ], [ "Lara", "Rodrigo", "" ], [ "Gomez-Barrero", "Marta", "" ], [ "Busch", "Christoph", "" ] ]
abstract: Selfie-based biometrics has great potential for a wide range of applications since, for example, periocular verification is contactless and is safe to use in pandemics such as COVID-19, when a major portion of the face is covered by a facial mask. Despite its advantages, selfie-based biometrics presents challenges, since there is limited control over data acquisition at different distances. Therefore, Super-Resolution (SR) has to be used to increase the quality of the eye images and to keep or improve the recognition performance. We propose an Efficient Single Image Super-Resolution algorithm, which takes into account a trade-off between efficiency and the size of its filters. To that end, the method implements a loss function based on the Sharpness metric used to evaluate iris image quality. Our method drastically reduces the number of parameters compared to the state-of-the-art: from 2,170,142 to 28,654. Our best results on remote verification systems with no redimensioning reached an EER of 8.89% for FaceNet, 12.14% for VGGFace, and 12.81% for ArcFace. When embedding vectors were extracted from SR images, the FaceNet-based system yielded an EER of 8.92% for a resizing of x2, 8.85% for x3, and 9.32% for x4.
id: 2208.00949
submitter: Marek Kowalski
authors: Stephan J. Garbin, Marek Kowalski, Virginia Estellers, Stanislaw Szymanowicz, Shideh Rezaeifar, Jingjing Shen, Matthew Johnson, Julien Valentin
title: VolTeMorph: Realtime, Controllable and Generalisable Animation of Volumetric Representations
comments: 18 pages, 21 figures
journal-ref: null
doi: null
report-no: null
categories: cs.GR cs.CV
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: The recent increase in popularity of volumetric representations for scene reconstruction and novel view synthesis has put renewed focus on animating volumetric content at high visual quality and in real-time. While implicit deformation methods based on learned functions can produce impressive results, they are `black boxes' to artists and content creators, they require large amounts of training data to generalise meaningfully, and they do not produce realistic extrapolations outside the training data. In this work we solve these issues by introducing a volume deformation method which is real-time, easy to edit with off-the-shelf software and can extrapolate convincingly. To demonstrate the versatility of our method, we apply it in two scenarios: physics-based object deformation and telepresence where avatars are controlled using blendshapes. We also perform thorough experiments showing that our method compares favourably to both volumetric approaches combined with implicit deformation and methods based on mesh deformation.
versions: [ { "created": "Mon, 1 Aug 2022 16:04:38 GMT", "version": "v1" } ]
update_date: 2022-08-02
authors_parsed: [ [ "Garbin", "Stephan J.", "" ], [ "Kowalski", "Marek", "" ], [ "Estellers", "Virginia", "" ], [ "Szymanowicz", "Stanislaw", "" ], [ "Rezaeifar", "Shideh", "" ], [ "Shen", "Jingjing", "" ], [ "Johnson", "Matthew",...
abstract: The recent increase in popularity of volumetric representations for scene reconstruction and novel view synthesis has put renewed focus on animating volumetric content at high visual quality and in real-time. While implicit deformation methods based on learned functions can produce impressive results, they are `black boxes' to artists and content creators, they require large amounts of training data to generalise meaningfully, and they do not produce realistic extrapolations outside the training data. In this work we solve these issues by introducing a volume deformation method which is real-time, easy to edit with off-the-shelf software and can extrapolate convincingly. To demonstrate the versatility of our method, we apply it in two scenarios: physics-based object deformation and telepresence where avatars are controlled using blendshapes. We also perform thorough experiments showing that our method compares favourably to both volumetric approaches combined with implicit deformation and methods based on mesh deformation.
id: 2104.09957
submitter: Matt Groh
authors: Matthew Groh, Caleb Harris, Luis Soenksen, Felix Lau, Rachel Han, Aerin Kim, Arash Koochek, Omar Badri
title: Evaluating Deep Neural Networks Trained on Clinical Images in Dermatology with the Fitzpatrick 17k Dataset
comments: null
journal-ref: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1820-1828. 2021
doi: null
report-no: null
categories: cs.CV
license: http://creativecommons.org/licenses/by-nc-sa/4.0/
orig_abstract: How does the accuracy of deep neural network models trained to classify clinical images of skin conditions vary across skin color? While recent studies demonstrate computer vision models can serve as a useful decision support tool in healthcare and provide dermatologist-level classification on a number of specific tasks, darker skin is underrepresented in the data. Most publicly available data sets do not include Fitzpatrick skin type labels. We annotate 16,577 clinical images sourced from two dermatology atlases with Fitzpatrick skin type labels and open-source these annotations. Based on these labels, we find that there are significantly more images of light skin types than dark skin types in this dataset. We train a deep neural network model to classify 114 skin conditions and find that the model is most accurate on skin types similar to those it was trained on. In addition, we evaluate how an algorithmic approach to identifying skin tones, individual typology angle, compares with Fitzpatrick skin type labels annotated by a team of human labelers.
versions: [ { "created": "Tue, 20 Apr 2021 13:37:30 GMT", "version": "v1" } ]
update_date: 2022-02-28
authors_parsed: [ [ "Groh", "Matthew", "" ], [ "Harris", "Caleb", "" ], [ "Soenksen", "Luis", "" ], [ "Lau", "Felix", "" ], [ "Han", "Rachel", "" ], [ "Kim", "Aerin", "" ], [ "Koochek", "Arash", "" ], [ "Badri", "O...
abstract: How does the accuracy of deep neural network models trained to classify clinical images of skin conditions vary across skin color? While recent studies demonstrate computer vision models can serve as a useful decision support tool in healthcare and provide dermatologist-level classification on a number of specific tasks, darker skin is underrepresented in the data. Most publicly available data sets do not include Fitzpatrick skin type labels. We annotate 16,577 clinical images sourced from two dermatology atlases with Fitzpatrick skin type labels and open-source these annotations. Based on these labels, we find that there are significantly more images of light skin types than dark skin types in this dataset. We train a deep neural network model to classify 114 skin conditions and find that the model is most accurate on skin types similar to those it was trained on. In addition, we evaluate how an algorithmic approach to identifying skin tones, individual typology angle, compares with Fitzpatrick skin type labels annotated by a team of human labelers.
id: 1201.4376
submitter: Emiliano De Cristofaro
authors: Emiliano De Cristofaro and Claudio Soriente
title: Participatory Privacy: Enabling Privacy in Participatory Sensing
comments: To appear in IEEE Network. Vol. 27, No. 1. January 2013. Submitted March 2011, Accepted January 2012
journal-ref: null
doi: null
report-no: null
categories: cs.CR cs.NI
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: Participatory Sensing is an emerging computing paradigm that enables the distributed collection of data by self-selected participants. It allows the increasing number of mobile phone users to share local knowledge acquired by their sensor-equipped devices, e.g., to monitor temperature, pollution level or consumer pricing information. While research initiatives and prototypes proliferate, their real-world impact is often bounded to comprehensive user participation. If users have no incentive, or feel that their privacy might be endangered, it is likely that they will not participate. In this article, we focus on privacy protection in Participatory Sensing and introduce a suitable privacy-enhanced infrastructure. First, we provide a set of definitions of privacy requirements for both data producers (i.e., users providing sensed information) and consumers (i.e., applications accessing the data). Then, we propose an efficient solution designed for mobile phone users, which incurs very low overhead. Finally, we discuss a number of open problems and possible research directions.
versions: [ { "created": "Fri, 20 Jan 2012 20:15:53 GMT", "version": "v1" }, { "created": "Fri, 8 Feb 2013 03:22:25 GMT", "version": "v2" } ]
update_date: 2013-02-11
authors_parsed: [ [ "De Cristofaro", "Emiliano", "" ], [ "Soriente", "Claudio", "" ] ]
abstract: Participatory Sensing is an emerging computing paradigm that enables the distributed collection of data by self-selected participants. It allows the increasing number of mobile phone users to share local knowledge acquired by their sensor-equipped devices, e.g., to monitor temperature, pollution level or consumer pricing information. While research initiatives and prototypes proliferate, their real-world impact is often bounded to comprehensive user participation. If users have no incentive, or feel that their privacy might be endangered, it is likely that they will not participate. In this article, we focus on privacy protection in Participatory Sensing and introduce a suitable privacy-enhanced infrastructure. First, we provide a set of definitions of privacy requirements for both data producers (i.e., users providing sensed information) and consumers (i.e., applications accessing the data). Then, we propose an efficient solution designed for mobile phone users, which incurs very low overhead. Finally, we discuss a number of open problems and possible research directions.
id: 2203.09334
submitter: Eldon Chung
authors: Eldon Chung, Kasper Green Larsen
title: Stronger 3SUM-Indexing Lower Bounds
comments: SODA 2023
journal-ref: null
doi: 10.1137/1.9781611977554.ch19
report-no: null
categories: cs.DS cs.CC
license: http://creativecommons.org/licenses/by/4.0/
orig_abstract: The $3$SUM-Indexing problem was introduced as a data structure version of the $3$SUM problem, with the goal of proving strong conditional lower bounds for static data structures via reductions. Ideally, the conjectured hardness of $3$SUM-Indexing should be replaced by an unconditional lower bound. Unfortunately, we are far from proving this, with the strongest current lower bound being a logarithmic query time lower bound by Golovnev et al. from STOC'20. Moreover, their lower bound holds only for non-adaptive data structures and they explicitly asked for a lower bound for adaptive data structures. Our main contribution is precisely such a lower bound against adaptive data structures. As a secondary result, we also strengthen the non-adaptive lower bound of Golovnev et al. and prove strong lower bounds for $2$-bit-probe non-adaptive $3$SUM-Indexing data structures via a completely new approach that we find interesting in its own right.
versions: [ { "created": "Thu, 17 Mar 2022 14:08:10 GMT", "version": "v1" }, { "created": "Tue, 25 Oct 2022 06:07:06 GMT", "version": "v2" }, { "created": "Sat, 25 Mar 2023 17:12:54 GMT", "version": "v3" } ]
update_date: 2023-03-28
authors_parsed: [ [ "Chung", "Eldon", "" ], [ "Larsen", "Kasper Green", "" ] ]
abstract: The $3$SUM-Indexing problem was introduced as a data structure version of the $3$SUM problem, with the goal of proving strong conditional lower bounds for static data structures via reductions. Ideally, the conjectured hardness of $3$SUM-Indexing should be replaced by an unconditional lower bound. Unfortunately, we are far from proving this, with the strongest current lower bound being a logarithmic query time lower bound by Golovnev et al. from STOC'20. Moreover, their lower bound holds only for non-adaptive data structures and they explicitly asked for a lower bound for adaptive data structures. Our main contribution is precisely such a lower bound against adaptive data structures. As a secondary result, we also strengthen the non-adaptive lower bound of Golovnev et al. and prove strong lower bounds for $2$-bit-probe non-adaptive $3$SUM-Indexing data structures via a completely new approach that we find interesting in its own right.
1911.00272
Yuqing Kong
Yuqing Kong
Dominantly Truthful Multi-task Peer Prediction with a Constant Number of Tasks
To appear in SODA20
null
null
null
cs.GT econ.TH
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the setting where participants are asked multiple similar possibly subjective multi-choice questions (e.g. Do you like Panda Express? Y/N; do you like Chick-fil-A? Y/N), a series of peer prediction mechanisms are designed to incentivize honest reports and some of them achieve dominantly truthfulness: truth-telling is a dominant strategy and strictly dominate other "non-permutation strategy" with some mild conditions. However, a major issue hinders the practical usage of those mechanisms: they require the participants to perform an infinite number of tasks. When the participants perform a finite number of tasks, these mechanisms only achieve approximated dominant truthfulness. The existence of a dominantly truthful multi-task peer prediction mechanism that only requires a finite number of tasks remains to be an open question that may have a negative result, even with full prior knowledge. This paper answers this open question by proposing a new mechanism, Determinant based Mutual Information Mechanism (DMI-Mechanism), that is dominantly truthful when the number of tasks is at least 2C and the number of participants is at least 2. C is the number of choices for each question (C=2 for binary-choice questions). In addition to incentivizing honest reports, DMI-Mechanism can also be transferred into an information evaluation rule that identifies high-quality information without verification when there are at least 3 participants. To the best of our knowledge, DMI-Mechanism is the first dominantly truthful mechanism that works for a finite number of tasks, not to say a small constant number of tasks.
[ { "created": "Fri, 1 Nov 2019 09:15:46 GMT", "version": "v1" } ]
2019-11-04
[ [ "Kong", "Yuqing", "" ] ]
In the setting where participants are asked multiple similar possibly subjective multi-choice questions (e.g. Do you like Panda Express? Y/N; do you like Chick-fil-A? Y/N), a series of peer prediction mechanisms are designed to incentivize honest reports and some of them achieve dominantly truthfulness: truth-telling is a dominant strategy and strictly dominate other "non-permutation strategy" with some mild conditions. However, a major issue hinders the practical usage of those mechanisms: they require the participants to perform an infinite number of tasks. When the participants perform a finite number of tasks, these mechanisms only achieve approximated dominant truthfulness. The existence of a dominantly truthful multi-task peer prediction mechanism that only requires a finite number of tasks remains to be an open question that may have a negative result, even with full prior knowledge. This paper answers this open question by proposing a new mechanism, Determinant based Mutual Information Mechanism (DMI-Mechanism), that is dominantly truthful when the number of tasks is at least 2C and the number of participants is at least 2. C is the number of choices for each question (C=2 for binary-choice questions). In addition to incentivizing honest reports, DMI-Mechanism can also be transferred into an information evaluation rule that identifies high-quality information without verification when there are at least 3 participants. To the best of our knowledge, DMI-Mechanism is the first dominantly truthful mechanism that works for a finite number of tasks, not to say a small constant number of tasks.
1901.09608
Martin Asenov
Martin Asenov, Marius Rutkauskas, Derryck Reid, Kartic Subr, Subramanian Ramamoorthy
Active Localization of Gas Leaks using Fluid Simulation
Accepted as a journal paper at IEEE Robotics and Automation Letters (RA-L)
null
10.1109/LRA.2019.2895820
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Sensors are routinely mounted on robots to acquire various forms of measurements in spatio-temporal fields. Locating features within these fields and reconstruction (mapping) of the dense fields can be challenging in resource-constrained situations, such as when trying to locate the source of a gas leak from a small number of measurements. In such cases, a model of the underlying complex dynamics can be exploited to discover informative paths within the field. We use a fluid simulator as a model, to guide inference for the location of a gas leak. We perform localization via minimization of the discrepancy between observed measurements and gas concentrations predicted by the simulator. Our method is able to account for dynamically varying parameters of wind flow (e.g., direction and strength), and its effects on the observed distribution of gas. We develop algorithms for off-line inference as well as for on-line path discovery via active sensing. We demonstrate the efficiency, accuracy and versatility of our algorithm using experiments with a physical robot conducted in outdoor environments. We deploy an unmanned air vehicle (UAV) mounted with a CO2 sensor to automatically seek out a gas cylinder emitting CO2 via a nozzle. We evaluate the accuracy of our algorithm by measuring the error in the inferred location of the nozzle, based on which we show that our proposed approach is competitive with respect to state of the art baselines.
[ { "created": "Mon, 28 Jan 2019 11:26:49 GMT", "version": "v1" } ]
2019-01-29
[ [ "Asenov", "Martin", "" ], [ "Rutkauskas", "Marius", "" ], [ "Reid", "Derryck", "" ], [ "Subr", "Kartic", "" ], [ "Ramamoorthy", "Subramanian", "" ] ]
Sensors are routinely mounted on robots to acquire various forms of measurements in spatio-temporal fields. Locating features within these fields and reconstruction (mapping) of the dense fields can be challenging in resource-constrained situations, such as when trying to locate the source of a gas leak from a small number of measurements. In such cases, a model of the underlying complex dynamics can be exploited to discover informative paths within the field. We use a fluid simulator as a model, to guide inference for the location of a gas leak. We perform localization via minimization of the discrepancy between observed measurements and gas concentrations predicted by the simulator. Our method is able to account for dynamically varying parameters of wind flow (e.g., direction and strength), and its effects on the observed distribution of gas. We develop algorithms for off-line inference as well as for on-line path discovery via active sensing. We demonstrate the efficiency, accuracy and versatility of our algorithm using experiments with a physical robot conducted in outdoor environments. We deploy an unmanned air vehicle (UAV) mounted with a CO2 sensor to automatically seek out a gas cylinder emitting CO2 via a nozzle. We evaluate the accuracy of our algorithm by measuring the error in the inferred location of the nozzle, based on which we show that our proposed approach is competitive with respect to state of the art baselines.
2009.11067
Kiichi Tokuyama
Yukang Jiang, Kiichi Tokuyama, Yuichiro Wada, and Moeko Yajima
Correlation Coefficient Analysis of the Age of Information in Multi-Source Systems
6 pages, 4 figures
null
null
null
cs.PF math.PR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper studies the age of information (AoI) on an information updating system such that multiple sources share one server to process packets of updated information. In such systems, packets from different sources compete for the server, and thus they may suffer from being interrupted, being backlogged, and becoming stale. Therefore, in order to grasp structures of such systems, it is crucially important to study a metric indicating a correlation of different sources. In this paper, we aim to analyze the correlation of AoIs on a single-server queueing system with multiple sources. As our contribution, we provide the closed-form expression of the correlation coefficient of the AoIs. To this end, we first derive the Laplace-Stieltjes transform of the stationary distribution of each AoI for the multiple sources. Some nontrivial properties on the systems are revealed from our analysis results.
[ { "created": "Wed, 23 Sep 2020 11:40:38 GMT", "version": "v1" } ]
2020-09-24
[ [ "Jiang", "Yukang", "" ], [ "Tokuyama", "Kiichi", "" ], [ "Wada", "Yuichiro", "" ], [ "Yajima", "Moeko", "" ] ]
This paper studies the age of information (AoI) on an information updating system such that multiple sources share one server to process packets of updated information. In such systems, packets from different sources compete for the server, and thus they may suffer from being interrupted, being backlogged, and becoming stale. Therefore, in order to grasp structures of such systems, it is crucially important to study a metric indicating a correlation of different sources. In this paper, we aim to analyze the correlation of AoIs on a single-server queueing system with multiple sources. As our contribution, we provide the closed-form expression of the correlation coefficient of the AoIs. To this end, we first derive the Laplace-Stieltjes transform of the stationary distribution of each AoI for the multiple sources. Some nontrivial properties on the systems are revealed from our analysis results.
2304.04918
Xiaofeng Zhu
Xiaofeng Zhu, Thomas Lin, Vishal Anand, Matthew Calderwood, Eric Clausen-Brown, Gord Lueck, Wen-wai Yim, Cheng Wu
Explicit and Implicit Semantic Ranking Framework
null
Companion Proceedings of the ACM Web Conference 2023 (WWW '23 Companion), April 30-May 4, 2023, Austin, TX, USA
10.1145/3543873.3584621
null
cs.IR cs.AI
http://creativecommons.org/licenses/by/4.0/
The core challenge in numerous real-world applications is to match an inquiry to the best document from a mutable and finite set of candidates. Existing industry solutions, especially latency-constrained services, often rely on similarity algorithms that sacrifice quality for speed. In this paper we introduce a generic semantic learning-to-rank framework, Self-training Semantic Cross-attention Ranking (sRank). This transformer-based framework uses linear pairwise loss with mutable training batch sizes and achieves quality gains and high efficiency, and has been applied effectively to show gains on two industry tasks at Microsoft over real-world large-scale data sets: Smart Reply (SR) and Ambient Clinical Intelligence (ACI). In Smart Reply, $sRank$ assists live customers with technical support by selecting the best reply from predefined solutions based on consumer and support agent messages. It achieves 11.7% gain in offline top-one accuracy on the SR task over the previous system, and has enabled 38.7% time reduction in composing messages in telemetry recorded since its general release in January 2021. In the ACI task, sRank selects relevant historical physician templates that serve as guidance for a text summarization model to generate higher quality medical notes. It achieves 35.5% top-one accuracy gain, along with 46% relative ROUGE-L gain in generated medical notes.
[ { "created": "Tue, 11 Apr 2023 01:10:49 GMT", "version": "v1" } ]
2023-04-12
[ [ "Zhu", "Xiaofeng", "" ], [ "Lin", "Thomas", "" ], [ "Anand", "Vishal", "" ], [ "Calderwood", "Matthew", "" ], [ "Clausen-Brown", "Eric", "" ], [ "Lueck", "Gord", "" ], [ "Yim", "Wen-wai", "" ], [ "W...
The core challenge in numerous real-world applications is to match an inquiry to the best document from a mutable and finite set of candidates. Existing industry solutions, especially latency-constrained services, often rely on similarity algorithms that sacrifice quality for speed. In this paper we introduce a generic semantic learning-to-rank framework, Self-training Semantic Cross-attention Ranking (sRank). This transformer-based framework uses linear pairwise loss with mutable training batch sizes and achieves quality gains and high efficiency, and has been applied effectively to show gains on two industry tasks at Microsoft over real-world large-scale data sets: Smart Reply (SR) and Ambient Clinical Intelligence (ACI). In Smart Reply, $sRank$ assists live customers with technical support by selecting the best reply from predefined solutions based on consumer and support agent messages. It achieves 11.7% gain in offline top-one accuracy on the SR task over the previous system, and has enabled 38.7% time reduction in composing messages in telemetry recorded since its general release in January 2021. In the ACI task, sRank selects relevant historical physician templates that serve as guidance for a text summarization model to generate higher quality medical notes. It achieves 35.5% top-one accuracy gain, along with 46% relative ROUGE-L gain in generated medical notes.
1909.13003
Yunbo Wang
Yunbo Wang, Bo Liu, Jiajun Wu, Yuke Zhu, Simon S. Du, Li Fei-Fei, Joshua B. Tenenbaum
DualSMC: Tunneling Differentiable Filtering and Planning under Continuous POMDPs
IJCAI 2020
null
null
null
cs.LG cs.AI cs.RO stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A major difficulty of solving continuous POMDPs is to infer the multi-modal distribution of the unobserved true states and to make the planning algorithm dependent on the perceived uncertainty. We cast POMDP filtering and planning problems as two closely related Sequential Monte Carlo (SMC) processes, one over the real states and the other over the future optimal trajectories, and combine the merits of these two parts in a new model named the DualSMC network. In particular, we first introduce an adversarial particle filter that leverages the adversarial relationship between its internal components. Based on the filtering results, we then propose a planning algorithm that extends the previous SMC planning approach [Piche et al., 2018] to continuous POMDPs with an uncertainty-dependent policy. Crucially, not only can DualSMC handle complex observations such as image input but also it remains highly interpretable. It is shown to be effective in three continuous POMDP domains: the floor positioning domain, the 3D light-dark navigation domain, and a modified Reacher domain.
[ { "created": "Sat, 28 Sep 2019 01:52:27 GMT", "version": "v1" }, { "created": "Wed, 29 Apr 2020 07:35:53 GMT", "version": "v2" }, { "created": "Thu, 30 Apr 2020 04:23:39 GMT", "version": "v3" }, { "created": "Thu, 7 May 2020 06:27:36 GMT", "version": "v4" } ]
2020-05-08
[ [ "Wang", "Yunbo", "" ], [ "Liu", "Bo", "" ], [ "Wu", "Jiajun", "" ], [ "Zhu", "Yuke", "" ], [ "Du", "Simon S.", "" ], [ "Fei-Fei", "Li", "" ], [ "Tenenbaum", "Joshua B.", "" ] ]
A major difficulty of solving continuous POMDPs is to infer the multi-modal distribution of the unobserved true states and to make the planning algorithm dependent on the perceived uncertainty. We cast POMDP filtering and planning problems as two closely related Sequential Monte Carlo (SMC) processes, one over the real states and the other over the future optimal trajectories, and combine the merits of these two parts in a new model named the DualSMC network. In particular, we first introduce an adversarial particle filter that leverages the adversarial relationship between its internal components. Based on the filtering results, we then propose a planning algorithm that extends the previous SMC planning approach [Piche et al., 2018] to continuous POMDPs with an uncertainty-dependent policy. Crucially, not only can DualSMC handle complex observations such as image input but also it remains highly interpretable. It is shown to be effective in three continuous POMDP domains: the floor positioning domain, the 3D light-dark navigation domain, and a modified Reacher domain.
2105.00375
Harish Panneer Selvam
Harish Panneer Selvam, Yan Li, Pengyue Wang, William F. Northrop, Shashi Shekhar
Vehicle Emissions Prediction with Physics-Aware AI Models: Preliminary Results
Accepted by Association for Advancement of Artificial Intelligence (AAAI) Fall Symposium Series 2020: Physics-Guided AI to Accelerate Scientific Discovery (https://sites.google.com/vt.edu/pgai-aaai-20)
PGAI-AAAI-20(2020)
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
Given an on-board diagnostics (OBD) dataset and a physics-based emissions prediction model, this paper aims to develop an accurate and computational-efficient AI (Artificial Intelligence) method that predicts vehicle emissions. The problem is of societal importance because vehicular emissions lead to climate change and impact human health. This problem is challenging because the OBD data does not contain enough parameters needed by high-order physics models. Conversely, related work has shown that low-order physics models have poor predictive accuracy when using available OBD data. This paper uses a divergent window co-occurrence pattern detection method to develop a spatiotemporal variability-aware AI model for predicting emission values from the OBD datasets. We conducted a case study using real-world OBD data from a local public transportation agency. Results show that the proposed AI method has approximately 65% improved predictive accuracy than a non-AI low-order physics model and is approximately 35% more accurate than a baseline model.
[ { "created": "Sun, 2 May 2021 01:52:59 GMT", "version": "v1" } ]
2021-05-06
[ [ "Selvam", "Harish Panneer", "" ], [ "Li", "Yan", "" ], [ "Wang", "Pengyue", "" ], [ "Northrop", "William F.", "" ], [ "Shekhar", "Shashi", "" ] ]
Given an on-board diagnostics (OBD) dataset and a physics-based emissions prediction model, this paper aims to develop an accurate and computational-efficient AI (Artificial Intelligence) method that predicts vehicle emissions. The problem is of societal importance because vehicular emissions lead to climate change and impact human health. This problem is challenging because the OBD data does not contain enough parameters needed by high-order physics models. Conversely, related work has shown that low-order physics models have poor predictive accuracy when using available OBD data. This paper uses a divergent window co-occurrence pattern detection method to develop a spatiotemporal variability-aware AI model for predicting emission values from the OBD datasets. We conducted a case study using real-world OBD data from a local public transportation agency. Results show that the proposed AI method has approximately 65% improved predictive accuracy than a non-AI low-order physics model and is approximately 35% more accurate than a baseline model.
2107.07011
Nicola Anselmi
Marco Salucci, Nicola Anselmi, Marco Donald Migliore, and Andrea Massa
A Bayesian Compressive Sensing Approach to Robust Near-Field Antenna Characterization
Submitted to IEEE
null
10.1109/TAP.2022.3177528
null
cs.IT eess.SP math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A novel probabilistic sparsity-promoting method for robust near-field (NF) antenna characterization is proposed. It leverages on the measurements-by-design (MebD) paradigm and it exploits some a-priori information on the antenna under test (AUT) to generate an over-complete representation basis. Accordingly, the problem at hand is reformulated in a compressive sensing (CS) framework as the retrieval of a maximally-sparse distribution (with respect to the overcomplete basis) from a reduced set of measured data and then it is solved by means of a Bayesian strategy. Representative numerical results are presented to, also comparatively, assess the effectiveness of the proposed approach in reducing the "burden/cost" of the acquisition process as well as to mitigate (possible) truncation errors when dealing with space-constrained probing systems.
[ { "created": "Wed, 14 Jul 2021 21:20:32 GMT", "version": "v1" } ]
2022-10-26
[ [ "Salucci", "Marco", "" ], [ "Anselmi", "Nicola", "" ], [ "Migliore", "Marco Donald", "" ], [ "Massa", "Andrea", "" ] ]
A novel probabilistic sparsity-promoting method for robust near-field (NF) antenna characterization is proposed. It leverages on the measurements-by-design (MebD) paradigm and it exploits some a-priori information on the antenna under test (AUT) to generate an over-complete representation basis. Accordingly, the problem at hand is reformulated in a compressive sensing (CS) framework as the retrieval of a maximally-sparse distribution (with respect to the overcomplete basis) from a reduced set of measured data and then it is solved by means of a Bayesian strategy. Representative numerical results are presented to, also comparatively, assess the effectiveness of the proposed approach in reducing the "burden/cost" of the acquisition process as well as to mitigate (possible) truncation errors when dealing with space-constrained probing systems.
2310.20419
Naoki Ono
Naoki Ono, Yusuke Matsui
Relative NN-Descent: A Fast Index Construction for Graph-Based Approximate Nearest Neighbor Search
Accepted by ACMMM 2023
null
null
null
cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Approximate Nearest Neighbor Search (ANNS) is the task of finding the database vector that is closest to a given query vector. Graph-based ANNS is the family of methods with the best balance of accuracy and speed for million-scale datasets. However, graph-based methods have the disadvantage of long index construction time. Recently, many researchers have improved the tradeoff between accuracy and speed during a search. However, there is little research on accelerating index construction. We propose a fast graph construction algorithm, Relative NN-Descent (RNN-Descent). RNN-Descent combines NN-Descent, an algorithm for constructing approximate K-nearest neighbor graphs (K-NN graphs), and RNG Strategy, an algorithm for selecting edges effective for search. This algorithm allows the direct construction of graph-based indexes without ANNS. Experimental results demonstrated that the proposed method had the fastest index construction speed, while its search performance is comparable to existing state-of-the-art methods such as NSG. For example, in experiments on the GIST1M dataset, the construction of the proposed method is 2x faster than NSG. Additionally, it was even faster than the construction speed of NN-Descent.
[ { "created": "Tue, 31 Oct 2023 12:46:18 GMT", "version": "v1" } ]
2023-11-01
[ [ "Ono", "Naoki", "" ], [ "Matsui", "Yusuke", "" ] ]
Approximate Nearest Neighbor Search (ANNS) is the task of finding the database vector that is closest to a given query vector. Graph-based ANNS is the family of methods with the best balance of accuracy and speed for million-scale datasets. However, graph-based methods have the disadvantage of long index construction time. Recently, many researchers have improved the tradeoff between accuracy and speed during a search. However, there is little research on accelerating index construction. We propose a fast graph construction algorithm, Relative NN-Descent (RNN-Descent). RNN-Descent combines NN-Descent, an algorithm for constructing approximate K-nearest neighbor graphs (K-NN graphs), and RNG Strategy, an algorithm for selecting edges effective for search. This algorithm allows the direct construction of graph-based indexes without ANNS. Experimental results demonstrated that the proposed method had the fastest index construction speed, while its search performance is comparable to existing state-of-the-art methods such as NSG. For example, in experiments on the GIST1M dataset, the construction of the proposed method is 2x faster than NSG. Additionally, it was even faster than the construction speed of NN-Descent.
2305.02797
Jan Philip Wahle
Mohamed Abdalla and Jan Philip Wahle and Terry Ruas and Aur\'elie N\'ev\'eol and Fanny Ducel and Saif M. Mohammad and Kar\"en Fort
The Elephant in the Room: Analyzing the Presence of Big Tech in Natural Language Processing Research
Published at ACL 2023
ACL 2023
10.18653/v1/2023.acl-long.734
null
cs.CL
http://creativecommons.org/licenses/by-sa/4.0/
Recent advances in deep learning methods for natural language processing (NLP) have created new business opportunities and made NLP research critical for industry development. As one of the big players in the field of NLP, together with governments and universities, it is important to track the influence of industry on research. In this study, we seek to quantify and characterize industry presence in the NLP community over time. Using a corpus with comprehensive metadata of 78,187 NLP publications and 701 resumes of NLP publication authors, we explore the industry presence in the field since the early 90s. We find that industry presence among NLP authors has been steady before a steep increase over the past five years (180% growth from 2017 to 2022). A few companies account for most of the publications and provide funding to academic researchers through grants and internships. Our study shows that the presence and impact of the industry on natural language processing research are significant and fast-growing. This work calls for increased transparency of industry influence in the field.
[ { "created": "Thu, 4 May 2023 12:57:18 GMT", "version": "v1" }, { "created": "Tue, 9 May 2023 10:39:11 GMT", "version": "v2" }, { "created": "Mon, 1 Jul 2024 12:30:57 GMT", "version": "v3" }, { "created": "Tue, 16 Jul 2024 08:53:19 GMT", "version": "v4" } ]
2024-07-17
[ [ "Abdalla", "Mohamed", "" ], [ "Wahle", "Jan Philip", "" ], [ "Ruas", "Terry", "" ], [ "Névéol", "Aurélie", "" ], [ "Ducel", "Fanny", "" ], [ "Mohammad", "Saif M.", "" ], [ "Fort", "Karën", "" ] ]
Recent advances in deep learning methods for natural language processing (NLP) have created new business opportunities and made NLP research critical for industry development. As one of the big players in the field of NLP, together with governments and universities, it is important to track the influence of industry on research. In this study, we seek to quantify and characterize industry presence in the NLP community over time. Using a corpus with comprehensive metadata of 78,187 NLP publications and 701 resumes of NLP publication authors, we explore the industry presence in the field since the early 90s. We find that industry presence among NLP authors has been steady before a steep increase over the past five years (180% growth from 2017 to 2022). A few companies account for most of the publications and provide funding to academic researchers through grants and internships. Our study shows that the presence and impact of the industry on natural language processing research are significant and fast-growing. This work calls for increased transparency of industry influence in the field.
1904.06145
Ari Heljakka
Ari Heljakka, Arno Solin, Juho Kannala
Towards Photographic Image Manipulation with Balanced Growing of Generative Autoencoders
WACV 2020
null
null
null
cs.LG cs.CV stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a generative autoencoder that provides fast encoding, faithful reconstructions (eg. retaining the identity of a face), sharp generated/reconstructed samples in high resolutions, and a well-structured latent space that supports semantic manipulation of the inputs. There are no current autoencoder or GAN models that satisfactorily achieve all of these. We build on the progressively growing autoencoder model PIONEER, for which we completely alter the training dynamics based on a careful analysis of recently introduced normalization schemes. We show significantly improved visual and quantitative results for face identity conservation in CelebAHQ. Our model achieves state-of-the-art disentanglement of latent space, both quantitatively and via realistic image attribute manipulations. On the LSUN Bedrooms dataset, we improve the disentanglement performance of the vanilla PIONEER, despite having a simpler model. Overall, our results indicate that the PIONEER networks provide a way towards photorealistic face manipulation.
[ { "created": "Fri, 12 Apr 2019 10:31:45 GMT", "version": "v1" }, { "created": "Thu, 20 Feb 2020 18:36:05 GMT", "version": "v2" } ]
2020-02-21
[ [ "Heljakka", "Ari", "" ], [ "Solin", "Arno", "" ], [ "Kannala", "Juho", "" ] ]
We present a generative autoencoder that provides fast encoding, faithful reconstructions (eg. retaining the identity of a face), sharp generated/reconstructed samples in high resolutions, and a well-structured latent space that supports semantic manipulation of the inputs. There are no current autoencoder or GAN models that satisfactorily achieve all of these. We build on the progressively growing autoencoder model PIONEER, for which we completely alter the training dynamics based on a careful analysis of recently introduced normalization schemes. We show significantly improved visual and quantitative results for face identity conservation in CelebAHQ. Our model achieves state-of-the-art disentanglement of latent space, both quantitatively and via realistic image attribute manipulations. On the LSUN Bedrooms dataset, we improve the disentanglement performance of the vanilla PIONEER, despite having a simpler model. Overall, our results indicate that the PIONEER networks provide a way towards photorealistic face manipulation.
2002.10898
Tesshu Hanaka
Hans L. Bodlaender, Tesshu Hanaka, Lars Jaffke, Hirotaka Ono, Yota Otachi and Tom C. van der Zanden
Hedonic Seat Arrangement Problems
null
null
null
null
cs.GT cs.CC cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we study a variant of hedonic games, called \textsc{Seat Arrangement}. The model is defined by a bijection from agents with preferences to vertices in a graph. The utility of an agent depends on the neighbors assigned in the graph. More precisely, it is the sum over all neighbors of the preferences that the agent has towards the agent assigned to the neighbor. We first consider the price of stability and fairness for different classes of preferences. In particular, we show that there is an instance such that the price of fairness ({\sf PoF}) is unbounded in general. Moreover, we show an upper bound $\tilde{d}(G)$ and an almost tight lower bound $\tilde{d}(G)-1/4$ of {\sf PoF}, where $\tilde{d}(G)$ is the average degree of an input graph. Then we investigate the computational complexity of problems to find certain ``good'' seat arrangements, say \textsc{Maximum Welfare Arrangement}, \textsc{Maximin Utility Arrangement}, \textsc{Stable Arrangement}, and \textsc{Envy-free Arrangement}. We give dichotomies of computational complexity of four \textsc{Seat Arrangement} problems from the perspective of the maximum order of connected components in an input graph. For the parameterized complexity, \textsc{Maximum Welfare Arrangement} can be solved in time $n^{O(\gamma)}$, while it cannot be solved in time $f(\gamma)^{o(\gamma)}$ under ETH, where $\gamma$ is the vertex cover number of an input graph. Moreover, we show that \textsc{Maximin Utility Arrangement} and \textsc{Envy-free Arrangement} are weakly NP-hard even on graphs of bounded vertex cover number. Finally, we prove that determining whether a stable arrangement can be obtained from a given arrangement by $k$ swaps is W[1]-hard when parameterized by $k+\gamma$, whereas it can be solved in time $n^{O(k)}$.
[ { "created": "Tue, 25 Feb 2020 14:38:14 GMT", "version": "v1" } ]
2020-02-26
[ [ "Bodlaender", "Hans L.", "" ], [ "Hanaka", "Tesshu", "" ], [ "Jaffke", "Lars", "" ], [ "Ono", "Hirotaka", "" ], [ "Otachi", "Yota", "" ], [ "van der Zanden", "Tom C.", "" ] ]
In this paper, we study a variant of hedonic games, called \textsc{Seat Arrangement}. The model is defined by a bijection from agents with preferences to vertices in a graph. The utility of an agent depends on the neighbors assigned in the graph. More precisely, it is the sum over all neighbors of the preferences that the agent has towards the agent assigned to the neighbor. We first consider the price of stability and fairness for different classes of preferences. In particular, we show that there is an instance such that the price of fairness ({\sf PoF}) is unbounded in general. Moreover, we show an upper bound $\tilde{d}(G)$ and an almost tight lower bound $\tilde{d}(G)-1/4$ of {\sf PoF}, where $\tilde{d}(G)$ is the average degree of an input graph. Then we investigate the computational complexity of problems to find certain ``good'' seat arrangements, say \textsc{Maximum Welfare Arrangement}, \textsc{Maximin Utility Arrangement}, \textsc{Stable Arrangement}, and \textsc{Envy-free Arrangement}. We give dichotomies of computational complexity of four \textsc{Seat Arrangement} problems from the perspective of the maximum order of connected components in an input graph. For the parameterized complexity, \textsc{Maximum Welfare Arrangement} can be solved in time $n^{O(\gamma)}$, while it cannot be solved in time $f(\gamma)^{o(\gamma)}$ under ETH, where $\gamma$ is the vertex cover number of an input graph. Moreover, we show that \textsc{Maximin Utility Arrangement} and \textsc{Envy-free Arrangement} are weakly NP-hard even on graphs of bounded vertex cover number. Finally, we prove that determining whether a stable arrangement can be obtained from a given arrangement by $k$ swaps is W[1]-hard when parameterized by $k+\gamma$, whereas it can be solved in time $n^{O(k)}$.
2401.00081
Vamsi Potluru
Vamsi K. Potluru, Daniel Borrajo, Andrea Coletta, Niccol\`o Dalmasso, Yousef El-Laham, Elizabeth Fons, Mohsen Ghassemi, Sriram Gopalakrishnan, Vikesh Gosai, Eleonora Krea\v{c}i\'c, Ganapathy Mani, Saheed Obitayo, Deepak Paramanand, Natraj Raman, Mikhail Solonin, Srijan Sood, Svitlana Vyetrenko, Haibei Zhu, Manuela Veloso, Tucker Balch
Synthetic Data Applications in Finance
50 pages, journal submission; updated 6 privacy levels
null
null
null
cs.LG q-fin.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Synthetic data has made tremendous strides in various commercial settings including finance, healthcare, and virtual reality. We present a broad overview of prototypical applications of synthetic data in the financial sector and in particular provide richer details for a few select ones. These cover a wide variety of data modalities including tabular, time-series, event-series, and unstructured arising from both markets and retail financial applications. Since finance is a highly regulated industry, synthetic data is a potential approach for dealing with issues related to privacy, fairness, and explainability. Various metrics are utilized in evaluating the quality and effectiveness of our approaches in these applications. We conclude with open directions in synthetic data in the context of the financial domain.
[ { "created": "Fri, 29 Dec 2023 21:49:23 GMT", "version": "v1" }, { "created": "Wed, 20 Mar 2024 20:21:35 GMT", "version": "v2" } ]
2024-03-22
[ [ "Potluru", "Vamsi K.", "" ], [ "Borrajo", "Daniel", "" ], [ "Coletta", "Andrea", "" ], [ "Dalmasso", "Niccolò", "" ], [ "El-Laham", "Yousef", "" ], [ "Fons", "Elizabeth", "" ], [ "Ghassemi", "Mohsen", "" ...
Synthetic data has made tremendous strides in various commercial settings including finance, healthcare, and virtual reality. We present a broad overview of prototypical applications of synthetic data in the financial sector and in particular provide richer details for a few select ones. These cover a wide variety of data modalities including tabular, time-series, event-series, and unstructured arising from both markets and retail financial applications. Since finance is a highly regulated industry, synthetic data is a potential approach for dealing with issues related to privacy, fairness, and explainability. Various metrics are utilized in evaluating the quality and effectiveness of our approaches in these applications. We conclude with open directions in synthetic data in the context of the financial domain.
2008.05242
Dong Hwan Kim
Myoungha Song, Jeongho Lee, Donghwan Kim
PAM:Point-wise Attention Module for 6D Object Pose Estimation
11 pages, 5 figures
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
6D pose estimation refers to object recognition and the estimation of 3D rotation and 3D translation. The key to estimating 6D pose is extracting features rich enough to recover the pose in any environment. Previous methods utilized depth information in the refinement process or were designed with a heterogeneous architecture for each data space to extract features. However, these methods are limited in that they cannot extract sufficient features. Therefore, this paper proposes a Point-wise Attention Module that can efficiently extract powerful features from RGB-D data. In our module, an attention map is formed through a Geometric Attention Path (GAP) and a Channel Attention Path (CAP). The GAP is designed to attend to important geometric information, and the CAP is designed to attend to important channel information. We show that the attention module efficiently creates feature representations without significantly increasing computational complexity. Experimental results show that the proposed method outperforms existing methods on the YCB Video and LineMod benchmarks. In addition, when the attention module was applied to a classification task, performance improved significantly compared to the existing model.
[ { "created": "Wed, 12 Aug 2020 11:29:48 GMT", "version": "v1" } ]
2020-08-13
[ [ "Song", "Myoungha", "" ], [ "Lee", "Jeongho", "" ], [ "Kim", "Donghwan", "" ] ]
6D pose estimation refers to object recognition and the estimation of 3D rotation and 3D translation. The key to estimating 6D pose is extracting features rich enough to recover the pose in any environment. Previous methods utilized depth information in the refinement process or were designed with a heterogeneous architecture for each data space to extract features. However, these methods are limited in that they cannot extract sufficient features. Therefore, this paper proposes a Point-wise Attention Module that can efficiently extract powerful features from RGB-D data. In our module, an attention map is formed through a Geometric Attention Path (GAP) and a Channel Attention Path (CAP). The GAP is designed to attend to important geometric information, and the CAP is designed to attend to important channel information. We show that the attention module efficiently creates feature representations without significantly increasing computational complexity. Experimental results show that the proposed method outperforms existing methods on the YCB Video and LineMod benchmarks. In addition, when the attention module was applied to a classification task, performance improved significantly compared to the existing model.
1102.4699
Shengyu Zhang
Rahul Jain, Shengyu Zhang
The influence lower bound via query elimination
5 pages
null
null
null
cs.CC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We give a simpler proof, via query elimination, of a result due to O'Donnell, Saks, Schramm and Servedio, which shows a lower bound on the zero-error randomized query complexity of a function f in terms of the maximum influence of any variable of f. Our lower bound also applies to the two-sided error distributional query complexity of f, and it allows an immediate extension which can be used to prove stronger lower bounds for some functions.
[ { "created": "Wed, 23 Feb 2011 09:38:14 GMT", "version": "v1" } ]
2011-02-24
[ [ "Jain", "Rahul", "" ], [ "Zhang", "Shengyu", "" ] ]
We give a simpler proof, via query elimination, of a result due to O'Donnell, Saks, Schramm and Servedio, which shows a lower bound on the zero-error randomized query complexity of a function f in terms of the maximum influence of any variable of f. Our lower bound also applies to the two-sided error distributional query complexity of f, and it allows an immediate extension which can be used to prove stronger lower bounds for some functions.
2110.09936
Vadim Tschernezki
Vadim Tschernezki, Diane Larlus, Andrea Vedaldi
NeuralDiff: Segmenting 3D objects that move in egocentric videos
3DV2021. Project page: https://www.robots.ox.ac.uk/~vadim/neuraldiff/
null
null
null
cs.CV cs.GR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Given a raw video sequence taken from a freely-moving camera, we study the problem of decomposing the observed 3D scene into a static background and a dynamic foreground containing the objects that move in the video sequence. This task is reminiscent of the classic background subtraction problem, but is significantly harder because all parts of the scene, static and dynamic, generate a large apparent motion due to the camera large viewpoint change. In particular, we consider egocentric videos and further separate the dynamic component into objects and the actor that observes and moves them. We achieve this factorization by reconstructing the video via a triple-stream neural rendering network that explains the different motions based on corresponding inductive biases. We demonstrate that our method can successfully separate the different types of motion, outperforming recent neural rendering baselines at this task, and can accurately segment moving objects. We do so by assessing the method empirically on challenging videos from the EPIC-KITCHENS dataset which we augment with appropriate annotations to create a new benchmark for the task of dynamic object segmentation on unconstrained video sequences, for complex 3D environments.
[ { "created": "Tue, 19 Oct 2021 12:51:35 GMT", "version": "v1" } ]
2021-10-20
[ [ "Tschernezki", "Vadim", "" ], [ "Larlus", "Diane", "" ], [ "Vedaldi", "Andrea", "" ] ]
Given a raw video sequence taken from a freely-moving camera, we study the problem of decomposing the observed 3D scene into a static background and a dynamic foreground containing the objects that move in the video sequence. This task is reminiscent of the classic background subtraction problem, but is significantly harder because all parts of the scene, static and dynamic, generate a large apparent motion due to the camera large viewpoint change. In particular, we consider egocentric videos and further separate the dynamic component into objects and the actor that observes and moves them. We achieve this factorization by reconstructing the video via a triple-stream neural rendering network that explains the different motions based on corresponding inductive biases. We demonstrate that our method can successfully separate the different types of motion, outperforming recent neural rendering baselines at this task, and can accurately segment moving objects. We do so by assessing the method empirically on challenging videos from the EPIC-KITCHENS dataset which we augment with appropriate annotations to create a new benchmark for the task of dynamic object segmentation on unconstrained video sequences, for complex 3D environments.
2203.11695
Ece Gelal Soyak
Ece Gelal Soyak, Ozgur Ercetin
Effective Communications for 6G: Challenges and Opportunities
null
null
10.1016/j.comcom.2023.12.002
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This article studies effective communication, one of the three forms identified by Weaver and Shannon, as an enabler for the upcoming 6G use cases. The envisioned tactile, holographic, and multi-sensory communications require bandwidths on the order of terabits per second and latencies on the order of microseconds for an immersive experience. We argue that a theoretical framework for transporting information tailored to end-users' goals is necessary to support such applications. Different from the recently emerging discussions focusing on the meaning of exchanged messages, we focus on using these messages to take actions in the desired way. We highlight the essential characteristics of distributed knowledge accumulation as a facilitator for this upcoming paradigm, and discuss the challenges of making effective communications a reality and the potential opportunities for future research to address these challenges. In a real-life use case, we showcase the potential reduction in the number of bits transferred owing to the transferred accumulated knowledge.
[ { "created": "Tue, 22 Mar 2022 13:07:37 GMT", "version": "v1" } ]
2024-05-29
[ [ "Soyak", "Ece Gelal", "" ], [ "Ercetin", "Ozgur", "" ] ]
This article studies effective communication, one of the three forms identified by Weaver and Shannon, as an enabler for the upcoming 6G use cases. The envisioned tactile, holographic, and multi-sensory communications require bandwidths on the order of terabits per second and latencies on the order of microseconds for an immersive experience. We argue that a theoretical framework for transporting information tailored to end-users' goals is necessary to support such applications. Different from the recently emerging discussions focusing on the meaning of exchanged messages, we focus on using these messages to take actions in the desired way. We highlight the essential characteristics of distributed knowledge accumulation as a facilitator for this upcoming paradigm, and discuss the challenges of making effective communications a reality and the potential opportunities for future research to address these challenges. In a real-life use case, we showcase the potential reduction in the number of bits transferred owing to the transferred accumulated knowledge.
1009.0026
Dimitrios Panagopoulos
Dimitrios Panagopoulos
A secret sharing scheme using groups
4 pages
null
null
null
cs.CR math.GR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper a secret sharing scheme based on the word problem in groups is introduced. The security of the scheme and possible variations are discussed in section 2. The article concludes with the suggestion of two categories of platform groups for the implementation of the scheme.
[ { "created": "Tue, 31 Aug 2010 20:38:01 GMT", "version": "v1" } ]
2010-09-02
[ [ "Panagopoulos", "Dimitrios", "" ] ]
In this paper a secret sharing scheme based on the word problem in groups is introduced. The security of the scheme and possible variations are discussed in section 2. The article concludes with the suggestion of two categories of platform groups for the implementation of the scheme.
2209.09019
Dongxu Li
Dongxu Li, Junnan Li, Hung Le, Guangsen Wang, Silvio Savarese, Steven C.H. Hoi
LAVIS: A Library for Language-Vision Intelligence
Preprint of LAVIS technical report
null
null
null
cs.CV cs.CL cs.LG
http://creativecommons.org/licenses/by/4.0/
We introduce LAVIS, an open-source deep learning library for LAnguage-VISion research and applications. LAVIS aims to serve as a one-stop comprehensive library that makes recent advancements in the language-vision field accessible to researchers and practitioners, and that fosters future research and development. It features a unified interface for easy access to state-of-the-art image-language and video-language models and common datasets. LAVIS supports training, evaluation and benchmarking on a rich variety of tasks, including multimodal classification, retrieval, captioning, visual question answering, dialogue and pre-training. At the same time, the library is highly extensible and configurable, facilitating future development and customization. In this technical report, we describe the design principles, key components and functionalities of the library, and also present benchmarking results across common language-vision tasks. The library is available at: https://github.com/salesforce/LAVIS.
[ { "created": "Thu, 15 Sep 2022 18:04:10 GMT", "version": "v1" } ]
2022-09-20
[ [ "Li", "Dongxu", "" ], [ "Li", "Junnan", "" ], [ "Le", "Hung", "" ], [ "Wang", "Guangsen", "" ], [ "Savarese", "Silvio", "" ], [ "Hoi", "Steven C. H.", "" ] ]
We introduce LAVIS, an open-source deep learning library for LAnguage-VISion research and applications. LAVIS aims to serve as a one-stop comprehensive library that makes recent advancements in the language-vision field accessible to researchers and practitioners, and that fosters future research and development. It features a unified interface for easy access to state-of-the-art image-language and video-language models and common datasets. LAVIS supports training, evaluation and benchmarking on a rich variety of tasks, including multimodal classification, retrieval, captioning, visual question answering, dialogue and pre-training. At the same time, the library is highly extensible and configurable, facilitating future development and customization. In this technical report, we describe the design principles, key components and functionalities of the library, and also present benchmarking results across common language-vision tasks. The library is available at: https://github.com/salesforce/LAVIS.
2405.00958
Xingyu Li
Xingyu Li, Fei Tao, Wei Ye, Aydin Nassehi, John W. Sutherland
Generative manufacturing systems using diffusion models and ChatGPT
null
null
null
null
cs.LG cs.AI cs.HC cs.SY eess.SY
http://creativecommons.org/licenses/by-nc-sa/4.0/
In this study, we introduce Generative Manufacturing Systems (GMS) as a novel approach to effectively manage and coordinate autonomous manufacturing assets, thereby enhancing their responsiveness and flexibility to address a wide array of production objectives and human preferences. Deviating from traditional explicit modeling, GMS employs generative AI, including diffusion models and ChatGPT, for implicit learning from envisioned futures, marking a shift from model-optimum to training-sampling decision-making. Through the integration of generative AI, GMS enables complex decision-making through interactive dialogue with humans, allowing manufacturing assets to generate multiple high-quality global decisions that can be iteratively refined based on human feedback. Empirical findings showcase GMS's substantial improvement in system resilience and responsiveness to uncertainties, with decision times reduced from seconds to milliseconds. The study underscores the inherent creativity and diversity in the generated solutions, facilitating human-centric decision-making through seamless and continuous human-machine interactions.
[ { "created": "Thu, 2 May 2024 02:50:58 GMT", "version": "v1" } ]
2024-05-03
[ [ "Li", "Xingyu", "" ], [ "Tao", "Fei", "" ], [ "Ye", "Wei", "" ], [ "Nassehi", "Aydin", "" ], [ "Sutherland", "John W.", "" ] ]
In this study, we introduce Generative Manufacturing Systems (GMS) as a novel approach to effectively manage and coordinate autonomous manufacturing assets, thereby enhancing their responsiveness and flexibility to address a wide array of production objectives and human preferences. Deviating from traditional explicit modeling, GMS employs generative AI, including diffusion models and ChatGPT, for implicit learning from envisioned futures, marking a shift from model-optimum to training-sampling decision-making. Through the integration of generative AI, GMS enables complex decision-making through interactive dialogue with humans, allowing manufacturing assets to generate multiple high-quality global decisions that can be iteratively refined based on human feedback. Empirical findings showcase GMS's substantial improvement in system resilience and responsiveness to uncertainties, with decision times reduced from seconds to milliseconds. The study underscores the inherent creativity and diversity in the generated solutions, facilitating human-centric decision-making through seamless and continuous human-machine interactions.
2004.05461
Keigo Nakamura
Keigo Nakamura and Yoshiro Suzuki
Deep learning-based topological optimization for representing a user-specified design area
12 pages, 16 figures
null
null
null
cs.CE cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Presently, topology optimization requires multiple iterations to create an optimized structure for given conditions. Among the conditions for topology optimization, the design area is one of the most important for structural design. In this study, we propose a new deep learning model to generate an optimized structure for a given design domain and other boundary conditions without iteration. For this purpose, we used open-source topology optimization MATLAB code to generate pairs of design conditions and optimized structures. The resolution of the optimized structure is 32 * 32 pixels, and the design conditions are the design area, volume fraction, distribution of external forces, and load value. Our deep learning model is primarily composed of a convolutional neural network (CNN)-based encoder and decoder, trained with datasets generated with the MATLAB code. In the encoder, we use batch normalization (BN) to increase the stability of the CNN model. In the decoder, we use SPADE (spatially adaptive denormalization) to reinforce the design area information. Compared with a CNN model that does not use BN and SPADE, the proposed model achieved smaller mean absolute error (MAE), mean compliance error, and volume error with respect to the optimized topology structures generated by the MATLAB code, and represented the design area more precisely. The proposed method generates near-optimal structures reflecting the design area in less computational time than the open-source topology optimization MATLAB code.
[ { "created": "Sat, 11 Apr 2020 18:54:07 GMT", "version": "v1" }, { "created": "Sun, 19 Apr 2020 13:44:20 GMT", "version": "v2" } ]
2020-04-21
[ [ "Nakamura", "Keigo", "" ], [ "Suzuki", "Yoshiro", "" ] ]
Presently, topology optimization requires multiple iterations to create an optimized structure for given conditions. Among the conditions for topology optimization, the design area is one of the most important for structural design. In this study, we propose a new deep learning model to generate an optimized structure for a given design domain and other boundary conditions without iteration. For this purpose, we used open-source topology optimization MATLAB code to generate pairs of design conditions and optimized structures. The resolution of the optimized structure is 32 * 32 pixels, and the design conditions are the design area, volume fraction, distribution of external forces, and load value. Our deep learning model is primarily composed of a convolutional neural network (CNN)-based encoder and decoder, trained with datasets generated with the MATLAB code. In the encoder, we use batch normalization (BN) to increase the stability of the CNN model. In the decoder, we use SPADE (spatially adaptive denormalization) to reinforce the design area information. Compared with a CNN model that does not use BN and SPADE, the proposed model achieved smaller mean absolute error (MAE), mean compliance error, and volume error with respect to the optimized topology structures generated by the MATLAB code, and represented the design area more precisely. The proposed method generates near-optimal structures reflecting the design area in less computational time than the open-source topology optimization MATLAB code.
2301.01382
Kazuhiro Sasabuchi
Kazuhiro Sasabuchi, Daichi Saito, Atsushi Kanehira, Naoki Wake, Jun Takamatsu, Katsushi Ikeuchi
Task-sequencing Simulator: Integrated Machine Learning to Execution Simulation for Robot Manipulation
7 pages, 6 figures
null
null
null
cs.RO
http://creativecommons.org/licenses/by/4.0/
A task-sequencing simulator for robotic manipulation that integrates simulation-for-learning and simulation-for-execution is introduced. Unlike existing machine-learning simulation, where a non-decomposed simulation is used to simulate a training scenario, the task-sequencing simulator runs a composed simulation using building blocks. This way, the simulation-for-learning is structured similarly to a multi-step simulation-for-execution. To compose both learning and execution scenarios, a unified trainable-and-composable description of blocks, called a concept model, is proposed and used. Using the simulator design and concept models, a reusable simulator for learning different tasks and a common ground for learning-to-execution and simulation-to-real transfer are achieved and demonstrated.
[ { "created": "Tue, 3 Jan 2023 22:34:59 GMT", "version": "v1" } ]
2023-01-05
[ [ "Sasabuchi", "Kazuhiro", "" ], [ "Saito", "Daichi", "" ], [ "Kanehira", "Atsushi", "" ], [ "Wake", "Naoki", "" ], [ "Takamatsu", "Jun", "" ], [ "Ikeuchi", "Katsushi", "" ] ]
A task-sequencing simulator for robotic manipulation that integrates simulation-for-learning and simulation-for-execution is introduced. Unlike existing machine-learning simulation, where a non-decomposed simulation is used to simulate a training scenario, the task-sequencing simulator runs a composed simulation using building blocks. This way, the simulation-for-learning is structured similarly to a multi-step simulation-for-execution. To compose both learning and execution scenarios, a unified trainable-and-composable description of blocks, called a concept model, is proposed and used. Using the simulator design and concept models, a reusable simulator for learning different tasks and a common ground for learning-to-execution and simulation-to-real transfer are achieved and demonstrated.
1207.3944
Alejandro Frery
Alejandro C. Frery and Julio Jacobo-Berlles and Juliana Gambini and Marta Mejail
Polarimetric SAR Image Segmentation with B-Splines and a New Statistical Model
null
Multidimensional Systems and Signal Processing, vol. 21, 319-342, 2010
10.1007/s11045-010-0113-4
null
cs.CV stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present an approach for polarimetric Synthetic Aperture Radar (SAR) image region boundary detection based on the use of B-Spline active contours and a new model for polarimetric SAR data: the GHP distribution. In order to detect the boundary of a region, initial B-Spline curves are specified, either automatically or manually, and the proposed algorithm uses a deformable contours technique to find the boundary. In doing this, the parameters of the polarimetric GHP model for the data are estimated, in order to find the transition points between the region being segmented and the surrounding area. This is a local algorithm since it works only on the region to be segmented. Results of its performance are presented.
[ { "created": "Tue, 17 Jul 2012 11:09:37 GMT", "version": "v1" } ]
2012-07-18
[ [ "Frery", "Alejandro C.", "" ], [ "Jacobo-Berlles", "Julio", "" ], [ "Gambini", "Juliana", "" ], [ "Mejail", "Marta", "" ] ]
We present an approach for polarimetric Synthetic Aperture Radar (SAR) image region boundary detection based on the use of B-Spline active contours and a new model for polarimetric SAR data: the GHP distribution. In order to detect the boundary of a region, initial B-Spline curves are specified, either automatically or manually, and the proposed algorithm uses a deformable contours technique to find the boundary. In doing this, the parameters of the polarimetric GHP model for the data are estimated, in order to find the transition points between the region being segmented and the surrounding area. This is a local algorithm since it works only on the region to be segmented. Results of its performance are presented.
2205.09717
Shibal Ibrahim
Shibal Ibrahim and Hussein Hazimeh and Rahul Mazumder
Flexible Modeling and Multitask Learning using Differentiable Tree Ensembles
Accepted at SIGKDD'2022
null
null
null
cs.LG stat.ML
http://creativecommons.org/licenses/by/4.0/
Decision tree ensembles are widely used and competitive learning models. Despite their success, popular toolkits for learning tree ensembles have limited modeling capabilities. For instance, these toolkits support a limited number of loss functions and are restricted to single-task learning. We propose a flexible framework for learning tree ensembles, which goes beyond existing toolkits to support arbitrary loss functions, missing responses, and multi-task learning. Our framework builds on differentiable (a.k.a. soft) tree ensembles, which can be trained using first-order methods. However, unlike classical trees, differentiable trees are difficult to scale. We therefore propose a novel tensor-based formulation of differentiable trees that allows for efficient vectorization on GPUs. We perform experiments on a collection of 28 real open-source and proprietary datasets, which demonstrate that our framework can lead to 100x more compact and 23% more expressive tree ensembles than those by popular toolkits.
[ { "created": "Thu, 19 May 2022 17:30:49 GMT", "version": "v1" } ]
2022-05-20
[ [ "Ibrahim", "Shibal", "" ], [ "Hazimeh", "Hussein", "" ], [ "Mazumder", "Rahul", "" ] ]
Decision tree ensembles are widely used and competitive learning models. Despite their success, popular toolkits for learning tree ensembles have limited modeling capabilities. For instance, these toolkits support a limited number of loss functions and are restricted to single-task learning. We propose a flexible framework for learning tree ensembles, which goes beyond existing toolkits to support arbitrary loss functions, missing responses, and multi-task learning. Our framework builds on differentiable (a.k.a. soft) tree ensembles, which can be trained using first-order methods. However, unlike classical trees, differentiable trees are difficult to scale. We therefore propose a novel tensor-based formulation of differentiable trees that allows for efficient vectorization on GPUs. We perform experiments on a collection of 28 real open-source and proprietary datasets, which demonstrate that our framework can lead to 100x more compact and 23% more expressive tree ensembles than those by popular toolkits.
2011.00630
Peter Schrammel
Peter Schrammel
How Testable is Business Software?
null
null
null
null
cs.SE cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Most businesses rely on a significant stack of software to perform their daily operations. This software is business-critical as defects in this software have major impacts on revenue and customer satisfaction. The primary means for verification of this software is testing. We conducted an extensive analysis of Java software packages to evaluate their unit-testability. The results show that code in software repositories is typically split into portions of very trivial code, non-trivial code that is unit-testable, and code that cannot be unit-tested easily. This brings up interesting considerations regarding the use of test coverage metrics and design for testability, which is crucial for testing efficiency and effectiveness. Lack of unit-testability is an obstacle to applying tools that perform automated verification and test generation. These tools cannot make up for poor testability of the code and have a hard time in succeeding or are not even applicable without first improving the design of the software system.
[ { "created": "Sun, 1 Nov 2020 21:29:27 GMT", "version": "v1" } ]
2020-11-03
[ [ "Schrammel", "Peter", "" ] ]
Most businesses rely on a significant stack of software to perform their daily operations. This software is business-critical as defects in this software have major impacts on revenue and customer satisfaction. The primary means for verification of this software is testing. We conducted an extensive analysis of Java software packages to evaluate their unit-testability. The results show that code in software repositories is typically split into portions of very trivial code, non-trivial code that is unit-testable, and code that cannot be unit-tested easily. This brings up interesting considerations regarding the use of test coverage metrics and design for testability, which is crucial for testing efficiency and effectiveness. Lack of unit-testability is an obstacle to applying tools that perform automated verification and test generation. These tools cannot make up for poor testability of the code and have a hard time in succeeding or are not even applicable without first improving the design of the software system.
1812.02848
Anthony Palladino
Anthony Palladino and Christopher J. Thissen
Cyber Anomaly Detection Using Graph-node Role-dynamics
null
Proceedings of DYnamic and Novel Advances in Machine Learning and Intelligent Cyber Security Workshop (DYNAMICS'18). ACM, New York, NY, USA. (2019)
10.1145/3306195.3306198
null
cs.CR cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Intrusion detection systems (IDSs) generate valuable knowledge about network security, but an abundance of false alarms and a lack of methods to capture the interdependence among alerts hamper their utility for network defense. Here, we explore a graph-based approach for fusing alerts generated by multiple IDSs (e.g., Snort, OSSEC, and Bro). Our approach generates a weighted graph of alert fields (not network topology) that makes explicit the connections between multiple alerts, IDS systems, and other cyber artifacts. We use this multi-modal graph to identify anomalous changes in the alert patterns of a network. To detect the anomalies, we apply the role-dynamics approach, which has successfully identified anomalies in social media, email, and IP communication graphs. In the cyber domain, each node (alert field) in the fused IDS alert graph is assigned a probability distribution across a small set of roles based on that node's features. A cyber attack should trigger IDS alerts and cause changes in the node features, but rather than track every feature for every alert-field node individually, roles provide a succinct, integrated summary of those feature changes. We measure changes in each node's probabilistic role assignment over time, and identify anomalies as deviations from expected roles. We test our approach using simulations including three weeks of normal background traffic, as well as cyber attacks that occur near the end of the simulations. This paper presents a novel approach to multi-modal data fusion and a novel application of role dynamics within the cyber-security domain. Our results show a drastic decrease in the false-positive rate when considering our anomaly indicator instead of the IDS alerts themselves, thereby reducing alarm fatigue and providing a promising avenue for threat intelligence in network defense.
[ { "created": "Thu, 6 Dec 2018 23:05:00 GMT", "version": "v1" }, { "created": "Wed, 16 Jan 2019 14:33:48 GMT", "version": "v2" } ]
2019-01-17
[ [ "Palladino", "Anthony", "" ], [ "Thissen", "Christopher J.", "" ] ]
Intrusion detection systems (IDSs) generate valuable knowledge about network security, but an abundance of false alarms and a lack of methods to capture the interdependence among alerts hamper their utility for network defense. Here, we explore a graph-based approach for fusing alerts generated by multiple IDSs (e.g., Snort, OSSEC, and Bro). Our approach generates a weighted graph of alert fields (not network topology) that makes explicit the connections between multiple alerts, IDS systems, and other cyber artifacts. We use this multi-modal graph to identify anomalous changes in the alert patterns of a network. To detect the anomalies, we apply the role-dynamics approach, which has successfully identified anomalies in social media, email, and IP communication graphs. In the cyber domain, each node (alert field) in the fused IDS alert graph is assigned a probability distribution across a small set of roles based on that node's features. A cyber attack should trigger IDS alerts and cause changes in the node features, but rather than track every feature for every alert-field node individually, roles provide a succinct, integrated summary of those feature changes. We measure changes in each node's probabilistic role assignment over time, and identify anomalies as deviations from expected roles. We test our approach using simulations including three weeks of normal background traffic, as well as cyber attacks that occur near the end of the simulations. This paper presents a novel approach to multi-modal data fusion and a novel application of role dynamics within the cyber-security domain. Our results show a drastic decrease in the false-positive rate when considering our anomaly indicator instead of the IDS alerts themselves, thereby reducing alarm fatigue and providing a promising avenue for threat intelligence in network defense.
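The role-drift idea in the record above can be illustrated with a small self-contained sketch (not the authors' implementation: the node names, role labels, and the 0.2 threshold are invented for illustration). Each alert-field node carries a probability distribution over roles, and a snapshot's anomaly score is the mean total-variation distance of node role distributions from a learned baseline.

```python
# Toy sketch of role-dynamics anomaly scoring (illustrative only).

def total_variation(p, q):
    """Total-variation distance between two discrete distributions."""
    return 0.5 * sum(abs(p[k] - q.get(k, 0.0)) for k in p)

def anomaly_scores(baseline, snapshots):
    """baseline: {node: {role: prob}}; snapshots: list of {node: {role: prob}}.
    Returns one score per snapshot: mean role-distribution drift over nodes."""
    scores = []
    for snap in snapshots:
        drifts = [total_variation(baseline[n], snap.get(n, {})) for n in baseline]
        scores.append(sum(drifts) / len(drifts))
    return scores

# Hypothetical alert-field nodes with two roles each.
baseline = {"alert:ssh": {"quiet": 0.9, "chatty": 0.1},
            "alert:http": {"quiet": 0.7, "chatty": 0.3}}
normal = {"alert:ssh": {"quiet": 0.88, "chatty": 0.12},
          "alert:http": {"quiet": 0.72, "chatty": 0.28}}
attack = {"alert:ssh": {"quiet": 0.2, "chatty": 0.8},
          "alert:http": {"quiet": 0.1, "chatty": 0.9}}

scores = anomaly_scores(baseline, [normal, attack])
flags = [s > 0.2 for s in scores]  # illustrative threshold
```

A symmetric divergence such as Jensen-Shannon could replace total variation without changing the structure of the loop.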
1903.11437
Franck Burlot
Franck Burlot and Fran\c{c}ois Yvon
Using Monolingual Data in Neural Machine Translation: a Systematic Study
Published in the Proceedings of the Third Conference on Machine Translation (Research Papers), 2018
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Neural Machine Translation (MT) has radically changed the way systems are developed. A major difference with the previous generation (Phrase-Based MT) is the way monolingual target data, which often abounds, is used in these two paradigms. While Phrase-Based MT can seamlessly integrate very large language models trained on billions of sentences, the best option for Neural MT developers seems to be the generation of artificial parallel data through \textsl{back-translation} - a technique that fails to fully take advantage of existing datasets. In this paper, we conduct a systematic study of back-translation, comparing alternative uses of monolingual data, as well as multiple data generation procedures. Our findings confirm that back-translation is very effective and give new explanations as to why this is the case. We also introduce new data simulation techniques that are almost as effective, yet much cheaper to implement.
[ { "created": "Wed, 27 Mar 2019 14:11:18 GMT", "version": "v1" } ]
2019-03-28
[ [ "Burlot", "Franck", "" ], [ "Yvon", "François", "" ] ]
Neural Machine Translation (MT) has radically changed the way systems are developed. A major difference with the previous generation (Phrase-Based MT) is the way monolingual target data, which often abounds, is used in these two paradigms. While Phrase-Based MT can seamlessly integrate very large language models trained on billions of sentences, the best option for Neural MT developers seems to be the generation of artificial parallel data through \textsl{back-translation} - a technique that fails to fully take advantage of existing datasets. In this paper, we conduct a systematic study of back-translation, comparing alternative uses of monolingual data, as well as multiple data generation procedures. Our findings confirm that back-translation is very effective and give new explanations as to why this is the case. We also introduce new data simulation techniques that are almost as effective, yet much cheaper to implement.
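The back-translation data flow studied in the record above can be sketched in a few lines (a toy stand-in: `backward_model` is a hypothetical placeholder for a trained target-to-source MT system, not anything from the paper).

```python
# Toy sketch of back-translation: translate monolingual target sentences
# back into the source language, then add the synthetic pairs to the
# parallel training data.

def backward_model(tgt_sentence):
    """Hypothetical stand-in for a trained target->source MT model."""
    return " ".join(reversed(tgt_sentence.split()))

parallel = [("chat noir", "black cat")]            # existing parallel data
monolingual_target = ["white dog", "small bird"]   # abundant target-side data

synthetic = [(backward_model(t), t) for t in monolingual_target]
training_data = parallel + synthetic
```

The target side of every synthetic pair is genuine text, which is why the technique tends to improve target-language fluency even when the synthetic source side is noisy.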
2309.08206
Gongyang Li
Gongyang Li and Zhen Bai and Zhi Liu and Xinpeng Zhang and Haibin Ling
Salient Object Detection in Optical Remote Sensing Images Driven by Transformer
13 pages, 6 figures, Accepted by IEEE Transactions on Image Processing 2023
null
10.1109/TIP.2023.3314285
null
cs.CV
http://creativecommons.org/licenses/by-nc-nd/4.0/
Existing methods for Salient Object Detection in Optical Remote Sensing Images (ORSI-SOD) mainly adopt Convolutional Neural Networks (CNNs) as the backbone, such as VGG and ResNet. Since CNNs can only extract features within certain receptive fields, most ORSI-SOD methods generally follow the local-to-contextual paradigm. In this paper, we propose a novel Global Extraction Local Exploration Network (GeleNet) for ORSI-SOD following the global-to-local paradigm. Specifically, GeleNet first adopts a transformer backbone to generate four-level feature embeddings with global long-range dependencies. Then, GeleNet employs a Direction-aware Shuffle Weighted Spatial Attention Module (D-SWSAM) and its simplified version (SWSAM) to enhance local interactions, and a Knowledge Transfer Module (KTM) to further enhance cross-level contextual interactions. D-SWSAM comprehensively perceives the orientation information in the lowest-level features through directional convolutions to adapt to various orientations of salient objects in ORSIs, and effectively enhances the details of salient objects with an improved attention mechanism. SWSAM discards the direction-aware part of D-SWSAM to focus on localizing salient objects in the highest-level features. KTM models the contextual correlation knowledge of two middle-level features of different scales based on the self-attention mechanism, and transfers the knowledge to the raw features to generate more discriminative features. Finally, a saliency predictor is used to generate the saliency map based on the outputs of the above three modules. Extensive experiments on three public datasets demonstrate that the proposed GeleNet outperforms relevant state-of-the-art methods. The code and results of our method are available at https://github.com/MathLee/GeleNet.
[ { "created": "Fri, 15 Sep 2023 07:14:43 GMT", "version": "v1" } ]
2023-09-18
[ [ "Li", "Gongyang", "" ], [ "Bai", "Zhen", "" ], [ "Liu", "Zhi", "" ], [ "Zhang", "Xinpeng", "" ], [ "Ling", "Haibin", "" ] ]
Existing methods for Salient Object Detection in Optical Remote Sensing Images (ORSI-SOD) mainly adopt Convolutional Neural Networks (CNNs) as the backbone, such as VGG and ResNet. Since CNNs can only extract features within certain receptive fields, most ORSI-SOD methods generally follow the local-to-contextual paradigm. In this paper, we propose a novel Global Extraction Local Exploration Network (GeleNet) for ORSI-SOD following the global-to-local paradigm. Specifically, GeleNet first adopts a transformer backbone to generate four-level feature embeddings with global long-range dependencies. Then, GeleNet employs a Direction-aware Shuffle Weighted Spatial Attention Module (D-SWSAM) and its simplified version (SWSAM) to enhance local interactions, and a Knowledge Transfer Module (KTM) to further enhance cross-level contextual interactions. D-SWSAM comprehensively perceives the orientation information in the lowest-level features through directional convolutions to adapt to various orientations of salient objects in ORSIs, and effectively enhances the details of salient objects with an improved attention mechanism. SWSAM discards the direction-aware part of D-SWSAM to focus on localizing salient objects in the highest-level features. KTM models the contextual correlation knowledge of two middle-level features of different scales based on the self-attention mechanism, and transfers the knowledge to the raw features to generate more discriminative features. Finally, a saliency predictor is used to generate the saliency map based on the outputs of the above three modules. Extensive experiments on three public datasets demonstrate that the proposed GeleNet outperforms relevant state-of-the-art methods. The code and results of our method are available at https://github.com/MathLee/GeleNet.
1304.2382
Alexander Yeh
Alexander Yeh
Predicting the Likely Behaviors of Continuous Nonlinear Systems in Equilibrium
Appears in Proceedings of the Fourth Conference on Uncertainty in Artificial Intelligence (UAI1988)
null
null
UAI-P-1988-PG-374-381
cs.SY cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper introduces a method for predicting the likely behaviors of continuous nonlinear systems in equilibrium in which the input values can vary. The method uses a parameterized equation model and a lower bound on the input joint density to bound the likelihood that some behavior will occur, such as a state variable being inside a given numeric range. Using a bound on the density instead of the density itself is desirable because often the input density's parameters and shape are not exactly known. The new method is called SAB after its basic operations: split the input value space into smaller regions, and then bound those regions' possible behaviors and the probability of being in them. SAB finds rough bounds at first, and then refines them as more time is given. In contrast to other researchers' methods, SAB (1) finds all the possible system behaviors, and indicates how likely they are, (2) does not approximate the distribution of possible outcomes without some measure of the error magnitude, (3) does not use discretized variable values, which limit the events one can find probability bounds for, (4) can handle density bounds, and (5) can handle such criteria as two state variables both being inside a numeric range.
[ { "created": "Wed, 27 Mar 2013 19:45:44 GMT", "version": "v1" } ]
2013-04-10
[ [ "Yeh", "Alexander", "" ] ]
This paper introduces a method for predicting the likely behaviors of continuous nonlinear systems in equilibrium in which the input values can vary. The method uses a parameterized equation model and a lower bound on the input joint density to bound the likelihood that some behavior will occur, such as a state variable being inside a given numeric range. Using a bound on the density instead of the density itself is desirable because often the input density's parameters and shape are not exactly known. The new method is called SAB after its basic operations: split the input value space into smaller regions, and then bound those regions' possible behaviors and the probability of being in them. SAB finds rough bounds at first, and then refines them as more time is given. In contrast to other researchers' methods, SAB (1) finds all the possible system behaviors, and indicates how likely they are, (2) does not approximate the distribution of possible outcomes without some measure of the error magnitude, (3) does not use discretized variable values, which limit the events one can find probability bounds for, (4) can handle density bounds, and (5) can handle such criteria as two state variables both being inside a numeric range.
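The split-and-bound loop that SAB is named for can be illustrated in one dimension (a minimal sketch under simplifying assumptions — a monotone-increasing function and a constant lower bound on the density — not the SAB algorithm itself).

```python
def prob_lower_bound(f, lo, hi, density_lb, target, n=8):
    """Split [lo, hi] into n regions. For a monotone-increasing f, a region
    [a, b] certainly yields f(x) in `target` when [f(a), f(b)] fits inside it;
    summing region width * density lower bound gives a probability lower bound."""
    t_lo, t_hi = target
    width = (hi - lo) / n
    bound = 0.0
    for i in range(n):
        a, b = lo + i * width, lo + (i + 1) * width
        if t_lo <= f(a) and f(b) <= t_hi:
            bound += (b - a) * density_lb
    return bound

# x uniform on [0, 2], so 0.5 is a valid constant lower bound on the density.
f = lambda x: x * x
lb_tight = prob_lower_bound(f, 0.0, 2.0, 0.5, (0.0, 1.0), n=8)   # -> 0.5
lb_8 = prob_lower_bound(f, 0.0, 2.0, 0.5, (0.0, 0.5), n=8)       # -> 0.25
lb_64 = prob_lower_bound(f, 0.0, 2.0, 0.5, (0.0, 0.5), n=64)     # -> 0.34375
```

For the target f(x) <= 1 the bound is already tight at 0.5 (the true probability). For f(x) <= 0.5, whose true probability is about 0.354, refining the split from n=8 to n=64 raises the bound from 0.25 to 0.34375, mirroring SAB's rough-then-refined behavior.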
2406.04236
Samyadeep Basu
Samyadeep Basu, Martin Grayson, Cecily Morrison, Besmira Nushi, Soheil Feizi, Daniela Massiceti
Understanding Information Storage and Transfer in Multi-modal Large Language Models
20 pages
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Understanding the mechanisms of information storage and transfer in Transformer-based models is important for driving model understanding progress. Recent work has studied these mechanisms for Large Language Models (LLMs), revealing insights on how information is stored in a model's parameters and how information flows to and from these parameters in response to specific prompts. However, these studies have not yet been extended to Multi-modal Large Language Models (MLLMs). Given their expanding capabilities and real-world use, we start by studying one aspect of these models -- how MLLMs process information in a factual visual question answering task. We use a constraint-based formulation which views a visual question as having a set of visual or textual constraints that the model's generated answer must satisfy to be correct (e.g. What movie directed by the director in this photo has won a Golden Globe?). Under this setting, we contribute i) a method that extends causal information tracing from pure language to the multi-modal setting, and ii) VQA-Constraints, a test-bed of 9.7K visual questions annotated with constraints. We use these tools to study two open-source MLLMs, LLaVa and multi-modal Phi-2. Our key findings show that these MLLMs rely on MLP and self-attention blocks in much earlier layers for information storage, compared to LLMs whose mid-layer MLPs are more important. We also show that a consistent small subset of visual tokens output by the vision encoder are responsible for transferring information from the image to these causal blocks. We validate these mechanisms by introducing MultEdit, a model-editing algorithm that can correct errors and insert new long-tailed information into MLLMs by targeting these causal blocks.
[ { "created": "Thu, 6 Jun 2024 16:35:36 GMT", "version": "v1" } ]
2024-06-07
[ [ "Basu", "Samyadeep", "" ], [ "Grayson", "Martin", "" ], [ "Morrison", "Cecily", "" ], [ "Nushi", "Besmira", "" ], [ "Feizi", "Soheil", "" ], [ "Massiceti", "Daniela", "" ] ]
Understanding the mechanisms of information storage and transfer in Transformer-based models is important for driving model understanding progress. Recent work has studied these mechanisms for Large Language Models (LLMs), revealing insights on how information is stored in a model's parameters and how information flows to and from these parameters in response to specific prompts. However, these studies have not yet been extended to Multi-modal Large Language Models (MLLMs). Given their expanding capabilities and real-world use, we start by studying one aspect of these models -- how MLLMs process information in a factual visual question answering task. We use a constraint-based formulation which views a visual question as having a set of visual or textual constraints that the model's generated answer must satisfy to be correct (e.g. What movie directed by the director in this photo has won a Golden Globe?). Under this setting, we contribute i) a method that extends causal information tracing from pure language to the multi-modal setting, and ii) VQA-Constraints, a test-bed of 9.7K visual questions annotated with constraints. We use these tools to study two open-source MLLMs, LLaVa and multi-modal Phi-2. Our key findings show that these MLLMs rely on MLP and self-attention blocks in much earlier layers for information storage, compared to LLMs whose mid-layer MLPs are more important. We also show that a consistent small subset of visual tokens output by the vision encoder are responsible for transferring information from the image to these causal blocks. We validate these mechanisms by introducing MultEdit, a model-editing algorithm that can correct errors and insert new long-tailed information into MLLMs by targeting these causal blocks.
cs/0607104
Hao Chen
Hao Chen
Reducing the Computation of Linear Complexities of Periodic Sequences over $GF(p^m)$
10 pages. To appear in IEEE Transactions on Information Theory
null
null
null
cs.CR cs.IT math.IT
null
The linear complexity of a periodic sequence over $GF(p^m)$ plays an important role in cryptography and communication [12]. In this correspondence, we prove a result which reduces the computation of the linear complexity and minimal connection polynomial of a period $un$ sequence over $GF(p^m)$ to the computation of the linear complexities and minimal connection polynomials of $u$ period $n$ sequences. The conditions $u|p^m-1$ and $\gcd(n,p^m-1)=1$ are required for the result to hold. Some applications of this reduction in fast algorithms to determine the linear complexities and minimal connection polynomials of sequences over $GF(p^m)$ are presented.
[ { "created": "Mon, 24 Jul 2006 02:54:21 GMT", "version": "v1" }, { "created": "Mon, 18 Sep 2006 03:15:08 GMT", "version": "v2" } ]
2016-08-31
[ [ "Chen", "Hao", "" ] ]
The linear complexity of a periodic sequence over $GF(p^m)$ plays an important role in cryptography and communication [12]. In this correspondence, we prove a result which reduces the computation of the linear complexity and minimal connection polynomial of a period $un$ sequence over $GF(p^m)$ to the computation of the linear complexities and minimal connection polynomials of $u$ period $n$ sequences. The conditions $u|p^m-1$ and $\gcd(n,p^m-1)=1$ are required for the result to hold. Some applications of this reduction in fast algorithms to determine the linear complexities and minimal connection polynomials of sequences over $GF(p^m)$ are presented.
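The base quantity this correspondence reduces — the linear complexity of a sequence — is classically computed with the Berlekamp-Massey algorithm. As background (the paper works over GF(p^m) and with periodic sequences, so this is context, not the paper's reduction), a minimal GF(2) version:

```python
def linear_complexity_gf2(s):
    """Berlekamp-Massey over GF(2): length of the shortest LFSR generating s."""
    n = len(s)
    c = [0] * n
    b = [0] * n
    c[0] = b[0] = 1
    L, m = 0, -1
    for i in range(n):
        # Discrepancy: does the current LFSR predict bit i?
        d = s[i]
        for j in range(1, L + 1):
            d ^= c[j] & s[i - j]
        if d:
            t = c[:]                     # save c(x) before updating
            for j in range(n - i + m):   # c(x) += b(x) * x^(i-m)
                c[i - m + j] ^= b[j]
            if 2 * L <= i:
                L, m, b = i + 1 - L, i, t
    return L

# One period of the m-sequence of x^3 + x + 1 (recurrence s[i] = s[i-2] ^ s[i-3]).
lc = linear_complexity_gf2([1, 0, 0, 1, 0, 1, 1])  # -> 3
```

Extending this to GF(p^m) replaces the XORs with field arithmetic; the reduction in the paper then lets the algorithm run on u shorter period-n sequences instead of one period-un sequence.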
cs/0409014
Manoj Kumar
Sunder Lal and Manoj Kumar
A Digital Signature with Threshold Generation and Verification
10 Pages
null
null
null
cs.CR
null
This paper proposes a signature scheme where the signatures are generated by the cooperation of a number of people from a given group of senders and the signatures are verified by a certain number of people from the group of recipients. Shamir's threshold scheme and Schnorr's signature scheme are used to realize the proposed scheme.
[ { "created": "Wed, 8 Sep 2004 11:55:58 GMT", "version": "v1" }, { "created": "Fri, 17 Sep 2004 12:29:23 GMT", "version": "v2" }, { "created": "Tue, 2 Nov 2004 12:21:31 GMT", "version": "v3" } ]
2007-05-23
[ [ "Lal", "Sunder", "" ], [ "Kumar", "Manoj", "" ] ]
This paper proposes a signature scheme where the signatures are generated by the cooperation of a number of people from a given group of senders and the signatures are verified by a certain number of people from the group of recipients. Shamir's threshold scheme and Schnorr's signature scheme are used to realize the proposed scheme.
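The threshold building block named in the record — Shamir's scheme — can be sketched on its own (this illustrates only the secret-sharing primitive, not the paper's combined signature protocol; the prime modulus and the numbers are illustrative choices).

```python
import random

P = 2**61 - 1  # a Mersenne prime modulus (illustrative choice)

def make_shares(secret, k, n):
    """Shamir (k, n) sharing: random degree-(k-1) polynomial with f(0) = secret."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, j, P) for j, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret from any k shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        # pow(den, P - 2, P) is the modular inverse of den (Fermat's little theorem)
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = make_shares(123456789, k=3, n=5)
recovered = reconstruct(shares[:3])  # any 3 of the 5 shares suffice
```

In the paper's setting the shared value would be signing-key material, so that any qualifying subset of senders can jointly produce a Schnorr-style signature.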
1304.1502
Henri Farreny
Henri Farreny, Henri Prade
Positive and Negative Explanations of Uncertain Reasoning in the Framework of Possibility Theory
Appears in Proceedings of the Fifth Conference on Uncertainty in Artificial Intelligence (UAI1989)
null
null
UAI-P-1989-PG-95-101
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents an approach for developing the explanation capabilities of rule-based expert systems managing imprecise and uncertain knowledge. The treatment of uncertainty takes place in the framework of possibility theory where the available information concerning the value of a logical or numerical variable is represented by a possibility distribution which restricts its more or less possible values. We first discuss different kinds of queries asking for explanations before focusing on the following two types: i) how a particular possibility distribution is obtained (emphasizing the main reasons only); ii) why, in a computed possibility distribution, a particular value has received a possibility degree which is so high, so low or so contrary to the expectation. The approach is based on the exploitation of equations in max-min algebra. This formalism includes the limit case of certain and precise information.
[ { "created": "Wed, 27 Mar 2013 19:37:53 GMT", "version": "v1" } ]
2013-04-08
[ [ "Farreny", "Henri", "" ], [ "Prade", "Henri", "" ] ]
This paper presents an approach for developing the explanation capabilities of rule-based expert systems managing imprecise and uncertain knowledge. The treatment of uncertainty takes place in the framework of possibility theory where the available information concerning the value of a logical or numerical variable is represented by a possibility distribution which restricts its more or less possible values. We first discuss different kinds of queries asking for explanations before focusing on the following two types: i) how a particular possibility distribution is obtained (emphasizing the main reasons only); ii) why, in a computed possibility distribution, a particular value has received a possibility degree which is so high, so low or so contrary to the expectation. The approach is based on the exploitation of equations in max-min algebra. This formalism includes the limit case of certain and precise information.
2105.13650
Huayang Li
Wei Bi, Huayang Li, Jiacheng Huang
Data Augmentation for Text Generation Without Any Augmented Data
Accepted into the main conference of ACL 2021
null
null
null
cs.CL cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Data augmentation is an effective way to improve the performance of many neural text generation models. However, current data augmentation methods need to define or choose proper data mapping functions that map the original samples into the augmented samples. In this work, we derive an objective to formulate the problem of data augmentation on text generation tasks without any use of augmented data constructed by specific mapping functions. Our proposed objective can be efficiently optimized and applied to popular loss functions on text generation tasks with a convergence rate guarantee. Experiments on five datasets of two text generation tasks show that our approach can approximate or even surpass popular data augmentation methods.
[ { "created": "Fri, 28 May 2021 07:56:51 GMT", "version": "v1" } ]
2021-05-31
[ [ "Bi", "Wei", "" ], [ "Li", "Huayang", "" ], [ "Huang", "Jiacheng", "" ] ]
Data augmentation is an effective way to improve the performance of many neural text generation models. However, current data augmentation methods need to define or choose proper data mapping functions that map the original samples into the augmented samples. In this work, we derive an objective to formulate the problem of data augmentation on text generation tasks without any use of augmented data constructed by specific mapping functions. Our proposed objective can be efficiently optimized and applied to popular loss functions on text generation tasks with a convergence rate guarantee. Experiments on five datasets of two text generation tasks show that our approach can approximate or even surpass popular data augmentation methods.
1910.05384
Yinsong Wang
Yinsong Wang and Shahin Shahrampour
ORCCA: Optimal Randomized Canonical Correlation Analysis
null
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The random features approach has been widely used for kernel approximation in large-scale machine learning. A number of recent studies have explored data-dependent sampling of features, modifying the stochastic oracle from which random features are sampled. While proposed techniques in this realm improve the approximation, their suitability is often verified on a single learning task. In this paper, we propose a task-specific scoring rule for selecting random features, which can be employed for different applications with some adjustments. We restrict our attention to Canonical Correlation Analysis (CCA), and we provide a novel, principled guide for finding the score function maximizing the canonical correlations. We prove that this method, called ORCCA, can outperform (in expectation) the corresponding Kernel CCA with a default kernel. Numerical experiments verify that ORCCA is significantly superior to other approximation techniques in the CCA task.
[ { "created": "Fri, 11 Oct 2019 19:50:00 GMT", "version": "v1" }, { "created": "Sat, 8 Feb 2020 05:24:58 GMT", "version": "v2" }, { "created": "Mon, 1 Nov 2021 19:57:31 GMT", "version": "v3" } ]
2021-11-03
[ [ "Wang", "Yinsong", "" ], [ "Shahrampour", "Shahin", "" ] ]
The random features approach has been widely used for kernel approximation in large-scale machine learning. A number of recent studies have explored data-dependent sampling of features, modifying the stochastic oracle from which random features are sampled. While proposed techniques in this realm improve the approximation, their suitability is often verified on a single learning task. In this paper, we propose a task-specific scoring rule for selecting random features, which can be employed for different applications with some adjustments. We restrict our attention to Canonical Correlation Analysis (CCA), and we provide a novel, principled guide for finding the score function maximizing the canonical correlations. We prove that this method, called ORCCA, can outperform (in expectation) the corresponding Kernel CCA with a default kernel. Numerical experiments verify that ORCCA is significantly superior to other approximation techniques in the CCA task.
1110.6298
Jason McEwen
J. D. McEwen, Y. Wiaux
A novel sampling theorem on the sphere
13 pages, 5 figures, accepted for publication by IEEE Trans. Sig. Proc.; We make our Spin Spherical Harmonic Transform (SSHT) package available publicly from http://www.ssht.org.uk
IEEE Trans. Signal Process. 59 (2011) 5876-5887
10.1109/TSP.2011.2166394
null
cs.IT astro-ph.IM math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We develop a novel sampling theorem on the sphere and corresponding fast algorithms by associating the sphere with the torus through a periodic extension. The fundamental property of any sampling theorem is the number of samples required to represent a band-limited signal. To represent exactly a signal on the sphere band-limited at L, all sampling theorems on the sphere require O(L^2) samples. However, our sampling theorem requires less than half the number of samples of other equiangular sampling theorems on the sphere and an asymptotically identical, but smaller, number of samples than the Gauss-Legendre sampling theorem. The complexity of our algorithms scales as O(L^3); however, the continual use of fast Fourier transforms reduces the constant prefactor associated with the asymptotic scaling considerably, resulting in algorithms that are fast. Furthermore, we do not require any precomputation and our algorithms apply to both scalar and spin functions on the sphere without any change in computational complexity or computation time. We make our implementation of these algorithms available publicly and perform numerical experiments demonstrating their speed and accuracy up to very high band-limits. Finally, we highlight the advantages of our sampling theorem in the context of potential applications, notably in the field of compressive sampling.
[ { "created": "Fri, 28 Oct 2011 11:23:29 GMT", "version": "v1" } ]
2012-01-18
[ [ "McEwen", "J. D.", "" ], [ "Wiaux", "Y.", "" ] ]
We develop a novel sampling theorem on the sphere and corresponding fast algorithms by associating the sphere with the torus through a periodic extension. The fundamental property of any sampling theorem is the number of samples required to represent a band-limited signal. To represent exactly a signal on the sphere band-limited at L, all sampling theorems on the sphere require O(L^2) samples. However, our sampling theorem requires less than half the number of samples of other equiangular sampling theorems on the sphere and an asymptotically identical, but smaller, number of samples than the Gauss-Legendre sampling theorem. The complexity of our algorithms scales as O(L^3); however, the continual use of fast Fourier transforms reduces the constant prefactor associated with the asymptotic scaling considerably, resulting in algorithms that are fast. Furthermore, we do not require any precomputation and our algorithms apply to both scalar and spin functions on the sphere without any change in computational complexity or computation time. We make our implementation of these algorithms available publicly and perform numerical experiments demonstrating their speed and accuracy up to very high band-limits. Finally, we highlight the advantages of our sampling theorem in the context of potential applications, notably in the field of compressive sampling.
2005.00163
Sajad Sotudeh
Sajad Sotudeh and Nazli Goharian and Ross W. Filice
Attend to Medical Ontologies: Content Selection for Clinical Abstractive Summarization
Accepted to ACL 2020
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Sequence-to-sequence (seq2seq) network is a well-established model for text summarization task. It can learn to produce readable content; however, it falls short in effectively identifying key regions of the source. In this paper, we approach the content selection problem for clinical abstractive summarization by augmenting salient ontological terms into the summarizer. Our experiments on two publicly available clinical data sets (107,372 reports of MIMIC-CXR, and 3,366 reports of OpenI) show that our model statistically significantly boosts state-of-the-art results in terms of Rouge metrics (with improvements: 2.9% RG-1, 2.5% RG-2, 1.9% RG-L), in the healthcare domain where any range of improvement impacts patients' welfare.
[ { "created": "Fri, 1 May 2020 01:12:49 GMT", "version": "v1" } ]
2020-05-04
[ [ "Sotudeh", "Sajad", "" ], [ "Goharian", "Nazli", "" ], [ "Filice", "Ross W.", "" ] ]
Sequence-to-sequence (seq2seq) network is a well-established model for text summarization task. It can learn to produce readable content; however, it falls short in effectively identifying key regions of the source. In this paper, we approach the content selection problem for clinical abstractive summarization by augmenting salient ontological terms into the summarizer. Our experiments on two publicly available clinical data sets (107,372 reports of MIMIC-CXR, and 3,366 reports of OpenI) show that our model statistically significantly boosts state-of-the-art results in terms of Rouge metrics (with improvements: 2.9% RG-1, 2.5% RG-2, 1.9% RG-L), in the healthcare domain where any range of improvement impacts patients' welfare.
2211.14437
Simon Bertrand
Simon Bertrand, Nadia Tawbi, Jos\'ee Desharnais
Unsupervised User-Based Insider Threat Detection Using Bayesian Gaussian Mixture Models
16 pages
null
null
null
cs.CR cs.LG
http://creativecommons.org/licenses/by/4.0/
Insider threats are a growing concern for organizations due to the amount of damage that their members can inflict by combining their privileged access and domain knowledge. Nonetheless, the detection of such threats is challenging, precisely because of the ability of the authorized personnel to easily conduct malicious actions and because of the immense size and diversity of audit data produced by organizations, in which the few malicious footprints are hidden. In this paper, we propose an unsupervised insider threat detection system based on audit data using Bayesian Gaussian Mixture Models. The proposed approach leverages a user-based model to optimize the modeling of specific behaviors and an automatic feature extraction system based on Word2Vec for ease of use in a real-life scenario. The solution distinguishes itself by not requiring data balancing or training only on normal instances, and by the little domain knowledge required to implement it. Still, results indicate that the proposed method competes with state-of-the-art approaches, presenting a good recall of 88%, an accuracy and true negative rate of 93%, and a false positive rate of 6.9%. For our experiments, we used the benchmark dataset CERT version 4.2.
[ { "created": "Wed, 23 Nov 2022 13:45:19 GMT", "version": "v1" } ]
2022-11-29
[ [ "Bertrand", "Simon", "" ], [ "Tawbi", "Nadia", "" ], [ "Desharnais", "Josée", "" ] ]
Insider threats are a growing concern for organizations due to the amount of damage that their members can inflict by combining their privileged access and domain knowledge. Nonetheless, the detection of such threats is challenging, precisely because of the ability of the authorized personnel to easily conduct malicious actions and because of the immense size and diversity of audit data produced by organizations, in which the few malicious footprints are hidden. In this paper, we propose an unsupervised insider threat detection system based on audit data using Bayesian Gaussian Mixture Models. The proposed approach leverages a user-based model to optimize the modeling of specific behaviors and an automatic feature extraction system based on Word2Vec for ease of use in a real-life scenario. The solution distinguishes itself by not requiring data balancing or training only on normal instances, and by the little domain knowledge required to implement it. Still, results indicate that the proposed method competes with state-of-the-art approaches, presenting a good recall of 88%, an accuracy and true negative rate of 93%, and a false positive rate of 6.9%. For our experiments, we used the benchmark dataset CERT version 4.2.
2210.16834
Jing Xu
Jing Xu, Xu Luo, Xinglin Pan, Wenjie Pei, Yanan Li, Zenglin Xu
Alleviating the Sample Selection Bias in Few-shot Learning by Removing Projection to the Centroid
Accepted at NeurIPS 2022
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Few-shot learning (FSL) targets the generalization of vision models to unseen tasks without sufficient annotations. Despite the emergence of a number of few-shot learning methods, the sample selection bias problem, i.e., the sensitivity to the limited amount of support data, has not been well understood. In this paper, we find that this problem usually occurs when the positions of support samples are in the vicinity of the task centroid -- the mean of all class centroids in the task. This motivates us to propose an extremely simple feature transformation to alleviate this problem, dubbed Task Centroid Projection Removing (TCPR). TCPR is applied directly to all image features in a given task, aiming at removing the dimension of features along the direction of the task centroid. While the exact task centroid cannot be accurately obtained from limited data, we estimate it using base features that are each similar to one of the support features. Our method effectively prevents features from being too close to the task centroid. Extensive experiments over ten datasets from different domains show that TCPR can reliably improve classification accuracy across various feature extractors, training algorithms and datasets. The code has been made available at https://github.com/KikimorMay/FSL-TCBR.
[ { "created": "Sun, 30 Oct 2022 13:03:13 GMT", "version": "v1" } ]
2022-11-01
[ [ "Xu", "Jing", "" ], [ "Luo", "Xu", "" ], [ "Pan", "Xinglin", "" ], [ "Pei", "Wenjie", "" ], [ "Li", "Yanan", "" ], [ "Xu", "Zenglin", "" ] ]
Few-shot learning (FSL) targets the generalization of vision models to unseen tasks without sufficient annotations. Despite the emergence of a number of few-shot learning methods, the sample selection bias problem, i.e., the sensitivity to the limited amount of support data, has not been well understood. In this paper, we find that this problem usually occurs when the positions of support samples are in the vicinity of the task centroid -- the mean of all class centroids in the task. This motivates us to propose an extremely simple feature transformation to alleviate this problem, dubbed Task Centroid Projection Removing (TCPR). TCPR is applied directly to all image features in a given task, aiming at removing the dimension of features along the direction of the task centroid. While the exact task centroid cannot be accurately obtained from limited data, we estimate it using base features that are each similar to one of the support features. Our method effectively prevents features from being too close to the task centroid. Extensive experiments over ten datasets from different domains show that TCPR can reliably improve classification accuracy across various feature extractors, training algorithms and datasets. The code has been made available at https://github.com/KikimorMay/FSL-TCBR.
1701.00723
Zhiyuan Zha
Zhiyuan Zha, Xinggan Zhang, Qiong Wang, Yechao Bai, Lan Tang
Image denoising using group sparsity residual and external nonlocal self-similarity prior
arXiv admin note: text overlap with arXiv:1609.03302
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Nonlocal image representation has been successfully used in many image-related inverse problems including denoising, deblurring and deblocking. However, a majority of reconstruction methods only exploit the nonlocal self-similarity (NSS) prior of the degraded observation image, making it very challenging to reconstruct the latent clean image. In this paper, we propose a novel model for image denoising via a group sparsity residual and an external NSS prior. To boost the performance of image denoising, the concept of the group sparsity residual is proposed, and thus the problem of image denoising is transformed into one that reduces the group sparsity residual. Due to the fact that the groups contain a large amount of NSS information of natural images, we obtain a good estimation of the group sparse coefficients of the original image by the external NSS prior based on Gaussian Mixture Model (GMM) learning, and the group sparse coefficients of the noisy image are used to approximate this estimation. Experimental results have demonstrated that the proposed method not only outperforms many state-of-the-art methods, but also delivers the best qualitative denoising results with finer details and less ringing artifacts.
[ { "created": "Tue, 3 Jan 2017 15:32:16 GMT", "version": "v1" } ]
2017-01-04
[ [ "Zha", "Zhiyuan", "" ], [ "Zhang", "Xinggan", "" ], [ "Wang", "Qiong", "" ], [ "Bai", "Yechao", "" ], [ "Tang", "Lan", "" ] ]
Nonlocal image representation has been successfully used in many image-related inverse problems including denoising, deblurring and deblocking. However, a majority of reconstruction methods only exploit the nonlocal self-similarity (NSS) prior of the degraded observation image, making it very challenging to reconstruct the latent clean image. In this paper, we propose a novel model for image denoising via a group sparsity residual and an external NSS prior. To boost the performance of image denoising, the concept of the group sparsity residual is proposed, and thus the problem of image denoising is transformed into one that reduces the group sparsity residual. Due to the fact that the groups contain a large amount of NSS information of natural images, we obtain a good estimation of the group sparse coefficients of the original image by the external NSS prior based on Gaussian Mixture Model (GMM) learning, and the group sparse coefficients of the noisy image are used to approximate this estimation. Experimental results have demonstrated that the proposed method not only outperforms many state-of-the-art methods, but also delivers the best qualitative denoising results with finer details and less ringing artifacts.
0712.2389
Guido Tack
Martin Mann and Guido Tack and Sebastian Will
Decomposition During Search for Propagation-Based Constraint Solvers
20 pages, 9 figures, 2 tables; longer, more detailed version
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We describe decomposition during search (DDS), an integration of And/Or tree search into propagation-based constraint solvers. The presented search algorithm dynamically decomposes sub-problems of a constraint satisfaction problem into independent partial problems, avoiding redundant work. The paper discusses how DDS interacts with key features that make propagation-based solvers successful: constraint propagation, especially for global constraints, and dynamic search heuristics. We have implemented DDS for the Gecode constraint programming library. Two applications, solution counting in graph coloring and protein structure prediction, exemplify the benefits of DDS in practice.
[ { "created": "Fri, 14 Dec 2007 18:08:26 GMT", "version": "v1" }, { "created": "Wed, 11 Jun 2008 13:00:11 GMT", "version": "v2" } ]
2008-06-11
[ [ "Mann", "Martin", "" ], [ "Tack", "Guido", "" ], [ "Will", "Sebastian", "" ] ]
We describe decomposition during search (DDS), an integration of And/Or tree search into propagation-based constraint solvers. The presented search algorithm dynamically decomposes sub-problems of a constraint satisfaction problem into independent partial problems, avoiding redundant work. The paper discusses how DDS interacts with key features that make propagation-based solvers successful: constraint propagation, especially for global constraints, and dynamic search heuristics. We have implemented DDS for the Gecode constraint programming library. Two applications, solution counting in graph coloring and protein structure prediction, exemplify the benefits of DDS in practice.
2404.08812
Camelia Daniela Brumar
Camelia D. Brumar, Sam Molnar, Gabriel Appleby, Kristi Potter, Remco Chang
A Typology of Decision-Making Tasks for Visualization
null
null
null
null
cs.HC
http://creativecommons.org/licenses/by/4.0/
Despite decision-making being a vital goal of data visualization, little work has been done to differentiate the decision-making tasks within our field. While visualization task taxonomies and typologies exist, they are often too granular for describing complex decision goals and decision-making processes, thus limiting their potential use in designing decision-support tools. In this paper, we contribute a typology of decision-making tasks that was iteratively refined from a list of design goals distilled from a literature review. Our typology is concise and consists of only three tasks: choose, activate, and create. Originally proposed by the scientific community, we extend and provide definitions for these tasks that are suitable for the visualization community. Our proposed typology offers two benefits. First, it facilitates the composition of decisions using these three tasks, allowing for flexible and clear descriptions across varying complexities and domains. Second, diagrams created using this typology encourage productive discourse between visualization designers and domain experts by abstracting the intricacies of data, thereby promoting clarity and rigorous analysis of decision-making processes. We motivate the use of our typology through four case studies and demonstrate the benefits of our approach through semi-structured interviews conducted with experienced members of the visualization community, comprising academic and industry experts, who have contributed to developing or publishing decision support systems for domain experts. Our interviewees composed diagrams using our typology to delineate the decision-making processes that drive their decision-support tools, demonstrating its descriptive capacity and effectiveness.
[ { "created": "Fri, 12 Apr 2024 21:05:16 GMT", "version": "v1" }, { "created": "Mon, 22 Apr 2024 16:03:33 GMT", "version": "v2" } ]
2024-04-23
[ [ "Brumar", "Camelia D.", "" ], [ "Molnar", "Sam", "" ], [ "Appleby", "Gabriel", "" ], [ "Potter", "Kristi", "" ], [ "Chang", "Remco", "" ] ]
Despite decision-making being a vital goal of data visualization, little work has been done to differentiate the decision-making tasks within our field. While visualization task taxonomies and typologies exist, they are often too granular for describing complex decision goals and decision-making processes, thus limiting their potential use in designing decision-support tools. In this paper, we contribute a typology of decision-making tasks that were iteratively refined from a list of design goals distilled from a literature review. Our typology is concise and consists of only three tasks: choose, activate, and create. Originally proposed by the scientific community, we extend and provide definitions for these tasks that are suitable for the visualization community. Our proposed typology offers two benefits. First, it facilitates the composition of decisions using these three tasks, allowing for flexible and clear descriptions across varying complexities and domains. Second, diagrams created using this typology encourage productive discourse between visualization designers and domain experts by abstracting the intricacies of data, thereby promoting clarity and rigorous analysis of decision-making processes. We motivate the use of our typology through four case studies and demonstrate the benefits of our approach through semi-structured interviews conducted with experienced members of the visualization community, comprising academic and industry experts, who have contributed to developing or publishing decision support systems for domain experts. Our interviewees composed diagrams using our typology to delineate the decision-making processes that drive their decision-support tools, demonstrating its descriptive capacity and effectiveness.
2303.12086
Donglin Wang
Donglin Wang, Oneza Saraci, Raja R.Sattiraju, Qiuheng Zhou, and Hans D. Schotten
Effect of Variable Physical Numerologies on Link-Level Performance of 5G NR V2X
6 pages, 5 figures, ICCC 2022
null
null
null
cs.NI eess.SP
http://creativecommons.org/licenses/by/4.0/
With technological and societal development, the 5th generation of wireless communication (5G) contributes significantly to different sectors such as industry and academia. Vehicle-to-Everything (V2X) communication technology has been one of the leading services for 5G and has been applied in vehicles. It is used to exchange status information with other traffic participants to increase traffic safety and efficiency. Cellular-V2X (C-V2X) is one of the emerging technologies to enable V2X communications. The first Long-Term Evolution (LTE) based C-V2X was released in the 3rd Generation Partnership Project (3GPP) standard. 3GPP is working towards the development of New Radio (NR) systems, called 5G NR V2X. A single numerology in LTE cannot satisfy most performance requirements because of the variety of deployment options and scenarios. For this reason, in order to meet these diverse requirements, the 5G NR Physical Layer (PHY) is designed to provide a highly flexible framework. Scalable Orthogonal Frequency-Division Multiplexing (OFDM) numerologies make this flexibility possible. The term numerology refers to the PHY waveform parametrization and allows different Subcarrier Spacings (SCSs), symbol durations, and slot durations. This paper implements Link-Level (LL) simulations of LTE C-V2X and 5G NR V2X communication, where the simulation results are used to compare the similarities and differences between LTE and 5G NR. We examine the effect of variable PHY numerologies of 5G NR on the LL performance of V2X. The simulation results show that the performance of 5G NR is improved by using variable numerologies.
[ { "created": "Fri, 17 Mar 2023 10:14:32 GMT", "version": "v1" } ]
2023-03-23
[ [ "Wang", "Donglin", "" ], [ "Saraci", "Oneza", "" ], [ "Sattiraju", "Raja R.", "" ], [ "Zhou", "Qiuheng", "" ], [ "Schotten", "Hans D.", "" ] ]
With technological and societal development, the 5th generation of wireless communication (5G) contributes significantly to different sectors such as industry and academia. Vehicle-to-Everything (V2X) communication technology has been one of the leading services for 5G and has been applied in vehicles. It is used to exchange status information with other traffic participants to increase traffic safety and efficiency. Cellular-V2X (C-V2X) is one of the emerging technologies to enable V2X communications. The first Long-Term Evolution (LTE) based C-V2X was released in the 3rd Generation Partnership Project (3GPP) standard. 3GPP is working towards the development of New Radio (NR) systems, called 5G NR V2X. A single numerology in LTE cannot satisfy most performance requirements because of the variety of deployment options and scenarios. For this reason, in order to meet these diverse requirements, the 5G NR Physical Layer (PHY) is designed to provide a highly flexible framework. Scalable Orthogonal Frequency-Division Multiplexing (OFDM) numerologies make this flexibility possible. The term numerology refers to the PHY waveform parametrization and allows different Subcarrier Spacings (SCSs), symbol durations, and slot durations. This paper implements Link-Level (LL) simulations of LTE C-V2X and 5G NR V2X communication, where the simulation results are used to compare the similarities and differences between LTE and 5G NR. We examine the effect of variable PHY numerologies of 5G NR on the LL performance of V2X. The simulation results show that the performance of 5G NR is improved by using variable numerologies.
2202.03880
Michele Loi Dr.
Nicol\`o Cangiotti and Michele Loi
Group Fairness Is Not Derivable From Justice: a Mathematical Proof
20 pages, 2 figures
null
null
null
cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We argue that an imperfect criminal law procedure cannot be group-fair, if 'group fairness' involves ensuring the same chances of acquittal or conviction to all innocent defendants independently of their morally arbitrary features. We show mathematically that only a perfect procedure (involving no mistakes), a non-deterministic one, or a degenerate one (everyone or no one is convicted) can guarantee group fairness in the general case. Following a recent proposal, we adopt a definition of group fairness requiring that individuals who are equal in merit ought to have the same statistical chances of obtaining advantages and disadvantages, in a way that is statistically independent of any of their features that do not count as merit. We explain by mathematical argument that the only imperfect procedures offering an a-priori guarantee of fairness in relation to all non-merit traits are lotteries or degenerate ones (i.e., everyone or no one is convicted). To provide a more intuitive point of view, we exploit an adjustment of the well-known ROC space in order to represent all possible procedures in our model by a schematic diagram. The argument seems to be equally valid for all human procedures, provided they are imperfect. This clearly includes algorithmic decision-making, including decisions based on statistical predictions, since in practice all statistical models are error prone.
[ { "created": "Tue, 8 Feb 2022 14:10:47 GMT", "version": "v1" } ]
2022-02-09
[ [ "Cangiotti", "Nicolò", "" ], [ "Loi", "Michele", "" ] ]
We argue that an imperfect criminal law procedure cannot be group-fair, if 'group fairness' involves ensuring the same chances of acquittal or conviction to all innocent defendants independently of their morally arbitrary features. We show mathematically that only a perfect procedure (involving no mistakes), a non-deterministic one, or a degenerate one (everyone or no one is convicted) can guarantee group fairness in the general case. Following a recent proposal, we adopt a definition of group fairness requiring that individuals who are equal in merit ought to have the same statistical chances of obtaining advantages and disadvantages, in a way that is statistically independent of any of their features that do not count as merit. We explain by mathematical argument that the only imperfect procedures offering an a-priori guarantee of fairness in relation to all non-merit traits are lotteries or degenerate ones (i.e., everyone or no one is convicted). To provide a more intuitive point of view, we exploit an adjustment of the well-known ROC space in order to represent all possible procedures in our model by a schematic diagram. The argument seems to be equally valid for all human procedures, provided they are imperfect. This clearly includes algorithmic decision-making, including decisions based on statistical predictions, since in practice all statistical models are error prone.
1811.00327
Vasileios Argyriou
Vasileios Argyriou
Asymmetric Bilateral Phase Correlation for Optical Flow Estimation in the Frequency Domain
SITIS 2018
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We address the problem of motion estimation in images operating in the frequency domain. A method is presented which extends phase correlation to handle multiple motions present in an area. Our scheme is based on a novel Bilateral-Phase Correlation (BLPC) technique that incorporates the concept and principles of Bilateral Filters retaining the motion boundaries by taking into account the difference both in value and distance in a manner very similar to Gaussian convolution. The optical flow is obtained by applying the proposed method at certain locations selected based on the present motion differences and then performing non-uniform interpolation in a multi-scale iterative framework. Experiments with several well-known datasets with and without ground-truth show that our scheme outperforms recently proposed state-of-the-art phase correlation based optical flow methods.
[ { "created": "Thu, 1 Nov 2018 11:51:29 GMT", "version": "v1" } ]
2018-11-02
[ [ "Argyriou", "Vasileios", "" ] ]
We address the problem of motion estimation in images operating in the frequency domain. A method is presented which extends phase correlation to handle multiple motions present in an area. Our scheme is based on a novel Bilateral-Phase Correlation (BLPC) technique that incorporates the concept and principles of Bilateral Filters retaining the motion boundaries by taking into account the difference both in value and distance in a manner very similar to Gaussian convolution. The optical flow is obtained by applying the proposed method at certain locations selected based on the present motion differences and then performing non-uniform interpolation in a multi-scale iterative framework. Experiments with several well-known datasets with and without ground-truth show that our scheme outperforms recently proposed state-of-the-art phase correlation based optical flow methods.
2307.12813
Chi Xie
Chi Xie, Zhao Zhang, Yixuan Wu, Feng Zhu, Rui Zhao, Shuang Liang
Described Object Detection: Liberating Object Detection with Flexible Expressions
Accepted by NeurIPS 2023
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-nd/4.0/
Detecting objects based on language information is a popular task that includes Open-Vocabulary object Detection (OVD) and Referring Expression Comprehension (REC). In this paper, we advance them to a more practical setting called Described Object Detection (DOD) by expanding category names to flexible language expressions for OVD and overcoming the limitation of REC only grounding the pre-existing object. We establish the research foundation for DOD by constructing a Description Detection Dataset ($D^3$). This dataset features flexible language expressions, whether short category names or long descriptions, and annotating all described objects on all images without omission. By evaluating previous SOTA methods on $D^3$, we find some troublemakers that fail current REC, OVD, and bi-functional methods. REC methods struggle with confidence scores, rejecting negative instances, and multi-target scenarios, while OVD methods face constraints with long and complex descriptions. Recent bi-functional methods also do not work well on DOD due to their separated training procedures and inference strategies for REC and OVD tasks. Building upon the aforementioned findings, we propose a baseline that largely improves REC methods by reconstructing the training data and introducing a binary classification sub-task, outperforming existing methods. Data and code are available at https://github.com/shikras/d-cube and related works are tracked in https://github.com/Charles-Xie/awesome-described-object-detection.
[ { "created": "Mon, 24 Jul 2023 14:06:54 GMT", "version": "v1" }, { "created": "Wed, 11 Oct 2023 14:35:26 GMT", "version": "v2" } ]
2023-10-12
[ [ "Xie", "Chi", "" ], [ "Zhang", "Zhao", "" ], [ "Wu", "Yixuan", "" ], [ "Zhu", "Feng", "" ], [ "Zhao", "Rui", "" ], [ "Liang", "Shuang", "" ] ]
Detecting objects based on language information is a popular task that includes Open-Vocabulary object Detection (OVD) and Referring Expression Comprehension (REC). In this paper, we advance them to a more practical setting called Described Object Detection (DOD) by expanding category names to flexible language expressions for OVD and overcoming the limitation of REC only grounding the pre-existing object. We establish the research foundation for DOD by constructing a Description Detection Dataset ($D^3$). This dataset features flexible language expressions, whether short category names or long descriptions, and annotating all described objects on all images without omission. By evaluating previous SOTA methods on $D^3$, we find some troublemakers that fail current REC, OVD, and bi-functional methods. REC methods struggle with confidence scores, rejecting negative instances, and multi-target scenarios, while OVD methods face constraints with long and complex descriptions. Recent bi-functional methods also do not work well on DOD due to their separated training procedures and inference strategies for REC and OVD tasks. Building upon the aforementioned findings, we propose a baseline that largely improves REC methods by reconstructing the training data and introducing a binary classification sub-task, outperforming existing methods. Data and code are available at https://github.com/shikras/d-cube and related works are tracked in https://github.com/Charles-Xie/awesome-described-object-detection.
2303.15972
Michael Hagenow
Michael Hagenow, Emmanuel Senft, Nitzan Orr, Robert Radwin, Michael Gleicher, Bilge Mutlu, Dylan P. Losey, Michael Zinn
Coordinated Multi-Robot Shared Autonomy Based on Scheduling and Demonstrations
IEEE Robotics and Automation Letters (RA-L)
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Shared autonomy methods, where a human operator and a robot arm work together, have enabled robots to complete a range of complex and highly variable tasks. Existing work primarily focuses on one human sharing autonomy with a single robot. By contrast, in this paper we present an approach for multi-robot shared autonomy that enables one operator to provide real-time corrections across two coordinated robots completing the same task in parallel. Sharing autonomy with multiple robots presents fundamental challenges. The human can only correct one robot at a time, and without coordination, the human may be left idle for long periods of time. Accordingly, we develop an approach that aligns the robot's learned motions to best utilize the human's expertise. Our key idea is to leverage Learning from Demonstration (LfD) and time warping to schedule the motions of the robots based on when they may require assistance. Our method uses variability in operator demonstrations to identify the types of corrections an operator might apply during shared autonomy, leverages flexibility in how quickly the task was performed in demonstrations to aid in scheduling, and iteratively estimates the likelihood of when corrections may be needed to ensure that only one robot is completing an action requiring assistance. Through a preliminary study, we show that our method can decrease the scheduled time spent sanding by iteratively estimating the times when each robot could need assistance and generating an optimized schedule that allows the operator to provide corrections to each robot during these times.
[ { "created": "Tue, 28 Mar 2023 13:44:35 GMT", "version": "v1" }, { "created": "Wed, 25 Oct 2023 18:17:13 GMT", "version": "v2" } ]
2023-10-27
[ [ "Hagenow", "Michael", "" ], [ "Senft", "Emmanuel", "" ], [ "Orr", "Nitzan", "" ], [ "Radwin", "Robert", "" ], [ "Gleicher", "Michael", "" ], [ "Mutlu", "Bilge", "" ], [ "Losey", "Dylan P.", "" ], [ ...
Shared autonomy methods, where a human operator and a robot arm work together, have enabled robots to complete a range of complex and highly variable tasks. Existing work primarily focuses on one human sharing autonomy with a single robot. By contrast, in this paper we present an approach for multi-robot shared autonomy that enables one operator to provide real-time corrections across two coordinated robots completing the same task in parallel. Sharing autonomy with multiple robots presents fundamental challenges. The human can only correct one robot at a time, and without coordination, the human may be left idle for long periods of time. Accordingly, we develop an approach that aligns the robot's learned motions to best utilize the human's expertise. Our key idea is to leverage Learning from Demonstration (LfD) and time warping to schedule the motions of the robots based on when they may require assistance. Our method uses variability in operator demonstrations to identify the types of corrections an operator might apply during shared autonomy, leverages flexibility in how quickly the task was performed in demonstrations to aid in scheduling, and iteratively estimates the likelihood of when corrections may be needed to ensure that only one robot is completing an action requiring assistance. Through a preliminary study, we show that our method can decrease the scheduled time spent sanding by iteratively estimating the times when each robot could need assistance and generating an optimized schedule that allows the operator to provide corrections to each robot during these times.
2309.16519
Vincent Mallet
Vincent Mallet, Souhaib Attaiki and Maks Ovsjanikov
AtomSurf : Surface Representation for Learning on Protein Structures
10 pages
null
null
null
cs.LG q-bio.BM
http://creativecommons.org/licenses/by-sa/4.0/
An essential aspect of learning from protein structures is the choice of their representation as a geometric object (be it a grid, graph, or surface), which conditions the associated learning method. The performance of a given approach will then depend on both the representation and its corresponding learning model. In this paper, we investigate representing proteins as $\textit{surfaces embedded in 3D}$ and evaluate this representation within an established benchmark: atom3d. Our first finding is that despite promising results, state-of-the-art surface-based learning approaches alone are not competitive with other modalities on this benchmark. Building on this, we introduce a novel synergistic approach that incorporates graph and surface-based approaches within a single learnable architecture. We show that using this combination, which inherits the strengths of the two representations, we obtain state-of-the-art results across $\textit{all tested tasks}$, on the atom3d benchmark, as well as on binding pocket classification. Our code and data can be found online: https://github.com/Vincentx15/atom2D.
[ { "created": "Thu, 28 Sep 2023 15:25:17 GMT", "version": "v1" }, { "created": "Mon, 5 Feb 2024 11:01:48 GMT", "version": "v2" } ]
2024-02-06
[ [ "Mallet", "Vincent", "" ], [ "Attaiki", "Souhaib", "" ], [ "Ovsjanikov", "Maks", "" ] ]
An essential aspect of learning from protein structures is the choice of their representation as a geometric object (be it a grid, graph, or surface), which conditions the associated learning method. The performance of a given approach will then depend on both the representation and its corresponding learning model. In this paper, we investigate representing proteins as $\textit{surfaces embedded in 3D}$ and evaluate this representation within an established benchmark: atom3d. Our first finding is that despite promising results, state-of-the-art surface-based learning approaches alone are not competitive with other modalities on this benchmark. Building on this, we introduce a novel synergistic approach that incorporates graph and surface-based approaches within a single learnable architecture. We show that using this combination, which inherits the strengths of the two representations, we obtain state-of-the-art results across $\textit{all tested tasks}$, on the atom3d benchmark, as well as on binding pocket classification. Our code and data can be found online: https://github.com/Vincentx15/atom2D.
1301.2137
Xiaowang Zhang
Dai Xu and Xiaowang Zhang and Zuoquan Lin
A Forgetting-based Approach to Merging Knowledge Bases
5 pages
2010 International Conference on Progress in Informatics and Computing, IEEE Computer Society, vol 1, pp. 321-325
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents a novel approach to merging multiple knowledge bases based on variable forgetting, a useful tool for resolving contradictions by filtering out given variables. The paper first builds a relationship between belief merging and variable forgetting by using dilation, and variable forgetting is then applied to capture the belief merging operation. Finally, some new merging operators are developed by modifying candidate variables to remedy the shortcomings of traditional merging operators. In contrast to the model selection of traditional merging operators, the variable selection in these new operators can, as an alternative approach, provide intuitive information about an atomic variable across the whole set of knowledge bases.
[ { "created": "Thu, 10 Jan 2013 14:41:52 GMT", "version": "v1" } ]
2013-01-11
[ [ "Xu", "Dai", "" ], [ "Zhang", "Xiaowang", "" ], [ "Lin", "Zuoquan", "" ] ]
This paper presents a novel approach to merging multiple knowledge bases based on variable forgetting, a useful tool for resolving contradictions by filtering out given variables. The paper first builds a relationship between belief merging and variable forgetting by using dilation, and variable forgetting is then applied to capture the belief merging operation. Finally, some new merging operators are developed by modifying candidate variables to remedy the shortcomings of traditional merging operators. In contrast to the model selection of traditional merging operators, the variable selection in these new operators can, as an alternative approach, provide intuitive information about an atomic variable across the whole set of knowledge bases.
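Variable forgetting, the core tool in the abstract above, has a simple propositional definition: forget(f, p) = f[p := true] OR f[p := false]. A toy Python sketch, representing formulas as predicates over assignment dicts; the helper names are hypothetical and this is not the paper's merging operators:

```python
from itertools import product

# Propositional variable forgetting: forget(f, p) holds under an
# assignment iff f holds with p set to True or with p set to False.
# Illustrative toy code, not the paper's implementation.

def forget(f, p):
    """Forget variable p in formula f (a predicate over a dict)."""
    def g(assign):
        hi = dict(assign, **{p: True})
        lo = dict(assign, **{p: False})
        return f(hi) or f(lo)
    return g

def models(f, variables):
    """All satisfying assignments of f over the given variables."""
    return [dict(zip(variables, vals))
            for vals in product([False, True], repeat=len(variables))
            if f(dict(zip(variables, vals)))]
```

For example, forgetting y in (x AND y) yields a formula equivalent to x, while forgetting y in (x OR y) yields a tautology over x.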
2012.06822
Markus Borg
Markus Borg, Raja Ben Abdessalem, Shiva Nejati, Francois-Xavier Jegeden, Donghwan Shin
Digital Twins Are Not Monozygotic -- Cross-Replicating ADAS Testing in Two Industry-Grade Automotive Simulators
To appear in the Proc. of the IEEE International Conference on Software Testing, Verification and Validation (ICST) 2021
null
null
null
cs.SE cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The increasing levels of software- and data-intensive driving automation call for an evolution of automotive software testing. As a recommended practice of the Verification and Validation (V&V) process of ISO/PAS 21448, a candidate standard for safety of the intended functionality for road vehicles, simulation-based testing has the potential to reduce both risks and costs. There is a growing body of research on devising test automation techniques using simulators for Advanced Driver-Assistance Systems (ADAS). However, how similar are the results if the same test scenarios are executed in different simulators? We conduct a replication study of applying a Search-Based Software Testing (SBST) solution to a real-world ADAS (PeVi, a pedestrian vision detection system) using two different commercial simulators, namely, TASS/Siemens PreScan and ESI Pro-SiVIC. Based on a minimalistic scene, we compare critical test scenarios generated using our SBST solution in these two simulators. We show that SBST can be used to effectively and efficiently generate critical test scenarios in both simulators, and the test results obtained from the two simulators can reveal several weaknesses of the ADAS under test. However, executing the same test scenarios in the two simulators leads to notable differences in the details of the test outputs, in particular, related to (1) safety violations revealed by tests, and (2) dynamics of cars and pedestrians. Based on our findings, we recommend future V&V plans to include multiple simulators to support robust simulation-based testing and to base test objectives on measures that are less dependent on the internals of the simulators.
[ { "created": "Sat, 12 Dec 2020 14:00:33 GMT", "version": "v1" }, { "created": "Thu, 28 Jan 2021 08:55:23 GMT", "version": "v2" } ]
2021-01-29
[ [ "Borg", "Markus", "" ], [ "Abdessalem", "Raja Ben", "" ], [ "Nejati", "Shiva", "" ], [ "Jegeden", "Francois-Xavier", "" ], [ "Shin", "Donghwan", "" ] ]
The increasing levels of software- and data-intensive driving automation call for an evolution of automotive software testing. As a recommended practice of the Verification and Validation (V&V) process of ISO/PAS 21448, a candidate standard for safety of the intended functionality for road vehicles, simulation-based testing has the potential to reduce both risks and costs. There is a growing body of research on devising test automation techniques using simulators for Advanced Driver-Assistance Systems (ADAS). However, how similar are the results if the same test scenarios are executed in different simulators? We conduct a replication study of applying a Search-Based Software Testing (SBST) solution to a real-world ADAS (PeVi, a pedestrian vision detection system) using two different commercial simulators, namely, TASS/Siemens PreScan and ESI Pro-SiVIC. Based on a minimalistic scene, we compare critical test scenarios generated using our SBST solution in these two simulators. We show that SBST can be used to effectively and efficiently generate critical test scenarios in both simulators, and the test results obtained from the two simulators can reveal several weaknesses of the ADAS under test. However, executing the same test scenarios in the two simulators leads to notable differences in the details of the test outputs, in particular, related to (1) safety violations revealed by tests, and (2) dynamics of cars and pedestrians. Based on our findings, we recommend future V&V plans to include multiple simulators to support robust simulation-based testing and to base test objectives on measures that are less dependent on the internals of the simulators.
1908.01050
Maciej Wielgosz
Marcin Radzio, Maciej Wielgosz, Matej Mertik
Falls Prediction in elderly people using Gated Recurrent Units
short concept paper
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Falls prevention, especially in older people, is an increasingly important topic in times of aging societies. In this work, we present Gated Recurrent Unit-based neural network models designed for predicting falls (syncope). The cardiovascular system signals used in the study come from the Gravitational Physiology, Aging and Medicine Research Unit, Institute of Physiology, Medical University of Graz. We used two of the collected signals: heart rate and mean blood pressure. By using a bidirectional GRU model, it was possible to predict the syncope occurrence approximately ten minutes before the manual marker.
[ { "created": "Fri, 2 Aug 2019 20:52:04 GMT", "version": "v1" } ]
2019-08-06
[ [ "Radzio", "Marcin", "" ], [ "Wielgosz", "Maciej", "" ], [ "Mertik", "Matej", "" ] ]
Falls prevention, especially in older people, is an increasingly important topic in times of aging societies. In this work, we present Gated Recurrent Unit-based neural network models designed for predicting falls (syncope). The cardiovascular system signals used in the study come from the Gravitational Physiology, Aging and Medicine Research Unit, Institute of Physiology, Medical University of Graz. We used two of the collected signals: heart rate and mean blood pressure. By using a bidirectional GRU model, it was possible to predict the syncope occurrence approximately ten minutes before the manual marker.
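The GRU cells behind the model above update a hidden state through gated interpolation between the previous state and a candidate state. A minimal NumPy sketch of a single cell, using one common gate convention; this is an illustration, not the authors' implementation:

```python
import numpy as np

# One GRU cell step: update gate z, reset gate r, candidate state
# h_tilde, then gated interpolation. A single common convention is
# shown; real frameworks differ in bias placement and gate order.

def gru_cell(x, h, params):
    """Advance hidden state h by one step given input x."""
    Wz, Uz, bz, Wr, Ur, br, Wh, Uh, bh = params
    sig = lambda a: 1.0 / (1.0 + np.exp(-a))
    z = sig(Wz @ x + Uz @ h + bz)                   # update gate
    r = sig(Wr @ x + Ur @ h + br)                   # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h) + bh)   # candidate state
    return (1.0 - z) * h + z * h_tilde              # new hidden state
```

With all-zero weights, both gates sit at 0.5 and the candidate is 0, so each step simply halves the hidden state; a bidirectional model would run such a cell over the signal in both directions and concatenate the states.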
2309.16219
Shilin Shan
Shilin Shan, Quang-Cuong Pham
Sensorless Estimation of Contact Using Deep-Learning for Human-Robot Interaction
Final version accepted to ICRA 2024, 7 pages
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Physical human-robot interaction has been an area of interest for decades. Collaborative tasks, such as joint compliance, demand high-quality joint torque sensing. While external torque sensors are reliable, they come with the drawbacks of being expensive and vulnerable to impacts. To address these issues, studies have been conducted to estimate external torques using only internal signals, such as joint states and current measurements. However, insufficient attention has been given to friction hysteresis approximation, which is crucial for tasks involving extensive dynamic to static state transitions. In this paper, we propose a deep-learning-based method that leverages a novel long-term memory scheme to achieve dynamics identification, accurately approximating the static hysteresis. We also introduce modifications to the well-known Residual Learning architecture, retaining high accuracy while reducing inference time. The robustness of the proposed method is illustrated through a joint compliance and task compliance experiment.
[ { "created": "Thu, 28 Sep 2023 07:51:32 GMT", "version": "v1" }, { "created": "Wed, 6 Mar 2024 04:23:47 GMT", "version": "v2" } ]
2024-03-07
[ [ "Shan", "Shilin", "" ], [ "Pham", "Quang-Cuong", "" ] ]
Physical human-robot interaction has been an area of interest for decades. Collaborative tasks, such as joint compliance, demand high-quality joint torque sensing. While external torque sensors are reliable, they come with the drawbacks of being expensive and vulnerable to impacts. To address these issues, studies have been conducted to estimate external torques using only internal signals, such as joint states and current measurements. However, insufficient attention has been given to friction hysteresis approximation, which is crucial for tasks involving extensive dynamic to static state transitions. In this paper, we propose a deep-learning-based method that leverages a novel long-term memory scheme to achieve dynamics identification, accurately approximating the static hysteresis. We also introduce modifications to the well-known Residual Learning architecture, retaining high accuracy while reducing inference time. The robustness of the proposed method is illustrated through a joint compliance and task compliance experiment.
1204.4166
Yandong Guo
Yuan Qi and Yandong Guo
Message passing with relaxed moment matching
null
null
null
null
cs.LG stat.CO stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Bayesian learning is often hampered by large computational expense. As a powerful generalization of popular belief propagation, expectation propagation (EP) efficiently approximates the exact Bayesian computation. Nevertheless, EP can be sensitive to outliers and suffer from divergence for difficult cases. To address this issue, we propose a new approximate inference approach, relaxed expectation propagation (REP). It relaxes the moment matching requirement of expectation propagation by adding a relaxation factor into the KL minimization. We penalize this relaxation with an $l_1$ penalty. As a result, when two distributions in the relaxed KL divergence are similar, the relaxation factor will be penalized to zero and, therefore, we obtain the original moment matching; in the presence of outliers, these two distributions are significantly different and the relaxation factor will be used to reduce the contribution of the outlier. Based on this penalized KL minimization, REP is robust to outliers and can greatly improve the posterior approximation quality over EP. To examine the effectiveness of REP, we apply it to Gaussian process classification, a task known to be well suited to EP. Our classification results on synthetic and UCI benchmark datasets demonstrate significant improvement of REP over EP and Power EP, in terms of algorithmic stability, estimation accuracy and predictive performance.
[ { "created": "Wed, 18 Apr 2012 19:21:59 GMT", "version": "v1" }, { "created": "Wed, 29 Aug 2012 16:02:21 GMT", "version": "v2" } ]
2012-08-30
[ [ "Qi", "Yuan", "" ], [ "Guo", "Yandong", "" ] ]
Bayesian learning is often hampered by large computational expense. As a powerful generalization of popular belief propagation, expectation propagation (EP) efficiently approximates the exact Bayesian computation. Nevertheless, EP can be sensitive to outliers and suffer from divergence for difficult cases. To address this issue, we propose a new approximate inference approach, relaxed expectation propagation (REP). It relaxes the moment matching requirement of expectation propagation by adding a relaxation factor into the KL minimization. We penalize this relaxation with an $l_1$ penalty. As a result, when two distributions in the relaxed KL divergence are similar, the relaxation factor will be penalized to zero and, therefore, we obtain the original moment matching; in the presence of outliers, these two distributions are significantly different and the relaxation factor will be used to reduce the contribution of the outlier. Based on this penalized KL minimization, REP is robust to outliers and can greatly improve the posterior approximation quality over EP. To examine the effectiveness of REP, we apply it to Gaussian process classification, a task known to be well suited to EP. Our classification results on synthetic and UCI benchmark datasets demonstrate significant improvement of REP over EP and Power EP, in terms of algorithmic stability, estimation accuracy and predictive performance.
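The moment matching that EP performs exactly, and that REP relaxes, can be illustrated in one dimension: project a weighted (tilted) sample onto a Gaussian by matching its first two moments. A minimal NumPy sketch of that exact-matching step only; the paper's $l_1$-relaxed variant is not reproduced here:

```python
import numpy as np

# Gaussian projection by moment matching: given samples from a tilted
# distribution with importance weights, return the (mean, variance) of
# the Gaussian with the same first two moments. Illustrative sketch.

def match_moments(samples, weights):
    """Gaussian (mean, var) matching the weighted sample moments."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    x = np.asarray(samples, dtype=float)
    mean = np.sum(w * x)
    var = np.sum(w * (x - mean) ** 2)
    return mean, var
```

A single heavily weighted outlier drags both matched moments, which is exactly the sensitivity the relaxation factor in REP is designed to damp.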
1911.07223
Kiet Nguyen Van
Phu X. V. Nguyen, Tham T. T. Hong, Kiet Van Nguyen, Ngan Luu-Thuy Nguyen
Deep Learning versus Traditional Classifiers on Vietnamese Students' Feedback Corpus
In Proceeding of the 5th NAFOSTED Conference on Information and Computer Science (NICS 2018)
5th NAFOSTED Conference on Information and Computer Science (NICS 2018)
null
null
cs.CL cs.CY cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Students' feedback is an important source for collecting students' opinions to improve the quality of training activities. By applying sentiment analysis to student feedback data, we can determine the sentiment polarities that reveal problems in the institution, so that the necessary changes can be made to improve the quality of teaching and learning. This study applied machine learning and natural language processing techniques (Naive Bayes, Maximum Entropy, Long Short-Term Memory, Bi-Directional Long Short-Term Memory) to the Vietnamese Students' Feedback Corpus collected from a university. The final results were compared and evaluated to find the most effective model based on different evaluation criteria. The experimental results show that the Bi-Directional Long Short-Term Memory algorithm outperformed the three other algorithms in terms of the F1-score measurement, with 92.0% on the sentiment classification task and 89.6% on the topic classification task. In addition, we developed a sentiment analysis application for analyzing student feedback. The application helps the institution recognize students' opinions about a problem and identify shortcomings that still exist. With this application, the institution can propose appropriate methods to improve the quality of training activities in the future.
[ { "created": "Sun, 17 Nov 2019 12:32:50 GMT", "version": "v1" } ]
2019-11-19
[ [ "Nguyen", "Phu X. V.", "" ], [ "Hong", "Tham T. T.", "" ], [ "Van Nguyen", "Kiet", "" ], [ "Nguyen", "Ngan Luu-Thuy", "" ] ]
Students' feedback is an important source for collecting students' opinions to improve the quality of training activities. By applying sentiment analysis to student feedback data, we can determine the sentiment polarities that reveal problems in the institution, so that the necessary changes can be made to improve the quality of teaching and learning. This study applied machine learning and natural language processing techniques (Naive Bayes, Maximum Entropy, Long Short-Term Memory, Bi-Directional Long Short-Term Memory) to the Vietnamese Students' Feedback Corpus collected from a university. The final results were compared and evaluated to find the most effective model based on different evaluation criteria. The experimental results show that the Bi-Directional Long Short-Term Memory algorithm outperformed the three other algorithms in terms of the F1-score measurement, with 92.0% on the sentiment classification task and 89.6% on the topic classification task. In addition, we developed a sentiment analysis application for analyzing student feedback. The application helps the institution recognize students' opinions about a problem and identify shortcomings that still exist. With this application, the institution can propose appropriate methods to improve the quality of training activities in the future.
2404.10151
Raaghav Ravishankar
Raaghav Ravishankar, Sandeep Kulkarni, Sathya Peri, Gokarna Sharma
Distributing Context-Aware Shared Memory Data Structures: A Case Study on Singly-Linked Lists
null
null
null
null
cs.DC
http://creativecommons.org/licenses/by/4.0/
In this paper, we study the partitioning of a context-aware shared memory data structure so that it can be implemented as a distributed data structure running on multiple machines. By context-aware data structures, we mean that the result of an operation not only depends upon the value of the shared data but also upon the previous operations performed by the same client. While there is substantial work on designing distributed data structures, designing distributed context-aware data structures has not received much attention. We focus on singly-linked lists as a case study of the context-aware data structure. We start with a shared memory context-aware lock-free singly-linked list and show how it can be transformed into a distributed lock-free context-aware singly-linked list. The main challenge in such a transformation is to preserve properties of client-visible operations of the underlying data structure. We present two protocols that preserve these properties of client-visible operations of the linked list. In the first protocol, the distribution is done in the background as a low priority task, while in the second protocol the client-visible operations help the task of distribution without affecting client latency. In both protocols, the client-visible operations remain lock-free. Also, our transformation approach does not utilize any hardware primitives (except a compare-and-swap operation on a single word). We note that our transformation is generic and can be used for other lock-free context-aware data structures that can be constructed from singly-linked lists.
[ { "created": "Mon, 15 Apr 2024 21:51:11 GMT", "version": "v1" }, { "created": "Fri, 24 May 2024 06:43:24 GMT", "version": "v2" } ]
2024-05-27
[ [ "Ravishankar", "Raaghav", "" ], [ "Kulkarni", "Sandeep", "" ], [ "Peri", "Sathya", "" ], [ "Sharma", "Gokarna", "" ] ]
In this paper, we study the partitioning of a context-aware shared memory data structure so that it can be implemented as a distributed data structure running on multiple machines. By context-aware data structures, we mean that the result of an operation not only depends upon the value of the shared data but also upon the previous operations performed by the same client. While there is substantial work on designing distributed data structures, designing distributed context-aware data structures has not received much attention. We focus on singly-linked lists as a case study of the context-aware data structure. We start with a shared memory context-aware lock-free singly-linked list and show how it can be transformed into a distributed lock-free context-aware singly-linked list. The main challenge in such a transformation is to preserve properties of client-visible operations of the underlying data structure. We present two protocols that preserve these properties of client-visible operations of the linked list. In the first protocol, the distribution is done in the background as a low priority task, while in the second protocol the client-visible operations help the task of distribution without affecting client latency. In both protocols, the client-visible operations remain lock-free. Also, our transformation approach does not utilize any hardware primitives (except a compare-and-swap operation on a single word). We note that our transformation is generic and can be used for other lock-free context-aware data structures that can be constructed from singly-linked lists.
2407.09687
Philip Schniter
Saurav K. Shastri and Philip Schniter
Fast and Robust Phase Retrieval via Deep Expectation-Consistent Approximation
null
null
null
null
cs.CV cs.IT math.IT
http://creativecommons.org/licenses/by/4.0/
Accurately recovering images from phaseless measurements is a challenging and long-standing problem. In this work, we present "deepECpr," which combines expectation-consistent (EC) approximation with deep denoising networks to surpass state-of-the-art phase-retrieval methods in both speed and accuracy. In addition to applying EC in a non-traditional manner, deepECpr includes a novel stochastic damping scheme that is inspired by recent diffusion methods. Like existing phase-retrieval methods based on plug-and-play priors, regularization by denoising, or diffusion, deepECpr iterates a denoising stage with a measurement-exploitation stage. But unlike existing methods, deepECpr requires far fewer denoiser calls. We compare deepECpr to the state-of-the-art prDeep (Metzler et al., 2018), Deep-ITA (Wang et al., 2020), and Diffusion Posterior Sampling (Chung et al., 2023) methods for noisy phase-retrieval of color, natural, and unnatural grayscale images on oversampled-Fourier and coded-diffraction-pattern measurements and find improvements in both PSNR and SSIM with 5x fewer denoiser calls.
[ { "created": "Fri, 12 Jul 2024 21:12:50 GMT", "version": "v1" } ]
2024-07-16
[ [ "Shastri", "Saurav K.", "" ], [ "Schniter", "Philip", "" ] ]
Accurately recovering images from phaseless measurements is a challenging and long-standing problem. In this work, we present "deepECpr," which combines expectation-consistent (EC) approximation with deep denoising networks to surpass state-of-the-art phase-retrieval methods in both speed and accuracy. In addition to applying EC in a non-traditional manner, deepECpr includes a novel stochastic damping scheme that is inspired by recent diffusion methods. Like existing phase-retrieval methods based on plug-and-play priors, regularization by denoising, or diffusion, deepECpr iterates a denoising stage with a measurement-exploitation stage. But unlike existing methods, deepECpr requires far fewer denoiser calls. We compare deepECpr to the state-of-the-art prDeep (Metzler et al., 2018), Deep-ITA (Wang et al., 2020), and Diffusion Posterior Sampling (Chung et al., 2023) methods for noisy phase-retrieval of color, natural, and unnatural grayscale images on oversampled-Fourier and coded-diffraction-pattern measurements and find improvements in both PSNR and SSIM with 5x fewer denoiser calls.
1903.06119
Roberto Bagnara
Roberto Bagnara, Abramo Bagnara, Fabio Biselli, Michele Chiari, Roberta Gori
Correct Approximation of IEEE 754 Floating-Point Arithmetic for Program Verification
64 pages, 19 figures, 2 tables
Constraints 27, 29-69, 2022
10.1007/s10601-021-09322-9
null
cs.PL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Verification of programs using floating-point arithmetic is challenging on several accounts. One of the difficulties of reasoning about such programs is due to the peculiarities of floating-point arithmetic: rounding errors, infinities, non-numeric objects (NaNs), signed zeroes, denormal numbers, different rounding modes, etc. One possibility to reason about floating-point arithmetic is to model a program computation path by means of a set of ternary constraints of the form z = x op y and use constraint propagation techniques to infer new information on the variables' possible values. In this setting, we define and prove the correctness of algorithms to precisely bound the value of one of the variables x, y or z, starting from the bounds known for the other two. We do this for each of the operations and for each rounding mode defined by the IEEE 754 binary floating-point standard, even in the case the rounding mode in effect is only partially known. This is the first time that such so-called filtering algorithms are defined and their correctness is formally proved. This is an important step in paving the way to formal verification of programs that use floating-point arithmetic.
[ { "created": "Mon, 11 Mar 2019 18:31:49 GMT", "version": "v1" }, { "created": "Thu, 28 Oct 2021 20:24:52 GMT", "version": "v2" } ]
2022-06-23
[ [ "Bagnara", "Roberto", "" ], [ "Bagnara", "Abramo", "" ], [ "Biselli", "Fabio", "" ], [ "Chiari", "Michele", "" ], [ "Gori", "Roberta", "" ] ]
Verification of programs using floating-point arithmetic is challenging on several accounts. One of the difficulties of reasoning about such programs is due to the peculiarities of floating-point arithmetic: rounding errors, infinities, non-numeric objects (NaNs), signed zeroes, denormal numbers, different rounding modes, etc. One possibility to reason about floating-point arithmetic is to model a program computation path by means of a set of ternary constraints of the form z = x op y and use constraint propagation techniques to infer new information on the variables' possible values. In this setting, we define and prove the correctness of algorithms to precisely bound the value of one of the variables x, y or z, starting from the bounds known for the other two. We do this for each of the operations and for each rounding mode defined by the IEEE 754 binary floating-point standard, even in the case the rounding mode in effect is only partially known. This is the first time that such so-called filtering algorithms are defined and their correctness is formally proved. This is an important step in paving the way to formal verification of programs that use floating-point arithmetic.
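The filtering idea above can be illustrated for the constraint z = x + y: given interval bounds on x and y, a safe (if slightly conservative) enclosure of z is obtained by computing the bound sums and widening one ulp outward. A Python sketch using `math.nextafter` (Python 3.9+); this is an illustration of the general idea, not the paper's proved-optimal algorithms:

```python
import math

# Direct projection for z = x + y under round-to-nearest: compute the
# bound sums in double precision, then widen each endpoint by one ulp
# so the true set of representable results is certainly enclosed.
# Conservative illustrative sketch only.

def add_project_z(xl, xu, yl, yu):
    """Safe bounds for z = x + y given x in [xl, xu], y in [yl, yu]."""
    lo = math.nextafter(xl + yl, -math.inf)  # widen downward
    hi = math.nextafter(xu + yu, math.inf)   # widen upward
    return lo, hi
```

The inverse projections (bounding x from bounds on z and y, and similarly for y) follow the same outward-rounding pattern, and it is precisely the tightening of such projections that the paper proves correct per operation and per rounding mode.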
2306.02161
Manuele Rusci Mr.
Manuele Rusci and Tinne Tuytelaars
Few-Shot Open-Set Learning for On-Device Customization of KeyWord Spotting Systems
Accepted at INTERSPEECH 2023
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A personalized KeyWord Spotting (KWS) pipeline typically requires the training of a Deep Learning model on a large set of user-defined speech utterances, preventing fast customization directly applied on-device. To fill this gap, this paper investigates few-shot learning methods for open-set KWS classification by combining a deep feature encoder with a prototype-based classifier. With user-defined keywords from 10 classes of the Google Speech Command dataset, our study reports an accuracy of up to 76% in a 10-shot scenario while the false acceptance rate of unknown data is kept to 5%. In the analyzed settings, the usage of the triplet loss to train an encoder with normalized output features performs better than the prototypical networks jointly trained with a generator of dummy unknown-class prototypes. This design is also more effective than encoders trained on a classification problem and features fewer parameters than other iso-accuracy approaches.
[ { "created": "Sat, 3 Jun 2023 17:10:33 GMT", "version": "v1" } ]
2023-06-06
[ [ "Rusci", "Manuele", "" ], [ "Tuytelaars", "Tinne", "" ] ]
A personalized KeyWord Spotting (KWS) pipeline typically requires the training of a Deep Learning model on a large set of user-defined speech utterances, preventing fast customization directly applied on-device. To fill this gap, this paper investigates few-shot learning methods for open-set KWS classification by combining a deep feature encoder with a prototype-based classifier. With user-defined keywords from 10 classes of the Google Speech Command dataset, our study reports an accuracy of up to 76% in a 10-shot scenario while the false acceptance rate of unknown data is kept to 5%. In the analyzed settings, the usage of the triplet loss to train an encoder with normalized output features performs better than the prototypical networks jointly trained with a generator of dummy unknown-class prototypes. This design is also more effective than encoders trained on a classification problem and features fewer parameters than other iso-accuracy approaches.
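The pipeline described above pairs a triplet-loss-trained encoder producing normalized embeddings with a prototype-based, open-set classifier. A minimal NumPy sketch of both pieces; the margin, rejection threshold, and function names are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

# Hinge triplet loss on L2-normalized embeddings: pull anchor toward
# positive, push it away from negative by at least the margin.
def triplet_loss(anchor, positive, negative, margin=0.5):
    a = anchor / np.linalg.norm(anchor)
    p = positive / np.linalg.norm(positive)
    n = negative / np.linalg.norm(negative)
    d_ap = np.sum((a - p) ** 2)
    d_an = np.sum((a - n) ** 2)
    return max(0.0, d_ap - d_an + margin)

# Open-set nearest-prototype classifier: assign the closest class
# prototype, or reject as "unknown" beyond a distance threshold.
def nearest_prototype(query, prototypes, threshold):
    q = query / np.linalg.norm(query)
    best, best_d = None, float("inf")
    for label, proto in prototypes.items():
        p = proto / np.linalg.norm(proto)
        d = np.sum((q - p) ** 2)
        if d < best_d:
            best, best_d = label, d
    return best if best_d <= threshold else "unknown"
```

The rejection threshold is what controls the trade-off the abstract reports between accuracy on user-defined keywords and the false acceptance rate on unknown speech.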
2202.10045
Ravi Suman
Ravi Suman, Ananth Krishnamurthy
Analysis of Two-Station Polling Queues with Setups using Continuous Time Markov Chain
null
null
null
null
cs.PF math.PR
http://creativecommons.org/publicdomain/zero/1.0/
The paper analyzes the performance of a tandem network of polling queues with setups. For a system with two products and two stations, we propose a new approach based on a partially-collapsible state-space characterization to reduce state-space complexity. In this approach, the size of the state space is varied depending on the information needed to determine buffer levels and waiting times. We evaluate system performance under different system settings, comment on the numerical accuracy of the approach, and provide managerial insights. Numerical results show that the approach yields reliable estimates of the performance measures. We also show how product and station asymmetry significantly affect the system's performance.
[ { "created": "Mon, 21 Feb 2022 08:28:55 GMT", "version": "v1" } ]
2022-02-22
[ [ "Suman", "Ravi", "" ], [ "Krishnamurthy", "Ananth", "" ] ]
The paper analyzes the performance of a tandem network of polling queues with setups. For a system with two products and two stations, we propose a new approach based on a partially-collapsible state-space characterization to reduce state-space complexity. In this approach, the size of the state space is varied depending on the information needed to determine buffer levels and waiting times. We evaluate system performance under different system settings, comment on the numerical accuracy of the approach, and provide managerial insights. Numerical results show that the approach yields reliable estimates of the performance measures. We also show how product and station asymmetry significantly affect the system's performance.
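Once a continuous-time Markov chain model of the polling system is built, performance measures follow from its stationary distribution, obtained by solving pi Q = 0 subject to the entries of pi summing to one. A generic NumPy sketch for a small generator matrix Q; this shows the standard solve only, not the paper's partially-collapsible state-space construction:

```python
import numpy as np

# Stationary distribution of a CTMC: solve pi @ Q = 0 together with
# sum(pi) = 1 as one overdetermined least-squares system. Generic
# illustrative sketch for small generators.

def stationary_distribution(Q):
    """Return pi with pi @ Q = 0 and pi summing to 1."""
    n = Q.shape[0]
    A = np.vstack([Q.T, np.ones(n)])  # transpose so columns act on pi
    b = np.zeros(n + 1)
    b[-1] = 1.0                       # normalization equation
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi
```

For a two-state chain with rate 2 out of state 0 and rate 3 out of state 1, the solution is (3/5, 2/5), from which buffer levels and waiting times would be read off as expectations under pi.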
2007.05059
Taylor Webb
Taylor W. Webb, Zachary Dulberg, Steven M. Frankland, Alexander A. Petrov, Randall C. O'Reilly, Jonathan D. Cohen
Learning Representations that Support Extrapolation
ICML 2020
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Extrapolation -- the ability to make inferences that go beyond the scope of one's experiences -- is a hallmark of human intelligence. By contrast, the generalization exhibited by contemporary neural network algorithms is largely limited to interpolation between data points in their training corpora. In this paper, we consider the challenge of learning representations that support extrapolation. We introduce a novel visual analogy benchmark that allows the graded evaluation of extrapolation as a function of distance from the convex domain defined by the training data. We also introduce a simple technique, temporal context normalization, that encourages representations that emphasize the relations between objects. We find that this technique enables a significant improvement in the ability to extrapolate, considerably outperforming a number of competitive techniques.
[ { "created": "Thu, 9 Jul 2020 20:53:45 GMT", "version": "v1" }, { "created": "Sat, 8 Aug 2020 22:36:46 GMT", "version": "v2" }, { "created": "Wed, 6 Sep 2023 18:20:08 GMT", "version": "v3" } ]
2023-09-08
[ [ "Webb", "Taylor W.", "" ], [ "Dulberg", "Zachary", "" ], [ "Frankland", "Steven M.", "" ], [ "Petrov", "Alexander A.", "" ], [ "O'Reilly", "Randall C.", "" ], [ "Cohen", "Jonathan D.", "" ] ]
Extrapolation -- the ability to make inferences that go beyond the scope of one's experiences -- is a hallmark of human intelligence. By contrast, the generalization exhibited by contemporary neural network algorithms is largely limited to interpolation between data points in their training corpora. In this paper, we consider the challenge of learning representations that support extrapolation. We introduce a novel visual analogy benchmark that allows the graded evaluation of extrapolation as a function of distance from the convex domain defined by the training data. We also introduce a simple technique, temporal context normalization, that encourages representations that emphasize the relations between objects. We find that this technique enables a significant improvement in the ability to extrapolate, considerably outperforming a number of competitive techniques.
2101.10203
Eli Schwartz
Eli Schwartz, Alex Bronstein, Raja Giryes
ISP Distillation
null
IEEE Open Journal of Signal Processing 2023
10.1109/OJSP.2023.3239819
null
cs.CV cs.AI
http://creativecommons.org/licenses/by-nc-sa/4.0/
Nowadays, many of the images captured are `observed' by machines only and not by humans, e.g., in autonomous systems. High-level machine vision models, such as object recognition or semantic segmentation, assume images are transformed into some canonical image space by the camera Image Signal Processor (ISP). However, the camera ISP is optimized for producing visually pleasing images for human observers and not for machines. Therefore, one may spare the ISP compute time and apply vision models directly to RAW images. Yet, it has been shown that training such models directly on RAW images results in a performance drop. To mitigate this drop, we use a RAW and RGB image pairs dataset, which can be easily acquired with no human labeling. We then train a model that is applied directly to the RAW data by using knowledge distillation such that the model predictions for RAW images will be aligned with the predictions of an off-the-shelf pre-trained model for processed RGB images. Our experiments show that our performance on RAW images for object classification and semantic segmentation is significantly better than models trained on labeled RAW images. It also reasonably matches the predictions of a pre-trained model on processed RGB images, while saving the ISP compute overhead.
[ { "created": "Mon, 25 Jan 2021 16:12:24 GMT", "version": "v1" }, { "created": "Thu, 15 Sep 2022 09:02:28 GMT", "version": "v2" }, { "created": "Thu, 4 May 2023 14:27:49 GMT", "version": "v3" } ]
2023-05-05
[ [ "Schwartz", "Eli", "" ], [ "Bronstein", "Alex", "" ], [ "Giryes", "Raja", "" ] ]
Nowadays, many of the images captured are `observed' by machines only and not by humans, e.g., in autonomous systems. High-level machine vision models, such as object recognition or semantic segmentation, assume images are transformed into some canonical image space by the camera Image Signal Processor (ISP). However, the camera ISP is optimized for producing visually pleasing images for human observers and not for machines. Therefore, one may spare the ISP compute time and apply vision models directly to RAW images. Yet, it has been shown that training such models directly on RAW images results in a performance drop. To mitigate this drop, we use a RAW and RGB image pairs dataset, which can be easily acquired with no human labeling. We then train a model that is applied directly to the RAW data by using knowledge distillation such that the model predictions for RAW images will be aligned with the predictions of an off-the-shelf pre-trained model for processed RGB images. Our experiments show that our performance on RAW images for object classification and semantic segmentation is significantly better than models trained on labeled RAW images. It also reasonably matches the predictions of a pre-trained model on processed RGB images, while saving the ISP compute overhead.
1411.3923
Joe Alexandersen
Joe Alexandersen and Boyan S. Lazarov
Robust topology optimisation of microstructural details without length scale separation - using a spectral coarse basis preconditioner
null
Comput.Method.Appl.M. 290 (2015) 156-182
10.1016/j.cma.2015.02.028
null
cs.CE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper applies topology optimisation to the design of structures with periodic microstructural details without length scale separation, i.e. considering the complete macroscopic structure and its response, while resolving all microstructural details, as compared to the often used homogenisation approach. The approach takes boundary conditions into account and ensures connected and macroscopically optimised microstructures regardless of the difference in micro- and macroscopic length scales. This results in microstructures tailored for specific applications rather than specific properties. Dealing with the complete macroscopic structure and its response is computationally challenging as very fine discretisations are needed in order to resolve all microstructural details. Therefore, this article shows the benefits of applying a contrast-independent spectral preconditioner based on the multiscale finite element method (MsFEM) to large structures with fully-resolved microstructural details. The density-based topology optimisation approach combined with a Heaviside projection filter and a stochastic robust formulation is used on various problems, with both periodic and layered microstructures. The presented approach is shown to allow for the topology optimisation of very large problems in \textsc{Matlab}, specifically a problem with 26 million displacement degrees of freedom in 26 hours using a single computational thread.
[ { "created": "Thu, 13 Nov 2014 15:49:40 GMT", "version": "v1" } ]
2015-08-19
[ [ "Alexandersen", "Joe", "" ], [ "Lazarov", "Boyan S.", "" ] ]
This paper applies topology optimisation to the design of structures with periodic microstructural details without length scale separation, i.e. considering the complete macroscopic structure and its response, while resolving all microstructural details, as compared to the often used homogenisation approach. The approach takes boundary conditions into account and ensures connected and macroscopically optimised microstructures regardless of the difference in micro- and macroscopic length scales. This results in microstructures tailored for specific applications rather than specific properties. Dealing with the complete macroscopic structure and its response is computationally challenging as very fine discretisations are needed in order to resolve all microstructural details. Therefore, this article shows the benefits of applying a contrast-independent spectral preconditioner based on the multiscale finite element method (MsFEM) to large structures with fully-resolved microstructural details. The density-based topology optimisation approach combined with a Heaviside projection filter and a stochastic robust formulation is used on various problems, with both periodic and layered microstructures. The presented approach is shown to allow for the topology optimisation of very large problems in \textsc{Matlab}, specifically a problem with 26 million displacement degrees of freedom in 26 hours using a single computational thread.
2407.15838
Yue Cao
Yangzhou Liu, Yue Cao, Zhangwei Gao, Weiyun Wang, Zhe Chen, Wenhai Wang, Hao Tian, Lewei Lu, Xizhou Zhu, Tong Lu, Yu Qiao, Jifeng Dai
MMInstruct: A High-Quality Multi-Modal Instruction Tuning Dataset with Extensive Diversity
18 pages, 8 figures, technical report
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Vision-language supervised fine-tuning is effective in enhancing the performance of Vision Large Language Models (VLLMs). However, existing visual instruction tuning datasets have the following limitations: (1) Instruction annotation quality: despite existing VLLMs exhibiting strong performance, instructions generated by those advanced VLLMs may still suffer from inaccuracies, such as hallucinations. (2) Instruction and image diversity: the limited range of instruction types and the lack of diversity in image data may impact the model's ability to generate diversified outputs that are closer to real-world scenarios. To address these challenges, we construct MMInstruct, a high-quality, diverse visual instruction tuning dataset consisting of 973K instructions from 24 domains. There are four instruction types: Judgement, Multiple-Choice, Long Visual Question Answering, and Short Visual Question Answering. To construct MMInstruct, we propose an instruction generation data engine that leverages GPT-4V, GPT-3.5, and manual correction. Our instruction generation engine enables semi-automatic, low-cost, and multi-domain instruction generation at 1/6 the cost of manual construction. Through extensive experimental validation and ablation studies, we demonstrate that MMInstruct can significantly improve the performance of VLLMs; e.g., a model fine-tuned on MMInstruct achieves new state-of-the-art performance on 10 out of 12 benchmarks. The code and data shall be available at https://github.com/yuecao0119/MMInstruct.
[ { "created": "Mon, 22 Jul 2024 17:55:22 GMT", "version": "v1" }, { "created": "Wed, 7 Aug 2024 09:34:25 GMT", "version": "v2" } ]
2024-08-08
[ [ "Liu", "Yangzhou", "" ], [ "Cao", "Yue", "" ], [ "Gao", "Zhangwei", "" ], [ "Wang", "Weiyun", "" ], [ "Chen", "Zhe", "" ], [ "Wang", "Wenhai", "" ], [ "Tian", "Hao", "" ], [ "Lu", "Lewei", "...
Vision-language supervised fine-tuning is effective in enhancing the performance of Vision Large Language Models (VLLMs). However, existing visual instruction tuning datasets have the following limitations: (1) Instruction annotation quality: despite existing VLLMs exhibiting strong performance, instructions generated by those advanced VLLMs may still suffer from inaccuracies, such as hallucinations. (2) Instruction and image diversity: the limited range of instruction types and the lack of diversity in image data may impact the model's ability to generate diversified outputs that are closer to real-world scenarios. To address these challenges, we construct MMInstruct, a high-quality, diverse visual instruction tuning dataset consisting of 973K instructions from 24 domains. There are four instruction types: Judgement, Multiple-Choice, Long Visual Question Answering, and Short Visual Question Answering. To construct MMInstruct, we propose an instruction generation data engine that leverages GPT-4V, GPT-3.5, and manual correction. Our instruction generation engine enables semi-automatic, low-cost, and multi-domain instruction generation at 1/6 the cost of manual construction. Through extensive experimental validation and ablation studies, we demonstrate that MMInstruct can significantly improve the performance of VLLMs; e.g., a model fine-tuned on MMInstruct achieves new state-of-the-art performance on 10 out of 12 benchmarks. The code and data shall be available at https://github.com/yuecao0119/MMInstruct.
1404.4067
Tamal Ghosh
Tamal Ghosh, Tanmoy Chakraborty and Pranab K Dan
An effective AHP-based metaheuristic approach to solve supplier selection problem
null
International Journal of Procurement Management, Vol. 5, No. 2, 2012
10.1504/IJPM.2012.045647
null
cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The supplier selection problem, which consists of selecting the best supplier from a group of pre-specified candidates, is identified as a Multi-Criteria Decision-Making (MCDM) problem and is significant in terms of both qualitative and quantitative attributes. It is a fundamental issue to achieve a trade-off between such quantifiable and unquantifiable attributes with the aim of accomplishing the best solution to the abovementioned problem. This article presents a metaheuristic-based optimization model to solve this NP-complete problem. Initially, the Analytic Hierarchy Process (AHP) is implemented to generate an initial feasible solution to the problem. Thereafter, a Simulated Annealing (SA) algorithm is exploited to improve the quality of the obtained solution. The Taguchi robust design method is exploited to resolve the critical issues concerning the parameter selection of the SA technique. In order to verify the proposed methodology, numerical results are demonstrated based on tangible industry data.
[ { "created": "Tue, 15 Apr 2014 20:21:31 GMT", "version": "v1" } ]
2014-04-17
[ [ "Ghosh", "Tamal", "" ], [ "Chakraborty", "Tanmoy", "" ], [ "Dan", "Pranab K", "" ] ]
The supplier selection problem, which consists of selecting the best supplier from a group of pre-specified candidates, is identified as a Multi-Criteria Decision-Making (MCDM) problem and is significant in terms of both qualitative and quantitative attributes. It is a fundamental issue to achieve a trade-off between such quantifiable and unquantifiable attributes with the aim of accomplishing the best solution to the abovementioned problem. This article presents a metaheuristic-based optimization model to solve this NP-complete problem. Initially, the Analytic Hierarchy Process (AHP) is implemented to generate an initial feasible solution to the problem. Thereafter, a Simulated Annealing (SA) algorithm is exploited to improve the quality of the obtained solution. The Taguchi robust design method is exploited to resolve the critical issues concerning the parameter selection of the SA technique. In order to verify the proposed methodology, numerical results are demonstrated based on tangible industry data.
2405.17664
Shisheng Hu
Shisheng Hu, Mushu Li, Jie Gao, Conghao Zhou and Xuemin Shen
Adaptive Device-Edge Collaboration on DNN Inference in AIoT: A Digital Twin-Assisted Approach
null
IEEE Internet Things J. (Volume: 11, Issue: 7, 01 April 2024)
10.1109/JIOT.2023.3336600
null
cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Device-edge collaboration on deep neural network (DNN) inference is a promising approach to efficiently utilizing network resources for supporting artificial intelligence of things (AIoT) applications. In this paper, we propose a novel digital twin (DT)-assisted approach to device-edge collaboration on DNN inference that determines whether and when to stop local inference at a device and upload the intermediate results to complete the inference on an edge server. Instead of determining the collaboration for each DNN inference task only upon its generation, multi-step decision-making is performed during the on-device inference to adapt to the dynamic computing workload status at the device and the edge server. To enhance the adaptivity, a DT is constructed to evaluate all potential offloading decisions for each DNN inference task, which provides augmented training data for a machine learning-assisted decision-making algorithm. Then, another DT is constructed to estimate the inference status at the device to avoid frequently fetching the status information from the device, thus reducing the signaling overhead. We also derive necessary conditions for optimal offloading decisions to reduce the offloading decision space. Simulation results demonstrate the outstanding performance of our DT-assisted approach in terms of balancing the tradeoff among inference accuracy, delay, and energy consumption.
[ { "created": "Mon, 27 May 2024 21:30:52 GMT", "version": "v1" } ]
2024-05-29
[ [ "Hu", "Shisheng", "" ], [ "Li", "Mushu", "" ], [ "Gao", "Jie", "" ], [ "Zhou", "Conghao", "" ], [ "Shen", "Xuemin", "" ] ]
Device-edge collaboration on deep neural network (DNN) inference is a promising approach to efficiently utilizing network resources for supporting artificial intelligence of things (AIoT) applications. In this paper, we propose a novel digital twin (DT)-assisted approach to device-edge collaboration on DNN inference that determines whether and when to stop local inference at a device and upload the intermediate results to complete the inference on an edge server. Instead of determining the collaboration for each DNN inference task only upon its generation, multi-step decision-making is performed during the on-device inference to adapt to the dynamic computing workload status at the device and the edge server. To enhance the adaptivity, a DT is constructed to evaluate all potential offloading decisions for each DNN inference task, which provides augmented training data for a machine learning-assisted decision-making algorithm. Then, another DT is constructed to estimate the inference status at the device to avoid frequently fetching the status information from the device, thus reducing the signaling overhead. We also derive necessary conditions for optimal offloading decisions to reduce the offloading decision space. Simulation results demonstrate the outstanding performance of our DT-assisted approach in terms of balancing the tradeoff among inference accuracy, delay, and energy consumption.
1410.4099
Frank Duque M.S.
Frank Duque, Carlos Hidalgo-Toscano
An upper bound on the k-modem illumination problem
9 pages, 4 figures
null
10.1142/S021819591550017X
null
cs.CG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A variation on the classical polygon illumination problem was introduced in [Aichholzer et al., EuroCG'09]. In this variant, light sources are replaced by wireless devices called k-modems, which can penetrate a fixed number k of "walls". A point in the interior of a polygon is "illuminated" by a k-modem if the line segment joining them intersects at most k edges of the polygon. It is easy to construct polygons of n vertices where the number of k-modems required to illuminate all interior points is Omega(n/k). However, no non-trivial upper bound is known. In this paper we prove that the number of k-modems required to illuminate any polygon of n vertices is at most O(n/k). For the cases of illuminating an orthogonal polygon or a set of disjoint orthogonal segments, we give a tighter bound of 6n/k + 1. Moreover, we present an O(n log n) time algorithm to achieve this bound.
[ { "created": "Wed, 15 Oct 2014 15:20:59 GMT", "version": "v1" } ]
2019-10-21
[ [ "Duque", "Frank", "" ], [ "Hidalgo-Toscano", "Carlos", "" ] ]
A variation on the classical polygon illumination problem was introduced in [Aichholzer et al., EuroCG'09]. In this variant, light sources are replaced by wireless devices called k-modems, which can penetrate a fixed number k of "walls". A point in the interior of a polygon is "illuminated" by a k-modem if the line segment joining them intersects at most k edges of the polygon. It is easy to construct polygons of n vertices where the number of k-modems required to illuminate all interior points is Omega(n/k). However, no non-trivial upper bound is known. In this paper we prove that the number of k-modems required to illuminate any polygon of n vertices is at most O(n/k). For the cases of illuminating an orthogonal polygon or a set of disjoint orthogonal segments, we give a tighter bound of 6n/k + 1. Moreover, we present an O(n log n) time algorithm to achieve this bound.
2210.08869
Jiakang Zheng
Jiakang Zheng, Zhuoyi Zhao, Jiayi Zhang, Julian Cheng, and Victor C. M. Leung
Performance Analysis of Cell-Free Massive MIMO Systems with Asynchronous Reception
Accepted in IEEE GLOBECOM Workshops 2022
null
null
null
cs.IT eess.SP math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cell-free (CF) massive multiple-input multiple-output (MIMO) is considered a promising technology for achieving the ultimate performance limit. However, due to its distributed architecture and low-cost access points (APs), the signals received at user equipments (UEs) are most likely asynchronous. In this paper, we investigate the performance of CF massive MIMO systems with asynchronous reception, including the effects of both delay and oscillator phases. Taking into account the imperfect channel state information caused by phase asynchronization and pilot contamination, we obtain novel closed-form downlink spectral efficiency (SE) expressions with coherent and non-coherent data transmission schemes, respectively. Simulation results show that asynchronous reception destroys the orthogonality of pilots and the coherent transmission of data, and thus results in poor system performance. In addition, obtaining a highly accurate delay phase is essential for CF massive MIMO systems to achieve coherent transmission gain. Moreover, the oscillator phase of the UEs has a larger effect on SE than that of the APs, because the latter can be significantly reduced by increasing the number of antennas.
[ { "created": "Mon, 17 Oct 2022 09:06:48 GMT", "version": "v1" } ]
2022-10-18
[ [ "Zheng", "Jiakang", "" ], [ "Zhao", "Zhuoyi", "" ], [ "Zhang", "Jiayi", "" ], [ "Cheng", "Julian", "" ], [ "Leung", "Victor C. M.", "" ] ]
Cell-free (CF) massive multiple-input multiple-output (MIMO) is considered a promising technology for achieving the ultimate performance limit. However, due to its distributed architecture and low-cost access points (APs), the signals received at user equipments (UEs) are most likely asynchronous. In this paper, we investigate the performance of CF massive MIMO systems with asynchronous reception, including the effects of both delay and oscillator phases. Taking into account the imperfect channel state information caused by phase asynchronization and pilot contamination, we obtain novel closed-form downlink spectral efficiency (SE) expressions with coherent and non-coherent data transmission schemes, respectively. Simulation results show that asynchronous reception destroys the orthogonality of pilots and the coherent transmission of data, and thus results in poor system performance. In addition, obtaining a highly accurate delay phase is essential for CF massive MIMO systems to achieve coherent transmission gain. Moreover, the oscillator phase of the UEs has a larger effect on SE than that of the APs, because the latter can be significantly reduced by increasing the number of antennas.
2012.11976
Daniel Str\"uber
Johan Aronsson, Philip Lu, Daniel Str\"uber, Thorsten Berger
A Maturity Assessment Framework for Conversational AI Development Platforms
10 pages, 10 figures. Accepted for publication at SAC 2021: ACM/SIGAPP Symposium On Applied Computing
null
10.1145/3412841.3442046
null
cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Conversational Artificial Intelligence (AI) systems have recently sky-rocketed in popularity and are now used in many applications, from car assistants to customer support. The development of conversational AI systems is supported by a large variety of software platforms, all with similar goals, but different focus points and functionalities. A systematic foundation for classifying conversational AI platforms is currently lacking. We propose a framework for assessing the maturity level of conversational AI development platforms. Our framework is based on a systematic literature review, in which we extracted common and distinguishing features of various open-source and commercial (or in-house) platforms. Inspired by language reference frameworks, we identify different maturity levels that a conversational AI development platform may exhibit in understanding and responding to user inputs. Our framework can guide organizations in selecting a conversational AI development platform according to their needs, as well as help researchers and platform developers improve the maturity of their platforms.
[ { "created": "Tue, 22 Dec 2020 12:58:08 GMT", "version": "v1" } ]
2020-12-23
[ [ "Aronsson", "Johan", "" ], [ "Lu", "Philip", "" ], [ "Strüber", "Daniel", "" ], [ "Berger", "Thorsten", "" ] ]
Conversational Artificial Intelligence (AI) systems have recently sky-rocketed in popularity and are now used in many applications, from car assistants to customer support. The development of conversational AI systems is supported by a large variety of software platforms, all with similar goals, but different focus points and functionalities. A systematic foundation for classifying conversational AI platforms is currently lacking. We propose a framework for assessing the maturity level of conversational AI development platforms. Our framework is based on a systematic literature review, in which we extracted common and distinguishing features of various open-source and commercial (or in-house) platforms. Inspired by language reference frameworks, we identify different maturity levels that a conversational AI development platform may exhibit in understanding and responding to user inputs. Our framework can guide organizations in selecting a conversational AI development platform according to their needs, as well as help researchers and platform developers improve the maturity of their platforms.
1006.2063
Chien-Chung Huang
Danny Hermelin and Chien-Chung Huang and Stefan Kratsch and Magnus Wahlstrom
Parameterized Two-Player Nash Equilibrium
null
null
null
null
cs.CC cs.GT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the computation of Nash equilibria in a two-player normal form game from the perspective of parameterized complexity. Recent results proved hardness for a number of variants, when parameterized by the support size. We complement those results by identifying three cases in which the problem becomes fixed-parameter tractable. These cases occur in the previously studied settings of sparse games and unbalanced games, as well as in the newly considered case of locally bounded treewidth games, which generalizes both of these cases.
[ { "created": "Thu, 10 Jun 2010 15:27:40 GMT", "version": "v1" } ]
2010-06-11
[ [ "Hermelin", "Danny", "" ], [ "Huang", "Chien-Chung", "" ], [ "Kratsch", "Stefan", "" ], [ "Wahlstrom", "Magnus", "" ] ]
We study the computation of Nash equilibria in a two-player normal form game from the perspective of parameterized complexity. Recent results proved hardness for a number of variants, when parameterized by the support size. We complement those results by identifying three cases in which the problem becomes fixed-parameter tractable. These cases occur in the previously studied settings of sparse games and unbalanced games, as well as in the newly considered case of locally bounded treewidth games, which generalizes both of these cases.
2105.13580
Marcus Hebel
Bj\"orn Borgmann (1 and 2), Volker Schatz (1), Marcus Hammer (1), Marcus Hebel (1), Michael Arens (1), Uwe Stilla (2) ((1) Fraunhofer IOSB, Ettlingen, Germany, (2) Technical University of Munich (TUM), Munich, Germany)
MODISSA: a multipurpose platform for the prototypical realization of vehicle-related applications using optical sensors
Authors' version of an article accepted for publication in Applied Optics, 9 May 2021
Applied Optics 60(22), pp. F50-F65, 2021
10.1364/AO.423599
null
cs.CV cs.SY eess.IV eess.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present the current state of development of the sensor-equipped car MODISSA, with which Fraunhofer IOSB realizes a configurable experimental platform for hardware evaluation and software development in the context of mobile mapping and vehicle-related safety and protection. MODISSA is based on a van that has successively been equipped with a variety of optical sensors over the past few years, and contains hardware for complete raw data acquisition, georeferencing, real-time data analysis, and immediate visualization on in-car displays. We demonstrate the capabilities of MODISSA by giving a deeper insight into experiments with its specific configuration in the scope of three different applications. Other research groups can benefit from these experiences when setting up their own mobile sensor system, especially regarding the selection of hardware and software, the knowledge of possible sources of error, and the handling of the acquired sensor data.
[ { "created": "Fri, 28 May 2021 04:21:39 GMT", "version": "v1" } ]
2021-05-31
[ [ "Borgmann", "Björn", "", "1 and 2" ], [ "Schatz", "Volker", "" ], [ "Hammer", "Marcus", "" ], [ "Hebel", "Marcus", "" ], [ "Arens", "Michael", "" ], [ "Stilla", "Uwe", "" ] ]
We present the current state of development of the sensor-equipped car MODISSA, with which Fraunhofer IOSB realizes a configurable experimental platform for hardware evaluation and software development in the context of mobile mapping and vehicle-related safety and protection. MODISSA is based on a van that has successively been equipped with a variety of optical sensors over the past few years, and contains hardware for complete raw data acquisition, georeferencing, real-time data analysis, and immediate visualization on in-car displays. We demonstrate the capabilities of MODISSA by giving a deeper insight into experiments with its specific configuration in the scope of three different applications. Other research groups can benefit from these experiences when setting up their own mobile sensor system, especially regarding the selection of hardware and software, the knowledge of possible sources of error, and the handling of the acquired sensor data.
2111.00169
Nicholas Boucher
Nicholas Boucher, Ross Anderson
Trojan Source: Invisible Vulnerabilities
To appear in the 32nd USENIX Security Symposium. Revisions: Adds 4 languages, 2 encodings, threat model, & scanning details
null
null
null
cs.CR cs.PL
http://creativecommons.org/licenses/by-nc-nd/4.0/
We present a new type of attack in which source code is maliciously encoded so that it appears different to a compiler and to the human eye. This attack exploits subtleties in text-encoding standards such as Unicode to produce source code whose tokens are logically encoded in a different order from the one in which they are displayed, leading to vulnerabilities that cannot be perceived directly by human code reviewers. 'Trojan Source' attacks, as we call them, pose an immediate threat both to first-party software and to the software supply chain across the industry. We present working examples of Trojan Source attacks in C, C++, C#, JavaScript, Java, Rust, Go, Python, SQL, Bash, Assembly, and Solidity. We propose definitive compiler-level defenses, and describe other mitigating controls that can be deployed in editors, repositories, and build pipelines while compilers are upgraded to block this attack. We document an industry-wide coordinated disclosure for these vulnerabilities; as they affect most compilers, editors, and repositories, the exercise teaches how different firms, open-source communities, and other stakeholders respond to vulnerability disclosure.
[ { "created": "Sat, 30 Oct 2021 04:05:46 GMT", "version": "v1" }, { "created": "Wed, 8 Mar 2023 15:39:03 GMT", "version": "v2" } ]
2023-03-09
[ [ "Boucher", "Nicholas", "" ], [ "Anderson", "Ross", "" ] ]
We present a new type of attack in which source code is maliciously encoded so that it appears different to a compiler and to the human eye. This attack exploits subtleties in text-encoding standards such as Unicode to produce source code whose tokens are logically encoded in a different order from the one in which they are displayed, leading to vulnerabilities that cannot be perceived directly by human code reviewers. 'Trojan Source' attacks, as we call them, pose an immediate threat both to first-party software and to the software supply chain across the industry. We present working examples of Trojan Source attacks in C, C++, C#, JavaScript, Java, Rust, Go, Python, SQL, Bash, Assembly, and Solidity. We propose definitive compiler-level defenses, and describe other mitigating controls that can be deployed in editors, repositories, and build pipelines while compilers are upgraded to block this attack. We document an industry-wide coordinated disclosure for these vulnerabilities; as they affect most compilers, editors, and repositories, the exercise teaches how different firms, open-source communities, and other stakeholders respond to vulnerability disclosure.
2403.09422
Tian Xia
Tian Xia, M\'elanie Roschewitz, Fabio De Sousa Ribeiro, Charles Jones, Ben Glocker
Mitigating attribute amplification in counterfactual image generation
null
null
null
null
cs.CV cs.AI
http://creativecommons.org/licenses/by-nc-sa/4.0/
Causal generative modelling is gaining interest in medical imaging due to its ability to answer interventional and counterfactual queries. Most work focuses on generating counterfactual images that look plausible, using auxiliary classifiers to enforce effectiveness of simulated interventions. We investigate pitfalls in this approach, discovering the issue of attribute amplification, where unrelated attributes are spuriously affected during interventions, leading to biases across protected characteristics and disease status. We show that attribute amplification is caused by the use of hard labels in the counterfactual training process and propose soft counterfactual fine-tuning to mitigate this issue. Our method substantially reduces the amplification effect while maintaining effectiveness of generated images, demonstrated on a large chest X-ray dataset. Our work makes an important advancement towards more faithful and unbiased causal modelling in medical imaging.
[ { "created": "Thu, 14 Mar 2024 14:14:47 GMT", "version": "v1" } ]
2024-03-15
[ [ "Xia", "Tian", "" ], [ "Roschewitz", "Mélanie", "" ], [ "Ribeiro", "Fabio De Sousa", "" ], [ "Jones", "Charles", "" ], [ "Glocker", "Ben", "" ] ]
Causal generative modelling is gaining interest in medical imaging due to its ability to answer interventional and counterfactual queries. Most work focuses on generating counterfactual images that look plausible, using auxiliary classifiers to enforce effectiveness of simulated interventions. We investigate pitfalls in this approach, discovering the issue of attribute amplification, where unrelated attributes are spuriously affected during interventions, leading to biases across protected characteristics and disease status. We show that attribute amplification is caused by the use of hard labels in the counterfactual training process and propose soft counterfactual fine-tuning to mitigate this issue. Our method substantially reduces the amplification effect while maintaining effectiveness of generated images, demonstrated on a large chest X-ray dataset. Our work makes an important advancement towards more faithful and unbiased causal modelling in medical imaging.
2302.02706
Taku Yamagata
Taku Yamagata, Emma L. Tonkin, Benjamin Arana Sanchez, Ian Craddock, Miquel Perello Nieto, Raul Santos-Rodriguez, Weisong Yang, Peter Flach
When the Ground Truth is not True: Modelling Human Biases in Temporal Annotations
null
null
null
null
cs.LG cs.HC
http://creativecommons.org/licenses/by/4.0/
In supervised learning, low quality annotations lead to poorly performing classification and detection models, while also rendering evaluation unreliable. This is particularly apparent on temporal data, where annotation quality is affected by multiple factors. For example, in the post-hoc self-reporting of daily activities, cognitive biases are one of the most common ingredients. In particular, reporting the start and duration of an activity after its finalisation may incorporate biases introduced by personal time perceptions, as well as the imprecision and lack of granularity due to time rounding. Here we propose a method to model human biases on temporal annotations and argue for the use of soft labels. Experimental results in synthetic data show that soft labels provide a better approximation of the ground truth for several metrics. We showcase the method on a real dataset of daily activities.
[ { "created": "Mon, 6 Feb 2023 11:08:25 GMT", "version": "v1" } ]
2023-02-07
[ [ "Yamagata", "Taku", "" ], [ "Tonkin", "Emma L.", "" ], [ "Sanchez", "Benjamin Arana", "" ], [ "Craddock", "Ian", "" ], [ "Nieto", "Miquel Perello", "" ], [ "Santos-Rodriguez", "Raul", "" ], [ "Yang", "Weisong",...
In supervised learning, low quality annotations lead to poorly performing classification and detection models, while also rendering evaluation unreliable. This is particularly apparent on temporal data, where annotation quality is affected by multiple factors. For example, in the post-hoc self-reporting of daily activities, cognitive biases are one of the most common ingredients. In particular, reporting the start and duration of an activity after its finalisation may incorporate biases introduced by personal time perceptions, as well as the imprecision and lack of granularity due to time rounding. Here we propose a method to model human biases on temporal annotations and argue for the use of soft labels. Experimental results in synthetic data show that soft labels provide a better approximation of the ground truth for several metrics. We showcase the method on a real dataset of daily activities.
1507.02159
Limin Wang
Limin Wang, Yuanjun Xiong, Zhe Wang, Yu Qiao
Towards Good Practices for Very Deep Two-Stream ConvNets
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep convolutional networks have achieved great success for object recognition in still images. However, for action recognition in videos, the improvement of deep convolutional networks is not so evident. We argue that there are two reasons that could probably explain this result. First, the current network architectures (e.g. Two-stream ConvNets) are relatively shallow compared with those very deep models in the image domain (e.g. VGGNet, GoogLeNet), and therefore their modeling capacity is constrained by their depth. Second, probably more importantly, the training datasets for action recognition are extremely small compared with the ImageNet dataset, and thus it is easy to over-fit on the training dataset. To address these issues, this report presents very deep two-stream ConvNets for action recognition, by adapting recent very deep architectures into the video domain. However, this extension is not easy, as the action recognition datasets are quite small. We design several good practices for the training of very deep two-stream ConvNets, namely (i) pre-training for both spatial and temporal nets, (ii) smaller learning rates, (iii) more data augmentation techniques, (iv) a high dropout ratio. Meanwhile, we extend the Caffe toolbox into a multi-GPU implementation with high computational efficiency and low memory consumption. We verify the performance of very deep two-stream ConvNets on the UCF101 dataset, where they achieve a recognition accuracy of $91.4\%$.
[ { "created": "Wed, 8 Jul 2015 14:00:35 GMT", "version": "v1" } ]
2015-07-09
[ [ "Wang", "Limin", "" ], [ "Xiong", "Yuanjun", "" ], [ "Wang", "Zhe", "" ], [ "Qiao", "Yu", "" ] ]
Deep convolutional networks have achieved great success for object recognition in still images. However, for action recognition in videos, the improvement of deep convolutional networks is not so evident. We argue that there are two reasons that could probably explain this result. First, the current network architectures (e.g. Two-stream ConvNets) are relatively shallow compared with those very deep models in the image domain (e.g. VGGNet, GoogLeNet), and therefore their modeling capacity is constrained by their depth. Second, probably more importantly, the training datasets for action recognition are extremely small compared with the ImageNet dataset, and thus it is easy to over-fit on the training dataset. To address these issues, this report presents very deep two-stream ConvNets for action recognition, by adapting recent very deep architectures into the video domain. However, this extension is not easy, as the action recognition datasets are quite small. We design several good practices for the training of very deep two-stream ConvNets, namely (i) pre-training for both spatial and temporal nets, (ii) smaller learning rates, (iii) more data augmentation techniques, (iv) a high dropout ratio. Meanwhile, we extend the Caffe toolbox into a multi-GPU implementation with high computational efficiency and low memory consumption. We verify the performance of very deep two-stream ConvNets on the UCF101 dataset, where they achieve a recognition accuracy of $91.4\%$.
2307.14825
Dimitri Korsch
Dimitri Korsch, Maha Shadaydeh, Joachim Denzler
Simplified Concrete Dropout -- Improving the Generation of Attribution Masks for Fine-grained Classification
Accepted at the German Conference on Pattern Recognition 2023 (GCPR 2023)
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-nd/4.0/
Fine-grained classification is a particular case of a classification problem, aiming to classify objects that share the visual appearance and can only be distinguished by subtle differences. Fine-grained classification models are often deployed to determine animal species or individuals in automated animal monitoring systems. Precise visual explanations of the model's decision are crucial to analyze systematic errors. Attention- or gradient-based methods are commonly used to identify regions in the image that contribute the most to the classification decision. These methods deliver either too coarse or too noisy explanations, unsuitable for identifying subtle visual differences reliably. However, perturbation-based methods can precisely identify pixels causally responsible for the classification result. The fill-in of the dropout (FIDO) algorithm is one of those methods. It utilizes concrete dropout (CD) to sample a set of attribution masks and updates the sampling parameters based on the output of the classification model. A known problem of the algorithm is a high variance in the gradient estimates, which the authors have mitigated until now by mini-batch updates of the sampling parameters. This paper presents a solution to circumvent these computational instabilities by simplifying the CD sampling and reducing reliance on large mini-batch sizes. First, it allows estimating the parameters with smaller mini-batch sizes without losing the quality of the estimates, but with a reduced computational effort. Furthermore, our solution produces finer and more coherent attribution masks. Finally, we use the resulting attribution masks to improve the classification performance of a trained model without additional fine-tuning of the model.
[ { "created": "Thu, 27 Jul 2023 13:01:49 GMT", "version": "v1" } ]
2023-07-28
[ [ "Korsch", "Dimitri", "" ], [ "Shadaydeh", "Maha", "" ], [ "Denzler", "Joachim", "" ] ]
Fine-grained classification is a particular case of a classification problem, aiming to classify objects that share the visual appearance and can only be distinguished by subtle differences. Fine-grained classification models are often deployed to determine animal species or individuals in automated animal monitoring systems. Precise visual explanations of the model's decision are crucial to analyze systematic errors. Attention- or gradient-based methods are commonly used to identify regions in the image that contribute the most to the classification decision. These methods deliver either too coarse or too noisy explanations, unsuitable for identifying subtle visual differences reliably. However, perturbation-based methods can precisely identify pixels causally responsible for the classification result. The fill-in of the dropout (FIDO) algorithm is one of those methods. It utilizes concrete dropout (CD) to sample a set of attribution masks and updates the sampling parameters based on the output of the classification model. A known problem of the algorithm is a high variance in the gradient estimates, which the authors have mitigated until now by mini-batch updates of the sampling parameters. This paper presents a solution to circumvent these computational instabilities by simplifying the CD sampling and reducing reliance on large mini-batch sizes. First, it allows estimating the parameters with smaller mini-batch sizes without losing the quality of the estimates, but with a reduced computational effort. Furthermore, our solution produces finer and more coherent attribution masks. Finally, we use the resulting attribution masks to improve the classification performance of a trained model without additional fine-tuning of the model.
2305.12359
B.Sundar Rajan
Navya Saxena, Anjana A. Mahesh, and B. Sundar Rajan
An Optimal Two-Step Decoding at Receivers with Side Information in PSK-Modulated Index Coding
24 pages and 7 figures
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper studies noisy index coding problems over single-input single-output broadcast channels. The codewords from a chosen index code of length $N$ are transmitted after $2^N$-PSK modulation over an AWGN channel. In "Index Coded PSK Modulation for prioritized Receivers," the authors showed that when a length-$N$ index code is transmitted as a $2^N$-PSK symbol, the ML decoder at a receiver decodes directly to the message bit rather than following the two-step decoding process of first demodulating the PSK symbol and equivalently the index-coded bits and then doing index-decoding. In this paper, we consider unprioritized receivers and follow the two-step decoding process at the receivers. After estimating the PSK symbol using an ML decoder, at a receiver, there might be more than one decoding strategy, i.e., a linear combination of index-coded bits and different subsets of side information bits, that can be used to estimate the requested message. Thomas et al. in ["Single Uniprior Index Coding With Min Max Probability of Error Over Fading Channels,"] showed that for binary-modulated index code transmissions, minimizing the number of transmissions used to decode a requested message is equivalent to minimizing the probability of error. This paper shows that this is no longer the case while employing multi-level modulations. Further, we consider that the side information available to each receiver is also noisy and derive an expression for the probability that a requested message bit is estimated erroneously at a receiver. We also show that the criterion for choosing a decoding strategy that gives the best probability of error performance at a receiver changes with the signal-to-noise ratio at which the side information is broadcast.
[ { "created": "Sun, 21 May 2023 06:06:37 GMT", "version": "v1" } ]
2023-05-23
[ [ "Saxena", "Navya", "" ], [ "Mahesh", "Anjana A.", "" ], [ "Rajan", "B. Sundar", "" ] ]
This paper studies noisy index coding problems over single-input single-output broadcast channels. The codewords from a chosen index code of length $N$ are transmitted after $2^N$-PSK modulation over an AWGN channel. In "Index Coded PSK Modulation for prioritized Receivers," the authors showed that when a length-$N$ index code is transmitted as a $2^N$-PSK symbol, the ML decoder at a receiver decodes directly to the message bit rather than following the two-step decoding process of first demodulating the PSK symbol and equivalently the index-coded bits and then doing index-decoding. In this paper, we consider unprioritized receivers and follow the two-step decoding process at the receivers. After estimating the PSK symbol using an ML decoder, at a receiver, there might be more than one decoding strategy, i.e., a linear combination of index-coded bits and different subsets of side information bits, that can be used to estimate the requested message. Thomas et al. in ["Single Uniprior Index Coding With Min Max Probability of Error Over Fading Channels,"] showed that for binary-modulated index code transmissions, minimizing the number of transmissions used to decode a requested message is equivalent to minimizing the probability of error. This paper shows that this is no longer the case while employing multi-level modulations. Further, we consider that the side information available to each receiver is also noisy and derive an expression for the probability that a requested message bit is estimated erroneously at a receiver. We also show that the criterion for choosing a decoding strategy that gives the best probability of error performance at a receiver changes with the signal-to-noise ratio at which the side information is broadcast.
1510.06375
Shuo Li
Xiantong Zhen, Shuo Li
Towards Direct Medical Image Analysis without Segmentation
2 pages perspective
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Direct methods have recently emerged as an effective and efficient tool in automated medical image analysis and have become a trend for solving diverse challenging tasks in clinical practice. Compared to traditional methods, direct methods are of much more clinical significance by directly targeting the final clinical goal rather than relying on any intermediate steps. These intermediate steps, e.g., segmentation, registration and tracking, are actually not necessary and are limited to very constrained tasks far from being used in practical clinical applications; moreover, they are computationally expensive and time-consuming, which causes a high waste of research resources. The advantages of direct methods stem from \textbf{1)} removal of intermediate steps, e.g., segmentation, tracking and registration; \textbf{2)} avoidance of user inputs and initialization; \textbf{3)} reformulation of conventional challenging problems, e.g., the inversion problem, with efficient solutions.
[ { "created": "Wed, 21 Oct 2015 19:16:34 GMT", "version": "v1" } ]
2015-10-22
[ [ "Zhen", "Xiantong", "" ], [ "Li", "Shuo", "" ] ]
Direct methods have recently emerged as an effective and efficient tool in automated medical image analysis and have become a trend for solving diverse challenging tasks in clinical practice. Compared to traditional methods, direct methods are of much more clinical significance by directly targeting the final clinical goal rather than relying on any intermediate steps. These intermediate steps, e.g., segmentation, registration and tracking, are actually not necessary and are limited to very constrained tasks far from being used in practical clinical applications; moreover, they are computationally expensive and time-consuming, which causes a high waste of research resources. The advantages of direct methods stem from \textbf{1)} removal of intermediate steps, e.g., segmentation, tracking and registration; \textbf{2)} avoidance of user inputs and initialization; \textbf{3)} reformulation of conventional challenging problems, e.g., the inversion problem, with efficient solutions.
2209.00062
Shehan Munasinghe
Bimsara Pathiraja, Shehan Munasinghe, Malshan Ranawella, Maleesha De Silva, Ranga Rodrigo, Peshala Jayasekara
Class-Aware Attention for Multimodal Trajectory Prediction
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Predicting the possible future trajectories of the surrounding dynamic agents is an essential requirement in autonomous driving. These trajectories mainly depend on the surrounding static environment, as well as the past movements of those dynamic agents. Furthermore, the multimodal nature of agent intentions makes the trajectory prediction problem more challenging. All of the existing models treat the target agent and the surrounding agents similarly, without considering the variation of their physical properties. In this paper, we present a novel deep-learning based framework for multimodal trajectory prediction in autonomous driving, which considers the physical properties of the target and surrounding vehicles, such as the object class and their physical dimensions, through a weighted attention module, which improves the accuracy of the predictions. Our model has achieved the highest results in the nuScenes trajectory prediction benchmark among the models which use rasterized maps to input environment information. Furthermore, our model is able to run in real-time, achieving a high inference rate of over 300 FPS.
[ { "created": "Wed, 31 Aug 2022 18:43:23 GMT", "version": "v1" } ]
2022-09-02
[ [ "Pathiraja", "Bimsara", "" ], [ "Munasinghe", "Shehan", "" ], [ "Ranawella", "Malshan", "" ], [ "De Silva", "Maleesha", "" ], [ "Rodrigo", "Ranga", "" ], [ "Jayasekara", "Peshala", "" ] ]
Predicting the possible future trajectories of the surrounding dynamic agents is an essential requirement in autonomous driving. These trajectories mainly depend on the surrounding static environment, as well as the past movements of those dynamic agents. Furthermore, the multimodal nature of agent intentions makes the trajectory prediction problem more challenging. All of the existing models treat the target agent and the surrounding agents similarly, without considering the variation of their physical properties. In this paper, we present a novel deep-learning based framework for multimodal trajectory prediction in autonomous driving, which considers the physical properties of the target and surrounding vehicles, such as the object class and their physical dimensions, through a weighted attention module, which improves the accuracy of the predictions. Our model has achieved the highest results in the nuScenes trajectory prediction benchmark among the models which use rasterized maps to input environment information. Furthermore, our model is able to run in real-time, achieving a high inference rate of over 300 FPS.
1508.01600
Jonathan Sterling
Jonathan Sterling
Remark on the hypothetical judgment
null
null
null
null
cs.LO math.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
What is the proper explanation of intuitionistic hypothetical judgment, and thence propositional implication? The answer is unclear from the writings of Brouwer and Heyting, who in their lifetimes propounded multiple (sometimes conflicting) explanations of the hypothetical judgment. To my mind, the determination of an acceptable explanation must take into account its adequacy for the expression of the bar theorem and, more generally, the development of an open-ended framework for transcendental arguments in mathematics.
[ { "created": "Fri, 7 Aug 2015 04:36:53 GMT", "version": "v1" }, { "created": "Mon, 10 Aug 2015 04:00:21 GMT", "version": "v2" }, { "created": "Tue, 11 Aug 2015 04:20:29 GMT", "version": "v3" }, { "created": "Wed, 25 Nov 2015 22:34:23 GMT", "version": "v4" } ]
2015-11-30
[ [ "Sterling", "Jonathan", "" ] ]
What is the proper explanation of intuitionistic hypothetical judgment, and thence propositional implication? The answer is unclear from the writings of Brouwer and Heyting, who in their lifetimes propounded multiple (sometimes conflicting) explanations of the hypothetical judgment. To my mind, the determination of an acceptable explanation must take into account its adequacy for the expression of the bar theorem and, more generally, the development of an open-ended framework for transcendental arguments in mathematics.
1702.04942
Nicola Caon
Nicola Caon, Antonio Dorta, Juan Carlos Trelles Arjona
Benchmarking the computing resources at the Instituto de Astrof\'isica de Canarias
null
null
null
null
cs.PF
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The aim of this study is the characterization of the computing resources used by researchers at the "Instituto de Astrof\'isica de Canarias" (IAC). Since there is a huge demand for computing time and we use tools such as HTCondor to implement High Throughput Computing (HTC) across all available PCs, it is essential for us to assess in a quantitative way, using objective parameters, the performances of our computing nodes. In order to achieve that, we have run a set of benchmark tests on a number of different desktop and laptop PC models among those used in our institution. In particular, we ran the "Polyhedron Fortran Benchmarks" suite, using three different compilers: GNU Fortran Compiler, Intel Fortran Compiler and the PGI Fortran Compiler; execution times are then normalized to the reference values published by Polyhedron. The same tests were run multiple times on the same PC, and on 3 to 5 PCs of the same model (whenever possible) to check for repeatability and consistency of the results. We found that in general execution times, for a given PC model, are consistent within an uncertainty of about 10%, and show a gain in CPU speed of a factor of about 3 between the oldest PCs used at the IAC (7-8 years old) and the newest ones.
[ { "created": "Thu, 16 Feb 2017 12:21:11 GMT", "version": "v1" } ]
2017-02-17
[ [ "Caon", "Nicola", "" ], [ "Dorta", "Antonio", "" ], [ "Arjona", "Juan Carlos Trelles", "" ] ]
The aim of this study is the characterization of the computing resources used by researchers at the "Instituto de Astrof\'isica de Canarias" (IAC). Since there is a huge demand for computing time and we use tools such as HTCondor to implement High Throughput Computing (HTC) across all available PCs, it is essential for us to assess in a quantitative way, using objective parameters, the performances of our computing nodes. In order to achieve that, we have run a set of benchmark tests on a number of different desktop and laptop PC models among those used in our institution. In particular, we ran the "Polyhedron Fortran Benchmarks" suite, using three different compilers: GNU Fortran Compiler, Intel Fortran Compiler and the PGI Fortran Compiler; execution times are then normalized to the reference values published by Polyhedron. The same tests were run multiple times on the same PC, and on 3 to 5 PCs of the same model (whenever possible) to check for repeatability and consistency of the results. We found that in general execution times, for a given PC model, are consistent within an uncertainty of about 10%, and show a gain in CPU speed of a factor of about 3 between the oldest PCs used at the IAC (7-8 years old) and the newest ones.
2302.08244
Jose Alberto Hernandez
Oscar Gonzalez de Dios and Ramon Casellas and Filippo Cugini and Jose Alberto Hernandez
Beyond 5G Domainless Network Operation enabled by Multiband: Toward Optical Continuum Architectures
null
null
null
null
cs.NI
http://creativecommons.org/licenses/by-sa/4.0/
Both public and private innovation projects are targeting the design, prototyping and demonstration of a novel end-to-end integrated packet-optical transport architecture based on Multi-Band (MB) optical transmission and switching networks. Essentially, MB is expected to be the next technological evolution to deal with the traffic demand and service requirements of 5G mobile networks, and beyond, in the most cost-effective manner. Thanks to MB transmission, classical telco architectures segmented into hierarchical levels and domains can move forward toward an optical network continuum, where edge access nodes are all-optically interconnected with top-hierarchical nodes, interfacing Content Delivery Networks (CDN) and Internet Exchange Points (IXP). This article overviews the technological challenges and innovation requirements to enable such an architectural shift of telco networks, from both the data plane and the control and management planes.
[ { "created": "Thu, 16 Feb 2023 11:54:53 GMT", "version": "v1" } ]
2023-02-17
[ [ "de Dios", "Oscar Gonzalez", "" ], [ "Casellas", "Ramon", "" ], [ "Cugini", "Filippo", "" ], [ "Hernandez", "Jose Alberto", "" ] ]
Both public and private innovation projects are targeting the design, prototyping and demonstration of a novel end-to-end integrated packet-optical transport architecture based on Multi-Band (MB) optical transmission and switching networks. Essentially, MB is expected to be the next technological evolution to deal with the traffic demand and service requirements of 5G mobile networks, and beyond, in the most cost-effective manner. Thanks to MB transmission, classical telco architectures segmented into hierarchical levels and domains can move forward toward an optical network continuum, where edge access nodes are all-optically interconnected with top-hierarchical nodes, interfacing Content Delivery Networks (CDN) and Internet Exchange Points (IXP). This article overviews the technological challenges and innovation requirements to enable such an architectural shift of telco networks, from both the data plane and the control and management planes.
2404.13523
Martin Pfaller
Martin R. Pfaller, Marcos Latorre, Erica L. Schwarz, Fannie M. Gerosa, Jason M. Szafron, Jay D. Humphrey, Alison L. Marsden
FSGe: A fast and strongly-coupled 3D fluid-solid-growth interaction method
null
null
10.1016/j.cma.2024.117259
null
cs.CE
http://creativecommons.org/licenses/by/4.0/
Equilibrated fluid-solid-growth (FSGe) is a fast, open source, three-dimensional (3D) computational platform for simulating interactions between instantaneous hemodynamics and long-term vessel wall adaptation through mechanobiologically equilibrated growth and remodeling (G&R). Such models can capture evolving geometry, composition, and material properties in health and disease and following clinical interventions. In traditional G&R models, this feedback is modeled through highly simplified fluid solutions, neglecting local variations in blood pressure and wall shear stress (WSS). FSGe overcomes these inherent limitations by strongly coupling the 3D Navier-Stokes equations for blood flow with a 3D equilibrated constrained mixture model (CMMe) for vascular tissue G&R. CMMe allows one to predict long-term evolved mechanobiological equilibria from an original homeostatic state at a computational cost equivalent to that of a standard hyperelastic material model. In illustrative computational examples, we focus on the development of a stable aortic aneurysm in a mouse model to highlight key differences in growth patterns between FSGe and solid-only G&R models. We show that FSGe is especially important in blood vessels with asymmetric stimuli. Simulation results reveal greater local variation in fluid-derived WSS than in intramural stress (IMS). Thus, differences between FSGe and G&R models became more pronounced with the growing influence of WSS relative to pressure. Future applications in highly localized disease processes, such as for lesion formation in atherosclerosis, can now include spatial and temporal variations of WSS.
[ { "created": "Sun, 21 Apr 2024 04:08:24 GMT", "version": "v1" }, { "created": "Mon, 15 Jul 2024 00:42:19 GMT", "version": "v2" }, { "created": "Thu, 8 Aug 2024 12:50:55 GMT", "version": "v3" } ]
2024-08-12
[ [ "Pfaller", "Martin R.", "" ], [ "Latorre", "Marcos", "" ], [ "Schwarz", "Erica L.", "" ], [ "Gerosa", "Fannie M.", "" ], [ "Szafron", "Jason M.", "" ], [ "Humphrey", "Jay D.", "" ], [ "Marsden", "Alison L.", ...
Equilibrated fluid-solid-growth (FSGe) is a fast, open source, three-dimensional (3D) computational platform for simulating interactions between instantaneous hemodynamics and long-term vessel wall adaptation through mechanobiologically equilibrated growth and remodeling (G&R). Such models can capture evolving geometry, composition, and material properties in health and disease and following clinical interventions. In traditional G&R models, this feedback is modeled through highly simplified fluid solutions, neglecting local variations in blood pressure and wall shear stress (WSS). FSGe overcomes these inherent limitations by strongly coupling the 3D Navier-Stokes equations for blood flow with a 3D equilibrated constrained mixture model (CMMe) for vascular tissue G&R. CMMe allows one to predict long-term evolved mechanobiological equilibria from an original homeostatic state at a computational cost equivalent to that of a standard hyperelastic material model. In illustrative computational examples, we focus on the development of a stable aortic aneurysm in a mouse model to highlight key differences in growth patterns between FSGe and solid-only G&R models. We show that FSGe is especially important in blood vessels with asymmetric stimuli. Simulation results reveal greater local variation in fluid-derived WSS than in intramural stress (IMS). Thus, differences between FSGe and G&R models became more pronounced with the growing influence of WSS relative to pressure. Future applications in highly localized disease processes, such as for lesion formation in atherosclerosis, can now include spatial and temporal variations of WSS.
2007.13960
Wei Jing
En Yen Puang and Keng Peng Tee and Wei Jing
KOVIS: Keypoint-based Visual Servoing with Zero-Shot Sim-to-Real Transfer for Robotics Manipulation
Accepted by IROS 2020
null
10.1109/IROS45743.2020.9341370
null
cs.RO cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present KOVIS, a novel learning-based, calibration-free visual servoing method for fine robotic manipulation tasks with an eye-in-hand stereo camera system. We train the deep neural network only in the simulated environment, and the trained model can be directly used for real-world visual servoing tasks. KOVIS consists of two networks. The first, a keypoint network, learns the keypoint representation from the image using an autoencoder. Then the visual servoing network learns the motion based on keypoints extracted from the camera image. The two networks are trained end-to-end in the simulated environment by self-supervised learning without manual data labeling. After training with data augmentation, domain randomization, and adversarial examples, we are able to achieve zero-shot sim-to-real transfer to real-world robotic manipulation tasks. We demonstrate the effectiveness of the proposed method in both simulated environments and real-world experiments with different robotic manipulation tasks, including grasping, peg-in-hole insertion with 4mm clearance, and M13 screw insertion. The demo video is available at http://youtu.be/gfBJBR2tDzA
[ { "created": "Tue, 28 Jul 2020 02:53:28 GMT", "version": "v1" } ]
2022-04-27
[ [ "Puang", "En Yen", "" ], [ "Tee", "Keng Peng", "" ], [ "Jing", "Wei", "" ] ]
We present KOVIS, a novel learning-based, calibration-free visual servoing method for fine robotic manipulation tasks with an eye-in-hand stereo camera system. We train the deep neural network only in the simulated environment, and the trained model can be directly used for real-world visual servoing tasks. KOVIS consists of two networks. The first, a keypoint network, learns the keypoint representation from the image using an autoencoder. Then the visual servoing network learns the motion based on keypoints extracted from the camera image. The two networks are trained end-to-end in the simulated environment by self-supervised learning without manual data labeling. After training with data augmentation, domain randomization, and adversarial examples, we are able to achieve zero-shot sim-to-real transfer to real-world robotic manipulation tasks. We demonstrate the effectiveness of the proposed method in both simulated environments and real-world experiments with different robotic manipulation tasks, including grasping, peg-in-hole insertion with 4mm clearance, and M13 screw insertion. The demo video is available at http://youtu.be/gfBJBR2tDzA
1910.13821
Sajad Daei Omshi
Maral Safari, Sajad Daei, Farzan Haddadi
Off-the-grid Recovery of Time and Frequency Shifts with Multiple Measurement Vectors
Suggestions of Professor Reinhard Heckel have been applied
Signal Processing, 2021
10.1016/j.sigpro.2021.108016
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We address the problem of estimating time and frequency shifts of a known waveform in the presence of multiple measurement vectors (MMVs). This problem naturally arises in radar imaging and wireless communications. Specifically, a signal ensemble is observed, where each signal of the ensemble is formed by a superposition of a small number of scaled, time-delayed, and frequency-shifted versions of a known waveform sharing the same continuous-valued time and frequency components. The goal is to recover the continuous-valued time-frequency pairs from a small number of observations. In this work, we propose a semidefinite program which exactly recovers $s$ pairs of time-frequency shifts from $L$ regularly spaced samples per measurement vector under a minimum separation condition between the time-frequency shifts. Moreover, we prove that the number $s$ of time-frequency shifts scales linearly with the number $L$ of samples up to a log factor. Extensive numerical results are also provided to validate the effectiveness of the proposed method over the single measurement vector (SMV) problem. In particular, we find that our approach leads to a relaxed minimum separation condition and a reduced number of required samples.
[ { "created": "Wed, 30 Oct 2019 13:02:41 GMT", "version": "v1" }, { "created": "Sat, 30 Jan 2021 16:12:30 GMT", "version": "v2" }, { "created": "Fri, 26 Feb 2021 08:15:04 GMT", "version": "v3" } ]
2021-03-01
[ [ "Safari", "Maral", "" ], [ "Daei", "Sajad", "" ], [ "Haddadi", "Farzan", "" ] ]
We address the problem of estimating time and frequency shifts of a known waveform in the presence of multiple measurement vectors (MMVs). This problem naturally arises in radar imaging and wireless communications. Specifically, a signal ensemble is observed, where each signal of the ensemble is formed by a superposition of a small number of scaled, time-delayed, and frequency-shifted versions of a known waveform sharing the same continuous-valued time and frequency components. The goal is to recover the continuous-valued time-frequency pairs from a small number of observations. In this work, we propose a semidefinite program which exactly recovers $s$ pairs of time-frequency shifts from $L$ regularly spaced samples per measurement vector under a minimum separation condition between the time-frequency shifts. Moreover, we prove that the number $s$ of time-frequency shifts scales linearly with the number $L$ of samples up to a log factor. Extensive numerical results are also provided to validate the effectiveness of the proposed method over the single measurement vector (SMV) problem. In particular, we find that our approach leads to a relaxed minimum separation condition and a reduced number of required samples.
2212.04072
Azade Mohammadi
Azade Mohammadi (1), Reza Ramezani (2), Ahmad Baraani (3) ((1) Ph.D student in University of Isfahan, (2) Assistant Professor in University of Isfahan, (3) Professor of Computer Engineering in University of Isfahan)
A Comprehensive Survey on Multi-hop Machine Reading Comprehension Approaches
35 pages, 46 figures, 3 tables. Under review at ACM Computing Surveys journal
null
null
null
cs.CL
http://creativecommons.org/licenses/by-nc-sa/4.0/
Machine reading comprehension (MRC) is a long-standing topic in natural language processing (NLP). The MRC task aims to answer a question based on a given context. Recent studies focus on multi-hop MRC, a more challenging extension of MRC in which answering a question requires combining disjoint pieces of information across the context. Due to the complexity and importance of multi-hop MRC, a large number of studies have focused on this topic in recent years, making it necessary and worthwhile to review the related literature. This study investigates recent advances in multi-hop MRC approaches based on 31 studies from 2018 to 2022. First, the multi-hop MRC problem definition is introduced; then the 31 models are reviewed in detail with a strong focus on their multi-hop aspects and categorized based on their main techniques. Finally, a fine-grained comprehensive comparison of the models and techniques is presented.
[ { "created": "Thu, 8 Dec 2022 04:51:54 GMT", "version": "v1" } ]
2022-12-09
[ [ "Mohammadi", "Azade", "" ], [ "Ramezani", "Reza", "" ], [ "Baraani", "Ahmad", "" ] ]
Machine reading comprehension (MRC) is a long-standing topic in natural language processing (NLP). The MRC task aims to answer a question based on a given context. Recent studies focus on multi-hop MRC, a more challenging extension of MRC in which answering a question requires combining disjoint pieces of information across the context. Due to the complexity and importance of multi-hop MRC, a large number of studies have focused on this topic in recent years, making it necessary and worthwhile to review the related literature. This study investigates recent advances in multi-hop MRC approaches based on 31 studies from 2018 to 2022. First, the multi-hop MRC problem definition is introduced; then the 31 models are reviewed in detail with a strong focus on their multi-hop aspects and categorized based on their main techniques. Finally, a fine-grained comprehensive comparison of the models and techniques is presented.
1411.2443
Bo-Kai Hsu
Bo-Kai Hsu, Chia-Han Lee, and Ping-Cheng Yeh
On Timing Synchronization for Quantity-based Modulation in Additive Inverse Gaussian Channel with Drift
8 pages, 9 figures
null
null
null
cs.ET
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In diffusion-based molecular communications, the channel between a Transmitter Nano-machine (TN) and a Receiver Nano-machine (RN) can be modeled by the Additive Inverse Gaussian Channel; that is, the first hitting time of a messenger molecule released from the TN and captured by the RN follows an Inverse Gaussian distribution. In this channel, a quantity-based modulation that embeds the message in different quantity levels of messenger molecules relies on a time-slotted system between the TN and RN. Accordingly, their clocks need to be synchronized with each other. In this paper, we discuss approaches that allow the RN to efficiently estimate its timing offset relative to the TN from the arrival times of molecules. We propose several methods, namely Maximum Likelihood Estimation (MLE), Unbiased Linear Estimation (ULE), Iterative ULE, and Decision Feedback (DF), and compare them numerically. We evaluate these methods not only by the mean square error but also by computational complexity.
[ { "created": "Mon, 10 Nov 2014 14:48:36 GMT", "version": "v1" } ]
2014-11-11
[ [ "Hsu", "Bo-Kai", "" ], [ "Lee", "Chia-Han", "" ], [ "Yeh", "Ping-Cheng", "" ] ]
In diffusion-based molecular communications, the channel between a Transmitter Nano-machine (TN) and a Receiver Nano-machine (RN) can be modeled by the Additive Inverse Gaussian Channel; that is, the first hitting time of a messenger molecule released from the TN and captured by the RN follows an Inverse Gaussian distribution. In this channel, a quantity-based modulation that embeds the message in different quantity levels of messenger molecules relies on a time-slotted system between the TN and RN. Accordingly, their clocks need to be synchronized with each other. In this paper, we discuss approaches that allow the RN to efficiently estimate its timing offset relative to the TN from the arrival times of molecules. We propose several methods, namely Maximum Likelihood Estimation (MLE), Unbiased Linear Estimation (ULE), Iterative ULE, and Decision Feedback (DF), and compare them numerically. We evaluate these methods not only by the mean square error but also by computational complexity.
2208.06838
HaoYuan He
Haoyuan He, Wangzhou Dai, Ming Li
Reduced Implication-bias Logic Loss for Neuro-Symbolic Learning
ACML'2023 Journal Track (Accepted by Machine Learning Journal)
null
null
null
cs.AI cs.LG cs.LO
http://creativecommons.org/licenses/by/4.0/
Integrating logical reasoning and machine learning by approximating logical inference with differentiable operators is a widely used technique in Neuro-Symbolic systems. However, some differentiable operators can introduce a significant bias during backpropagation and degrade the performance of Neuro-Symbolic learning. In this paper, we reveal that this bias, named \textit{Implication Bias}, is common in loss functions derived from fuzzy logic operators. Furthermore, we propose a simple yet effective method to transform the biased loss functions into \textit{Reduced Implication-bias Logic Loss (RILL)} to address the above problem. Empirical study shows that RILL can achieve significant improvements compared with the biased logic loss functions, especially when the knowledge base is incomplete, and remains more robust than the compared methods when labelled data is insufficient.
[ { "created": "Sun, 14 Aug 2022 11:57:46 GMT", "version": "v1" }, { "created": "Mon, 25 Sep 2023 10:26:35 GMT", "version": "v2" } ]
2023-09-26
[ [ "He", "Haoyuan", "" ], [ "Dai", "Wangzhou", "" ], [ "Li", "Ming", "" ] ]
Integrating logical reasoning and machine learning by approximating logical inference with differentiable operators is a widely used technique in Neuro-Symbolic systems. However, some differentiable operators can introduce a significant bias during backpropagation and degrade the performance of Neuro-Symbolic learning. In this paper, we reveal that this bias, named \textit{Implication Bias}, is common in loss functions derived from fuzzy logic operators. Furthermore, we propose a simple yet effective method to transform the biased loss functions into \textit{Reduced Implication-bias Logic Loss (RILL)} to address the above problem. Empirical study shows that RILL can achieve significant improvements compared with the biased logic loss functions, especially when the knowledge base is incomplete, and remains more robust than the compared methods when labelled data is insufficient.