column           type            min    max
id               stringlengths   9      10
submitter        stringlengths   1      64
authors          stringlengths   4      20.7k
title            stringlengths   4      246
comments         stringlengths   1      523
journal-ref      stringlengths   4      404
doi              stringlengths   11     153
report-no        stringlengths   2      254
categories       stringlengths   5      98
license          stringclasses   9 values
orig_abstract    stringlengths   14     3.35k
versions         listlengths     1      60
update_date      stringlengths   10     10
authors_parsed   listlengths     1      1.35k
abstract         stringlengths   11     3.34k
2406.05506
Lior Limonad
Fabiana Fournier, Lior Limonad, Inna Skarbovsky
Towards a Benchmark for Causal Business Process Reasoning with LLMs
12 pages, 1 figure
NLP4BPM workshop at BPM 2024
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
Large Language Models (LLMs) are increasingly used to boost organizational efficiency and automate tasks. Although not originally designed for complex cognitive processes, recent efforts have extended to employing LLMs in activities such as reasoning, planning, and decision-making. In business processes, such abilities could be invaluable for leveraging the massive corpora on which LLMs have been trained to gain a deep understanding of such processes. In this work, we plant the seeds for the development of a benchmark to assess the ability of LLMs to reason about causal and process perspectives of business operations. We refer to this view as Causally-augmented Business Processes (BP^C). The core of the benchmark comprises a set of BP^C-related situations, a set of questions about these situations, and a set of deductive rules employed to systematically resolve the ground-truth answers to these questions. With the power of LLMs, the seed is then instantiated into a larger-scale set of domain-specific situations and questions. Reasoning on BP^C is of crucial importance for process interventions and process improvement. Our benchmark, accessible at https://huggingface.co/datasets/ibm/BPC, can be used in one of two possible modalities: testing the performance of any target LLM and training an LLM to advance its capability to reason about BP^C.
[ { "created": "Sat, 8 Jun 2024 16:10:53 GMT", "version": "v1" }, { "created": "Tue, 16 Jul 2024 15:48:32 GMT", "version": "v2" } ]
2024-08-13
[ [ "Fournier", "Fabiana", "" ], [ "Limonad", "Lior", "" ], [ "Skarbovsky", "Inna", "" ] ]
Large Language Models (LLMs) are increasingly used to boost organizational efficiency and automate tasks. Although not originally designed for complex cognitive processes, recent efforts have extended to employing LLMs in activities such as reasoning, planning, and decision-making. In business processes, such abilities could be invaluable for leveraging the massive corpora on which LLMs have been trained to gain a deep understanding of such processes. In this work, we plant the seeds for the development of a benchmark to assess the ability of LLMs to reason about causal and process perspectives of business operations. We refer to this view as Causally-augmented Business Processes (BP^C). The core of the benchmark comprises a set of BP^C-related situations, a set of questions about these situations, and a set of deductive rules employed to systematically resolve the ground-truth answers to these questions. With the power of LLMs, the seed is then instantiated into a larger-scale set of domain-specific situations and questions. Reasoning on BP^C is of crucial importance for process interventions and process improvement. Our benchmark, accessible at https://huggingface.co/datasets/ibm/BPC, can be used in one of two possible modalities: testing the performance of any target LLM and training an LLM to advance its capability to reason about BP^C.
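The abstract names the dataset's location on the Hugging Face Hub, so it can be pulled directly with the `datasets` library. A minimal sketch; the splits and fields are printed rather than assumed, since they are not spelled out here:

```python
from datasets import load_dataset  # pip install datasets

# Load the benchmark from the repository id given in the abstract.
bpc = load_dataset("ibm/BPC")
print(bpc)                            # available splits and features
first_split = next(iter(bpc.values()))
print(first_split[0])                 # one situation/question record
```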
cs/0703061
Ralf Koetter
Ralf Koetter and Frank Kschischang
Coding for Errors and Erasures in Random Network Coding
This revised paper contains some minor changes and clarifications
null
null
null
cs.IT cs.NI math.IT
null
The problem of error-control in random linear network coding is considered. A ``noncoherent'' or ``channel oblivious'' model is assumed where neither transmitter nor receiver is assumed to have knowledge of the channel transfer characteristic. Motivated by the property that linear network coding is vector-space preserving, information transmission is modelled as the injection into the network of a basis for a vector space $V$ and the collection by the receiver of a basis for a vector space $U$. A metric on the projective geometry associated with the packet space is introduced, and it is shown that a minimum distance decoder for this metric achieves correct decoding if the dimension of the space $V \cap U$ is sufficiently large. If the dimension of each codeword is restricted to a fixed integer, the code forms a subset of a finite-field Grassmannian, or, equivalently, a subset of the vertices of the corresponding Grassmann graph. Sphere-packing and sphere-covering bounds as well as a generalization of the Singleton bound are provided for such codes. Finally, a Reed-Solomon-like code construction, related to Gabidulin's construction of maximum rank-distance codes, is described and a Sudan-style ``list-1'' minimum distance decoding algorithm is provided.
[ { "created": "Tue, 13 Mar 2007 07:43:46 GMT", "version": "v1" }, { "created": "Tue, 25 Mar 2008 16:29:01 GMT", "version": "v2" } ]
2008-03-25
[ [ "Koetter", "Ralf", "" ], [ "Kschischang", "Frank", "" ] ]
The problem of error-control in random linear network coding is considered. A ``noncoherent'' or ``channel oblivious'' model is assumed where neither transmitter nor receiver is assumed to have knowledge of the channel transfer characteristic. Motivated by the property that linear network coding is vector-space preserving, information transmission is modelled as the injection into the network of a basis for a vector space $V$ and the collection by the receiver of a basis for a vector space $U$. A metric on the projective geometry associated with the packet space is introduced, and it is shown that a minimum distance decoder for this metric achieves correct decoding if the dimension of the space $V \cap U$ is sufficiently large. If the dimension of each codeword is restricted to a fixed integer, the code forms a subset of a finite-field Grassmannian, or, equivalently, a subset of the vertices of the corresponding Grassmann graph. Sphere-packing and sphere-covering bounds as well as a generalization of the Singleton bound are provided for such codes. Finally, a Reed-Solomon-like code construction, related to Gabidulin's construction of maximum rank-distance codes, is described and a Sudan-style ``list-1'' minimum distance decoding algorithm is provided.
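The metric referred to here is the subspace distance d(U, V) = dim(U + V) − dim(U ∩ V) = dim U + dim V − 2 dim(U ∩ V). A toy sketch over GF(2), with basis vectors written as 0/1 lists; this illustrates the metric only, not the code construction:

```python
def rank_gf2(rows):
    """Rank of a set of 0/1 vectors over GF(2), via Gaussian elimination
    on rows packed into Python ints."""
    m = [int("".join(map(str, r)), 2) for r in rows]
    rank = 0
    for bit in reversed(range(len(rows[0]))):
        pivot = next((r for r in m if (r >> bit) & 1), None)
        if pivot is None:
            continue
        m.remove(pivot)
        m = [r ^ pivot if (r >> bit) & 1 else r for r in m]
        rank += 1
    return rank

def subspace_distance(U, V):
    """d(U, V) = 2*dim(U+V) - dim(U) - dim(V); U, V are lists of basis rows,
    and dim(U+V) is the rank of the stacked bases."""
    return 2 * rank_gf2(U + V) - rank_gf2(U) - rank_gf2(V)

U = [[1, 0, 0, 0], [0, 1, 0, 0]]
V = [[1, 0, 0, 0], [0, 0, 1, 0]]
print(subspace_distance(U, V))  # -> 2, since dim(U ∩ V) = 1
```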
2303.00865
Ramin Nakhli
Ramin Nakhli, Puria Azadi Moghadam, Haoyang Mi, Hossein Farahani, Alexander Baras, Blake Gilks, Ali Bashashati
AMIGO: Sparse Multi-Modal Graph Transformer with Shared-Context Processing for Representation Learning of Giga-pixel Images
Accepted at CVPR 2023
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-nd/4.0/
Processing giga-pixel whole slide histopathology images (WSI) is a computationally expensive task. Multiple instance learning (MIL) has become the conventional approach to process WSIs, in which these images are split into smaller patches for further processing. However, MIL-based techniques ignore explicit information about the individual cells within a patch. In this paper, by defining the novel concept of shared-context processing, we designed a multi-modal Graph Transformer (AMIGO) that uses the cellular graph within the tissue to provide a single representation for a patient while taking advantage of the hierarchical structure of the tissue, enabling a dynamic focus between cell-level and tissue-level information. We benchmarked the performance of our model against multiple state-of-the-art methods in survival prediction and showed that ours can significantly outperform all of them, including hierarchical Vision Transformer (ViT). More importantly, we show that our model is strongly robust to missing information, to the extent that it can achieve the same performance with as little as 20% of the data. Finally, in two different cancer datasets, we demonstrated that our model was able to stratify the patients into low-risk and high-risk groups while other state-of-the-art methods failed to achieve this goal. We also publish a large dataset of immunohistochemistry images (InUIT) containing 1,600 tissue microarray (TMA) cores from 188 patients along with their survival information, making it one of the largest publicly available datasets in this context.
[ { "created": "Wed, 1 Mar 2023 23:37:45 GMT", "version": "v1" }, { "created": "Wed, 5 Jul 2023 13:25:47 GMT", "version": "v2" } ]
2023-07-06
[ [ "Nakhli", "Ramin", "" ], [ "Moghadam", "Puria Azadi", "" ], [ "Mi", "Haoyang", "" ], [ "Farahani", "Hossein", "" ], [ "Baras", "Alexander", "" ], [ "Gilks", "Blake", "" ], [ "Bashashati", "Ali", "" ] ]
Processing giga-pixel whole slide histopathology images (WSI) is a computationally expensive task. Multiple instance learning (MIL) has become the conventional approach to process WSIs, in which these images are split into smaller patches for further processing. However, MIL-based techniques ignore explicit information about the individual cells within a patch. In this paper, by defining the novel concept of shared-context processing, we designed a multi-modal Graph Transformer (AMIGO) that uses the cellular graph within the tissue to provide a single representation for a patient while taking advantage of the hierarchical structure of the tissue, enabling a dynamic focus between cell-level and tissue-level information. We benchmarked the performance of our model against multiple state-of-the-art methods in survival prediction and showed that ours can significantly outperform all of them, including hierarchical Vision Transformer (ViT). More importantly, we show that our model is strongly robust to missing information, to the extent that it can achieve the same performance with as little as 20% of the data. Finally, in two different cancer datasets, we demonstrated that our model was able to stratify the patients into low-risk and high-risk groups while other state-of-the-art methods failed to achieve this goal. We also publish a large dataset of immunohistochemistry images (InUIT) containing 1,600 tissue microarray (TMA) cores from 188 patients along with their survival information, making it one of the largest publicly available datasets in this context.
1003.3418
John Fearnley
John Fearnley
Exponential Lower Bounds For Policy Iteration
null
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study policy iteration for infinite-horizon Markov decision processes. It has recently been shown that policy-iteration-style algorithms have exponential lower bounds in a two-player game setting. We extend these lower bounds to Markov decision processes with the total-reward and average-reward optimality criteria.
[ { "created": "Wed, 17 Mar 2010 17:48:58 GMT", "version": "v1" } ]
2010-03-18
[ [ "Fearnley", "John", "" ] ]
We study policy iteration for infinite-horizon Markov decision processes. It has recently been shown that policy-iteration-style algorithms have exponential lower bounds in a two-player game setting. We extend these lower bounds to Markov decision processes with the total-reward and average-reward optimality criteria.
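For reference, the algorithm whose lower bounds are studied here is classical. A minimal sketch of policy iteration on a finite MDP follows, using the discounted criterion for simplicity; the paper itself concerns the total-reward and average-reward criteria:

```python
import numpy as np

def policy_iteration(P, R, gamma=0.95):
    """Policy iteration for a finite MDP.
    P[a][s, s'] : transition probabilities, R[a][s] : expected reward.
    Returns an optimal policy and its value under the discounted criterion."""
    n_actions, n_states = len(P), P[0].shape[0]
    policy = np.zeros(n_states, dtype=int)
    while True:
        # Policy evaluation: solve (I - gamma * P_pi) v = r_pi exactly.
        P_pi = np.array([P[policy[s]][s] for s in range(n_states)])
        r_pi = np.array([R[policy[s]][s] for s in range(n_states)])
        v = np.linalg.solve(np.eye(n_states) - gamma * P_pi, r_pi)
        # Policy improvement: greedy one-step lookahead.
        q = np.array([R[a] + gamma * P[a] @ v for a in range(n_actions)])
        new_policy = q.argmax(axis=0)
        if np.array_equal(new_policy, policy):
            return policy, v
        policy = new_policy
```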
2311.10969
Genoveva Vargas Solar
Genoveva Vargas-Solar, Santiago Negrete-Yankelevich, Javier A. Espinosa-Oviedo, Khalid Belhajjame, Jos\'e-Luis Zechinelli-Martini
MATILDA: Inclusive Data Science Pipelines Design through Computational Creativity
null
null
null
null
cs.DB
http://creativecommons.org/licenses/by/4.0/
We argue for the need for a new generation of data science solutions that can democratize recent advances in data engineering and artificial intelligence for non-technical users from various disciplines, enabling them to unlock the full potential of these solutions. To do so, we adopt an approach whereby computational creativity and conversational computing are combined to guide non-specialists intuitively to explore and extract knowledge from data collections. The paper introduces MATILDA, a creativity-based data science design platform, showing how it can support the design process of data science pipelines guided by human and computational creativity.
[ { "created": "Sat, 18 Nov 2023 04:37:07 GMT", "version": "v1" } ]
2023-11-21
[ [ "Vargas-Solar", "Genoveva", "" ], [ "Negrete-Yankelevich", "Santiago", "" ], [ "Espinosa-Oviedo", "Javier A.", "" ], [ "Belhajjame", "Khalid", "" ], [ "Zechinelli-Martini", "José-Luis", "" ] ]
We argue for the need for a new generation of data science solutions that can democratize recent advances in data engineering and artificial intelligence for non-technical users from various disciplines, enabling them to unlock the full potential of these solutions. To do so, we adopt an approach whereby computational creativity and conversational computing are combined to guide non-specialists intuitively to explore and extract knowledge from data collections. The paper introduces MATILDA, a creativity-based data science design platform, showing how it can support the design process of data science pipelines guided by human and computational creativity.
1906.05651
Pushpendre Rastogi
Pushpendre Rastogi
Representation Learning for Words and Entities
PhD thesis, Machine Learning, Natural Language Processing, Representation Learning, Knowledge Graphs, Entities, Word Embeddings, Entity Embeddings
null
null
null
cs.CL cs.AI cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This thesis presents new methods for unsupervised learning of distributed representations of words and entities from text and knowledge bases. The first algorithm presented in the thesis is a multi-view algorithm for learning representations of words called Multiview Latent Semantic Analysis (MVLSA). By incorporating up to 46 different types of co-occurrence statistics for the same vocabulary of English words, I show that MVLSA outperforms other state-of-the-art word embedding models. Next, I focus on learning entity representations for search and recommendation and present the second method of this thesis, Neural Variational Set Expansion (NVSE). NVSE is also an unsupervised learning method, but it is based on the Variational Autoencoder framework. Evaluations with human annotators show that NVSE can facilitate better search and recommendation of information gathered from noisy, automatic annotation of unstructured natural language corpora. Finally, I move from unstructured data and focus on structured knowledge graphs. I present novel approaches for learning embeddings of vertices and edges in a knowledge graph that obey logical constraints.
[ { "created": "Wed, 12 Jun 2019 17:29:22 GMT", "version": "v1" } ]
2019-06-14
[ [ "Rastogi", "Pushpendre", "" ] ]
This thesis presents new methods for unsupervised learning of distributed representations of words and entities from text and knowledge bases. The first algorithm presented in the thesis is a multi-view algorithm for learning representations of words called Multiview Latent Semantic Analysis (MVLSA). By incorporating up to 46 different types of co-occurrence statistics for the same vocabulary of English words, I show that MVLSA outperforms other state-of-the-art word embedding models. Next, I focus on learning entity representations for search and recommendation and present the second method of this thesis, Neural Variational Set Expansion (NVSE). NVSE is also an unsupervised learning method, but it is based on the Variational Autoencoder framework. Evaluations with human annotators show that NVSE can facilitate better search and recommendation of information gathered from noisy, automatic annotation of unstructured natural language corpora. Finally, I move from unstructured data and focus on structured knowledge graphs. I present novel approaches for learning embeddings of vertices and edges in a knowledge graph that obey logical constraints.
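As a point of reference for MVLSA, single-view LSA factors one co-occurrence matrix with a truncated SVD; the multi-view extension over 46 views described above is not reproduced here. A minimal single-view sketch:

```python
import numpy as np

def lsa_embeddings(cooc, dim=50):
    """Classic single-view LSA: truncated SVD of a word-context
    co-occurrence (e.g. PPMI) matrix. Returns one dim-dimensional
    embedding row per word."""
    u, s, _ = np.linalg.svd(cooc, full_matrices=False)
    return u[:, :dim] * s[:dim]
```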
2207.03807
Anurag Arnab
Anurag Arnab, Xuehan Xiong, Alexey Gritsenko, Rob Romijnders, Josip Djolonga, Mostafa Dehghani, Chen Sun, Mario Lu\v{c}i\'c, Cordelia Schmid
Beyond Transfer Learning: Co-finetuning for Action Localisation
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Transfer learning is the predominant paradigm for training deep networks on small target datasets. Models are typically pretrained on large ``upstream'' datasets for classification, as such labels are easy to collect, and then finetuned on ``downstream'' tasks such as action localisation, which are smaller due to their finer-grained annotations. In this paper, we question this approach, and propose co-finetuning -- simultaneously training a single model on multiple ``upstream'' and ``downstream'' tasks. We demonstrate that co-finetuning outperforms traditional transfer learning when using the same total amount of data, and also show how we can easily extend our approach to multiple ``upstream'' datasets to further improve performance. In particular, co-finetuning significantly improves the performance on rare classes in our downstream task, as it has a regularising effect, and enables the network to learn feature representations that transfer between different datasets. Finally, we observe that by co-finetuning with public video classification datasets, we are able to achieve state-of-the-art results for spatio-temporal action localisation on the challenging AVA and AVA-Kinetics datasets, outperforming recent works which develop intricate models.
[ { "created": "Fri, 8 Jul 2022 10:25:47 GMT", "version": "v1" } ]
2022-07-11
[ [ "Arnab", "Anurag", "" ], [ "Xiong", "Xuehan", "" ], [ "Gritsenko", "Alexey", "" ], [ "Romijnders", "Rob", "" ], [ "Djolonga", "Josip", "" ], [ "Dehghani", "Mostafa", "" ], [ "Sun", "Chen", "" ], [ "Lučić", "Mario", "" ], [ "Schmid", "Cordelia", "" ] ]
Transfer learning is the predominant paradigm for training deep networks on small target datasets. Models are typically pretrained on large ``upstream'' datasets for classification, as such labels are easy to collect, and then finetuned on ``downstream'' tasks such as action localisation, which are smaller due to their finer-grained annotations. In this paper, we question this approach, and propose co-finetuning -- simultaneously training a single model on multiple ``upstream'' and ``downstream'' tasks. We demonstrate that co-finetuning outperforms traditional transfer learning when using the same total amount of data, and also show how we can easily extend our approach to multiple ``upstream'' datasets to further improve performance. In particular, co-finetuning significantly improves the performance on rare classes in our downstream task, as it has a regularising effect, and enables the network to learn feature representations that transfer between different datasets. Finally, we observe that by co-finetuning with public video classification datasets, we are able to achieve state-of-the-art results for spatio-temporal action localisation on the challenging AVA and AVA-Kinetics datasets, outperforming recent works which develop intricate models.
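A hedged sketch of what a co-finetuning step could look like: one shared backbone, one head per task, and losses from several datasets summed in the same optimisation step. All names, shapes, and the SGD choice are illustrative assumptions, not the paper's actual configuration:

```python
import torch
from torch import nn

backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 256), nn.ReLU())
heads = nn.ModuleDict({"upstream_cls": nn.Linear(256, 100),
                       "downstream_loc": nn.Linear(256, 10)})
params = list(backbone.parameters()) + list(heads.parameters())
opt = torch.optim.SGD(params, lr=1e-3)

def co_finetune_step(batches):
    """batches: dict task_name -> (inputs, labels). Losses are summed so
    every task updates the shared backbone in the same step."""
    opt.zero_grad()
    loss = sum(nn.functional.cross_entropy(heads[t](backbone(x)), y)
               for t, (x, y) in batches.items())
    loss.backward()
    opt.step()
    return loss.item()
```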
1903.12303
Maria Cruz Varona
Maria Cruz Varona, Nico Schneucker, Boris Lohmann
Nonlinear Moment Matching for the Simulation-Free Reduction of Structural Systems
7 pages, 3 figures; short version arXiv:1901.10750 submitted to NOLCOS 2019; https://zenodo.org/record/2611120
null
null
null
cs.CE cs.NA cs.SY math.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper transfers the concept of moment matching to nonlinear structural systems and further provides a simulation-free reduction scheme for such nonlinear second-order models. After first presenting the steady-state interpretation of linear moment matching, we then extend this reduction concept to the nonlinear second-order case based on Astolfi [2010]. Then, similar simplifications as in Cruz Varona et al. [2019] are proposed to achieve a simulation-free nonlinear moment matching algorithm. A discussion on the simplifications and their limitations is presented, as well as a numerical example which illustrates the efficiency of the algorithm.
[ { "created": "Thu, 28 Mar 2019 16:43:49 GMT", "version": "v1" } ]
2019-04-01
[ [ "Varona", "Maria Cruz", "" ], [ "Schneucker", "Nico", "" ], [ "Lohmann", "Boris", "" ] ]
This paper transfers the concept of moment matching to nonlinear structural systems and further provides a simulation-free reduction scheme for such nonlinear second-order models. After first presenting the steady-state interpretation of linear moment matching, we then extend this reduction concept to the nonlinear second-order case based on Astolfi [2010]. Then, similar simplifications as in Cruz Varona et al. [2019] are proposed to achieve a simulation-free nonlinear moment matching algorithm. A discussion on the simplifications and their limitations is presented, as well as a numerical example which illustrates the efficiency of the algorithm.
2311.16834
Qiqi Su
Qiqi Su, Christos Kloukinas, Artur d'Avila Garcez
FocusLearn: Fully-Interpretable, High-Performance Modular Neural Networks for Time Series
null
null
null
null
cs.LG cs.AI
http://creativecommons.org/licenses/by/4.0/
Multivariate time series have many applications, from healthcare and meteorology to life science. Although deep learning models have shown excellent predictive performance for time series, they have been criticised for being "black-boxes" or non-interpretable. This paper proposes a novel modular neural network model for multivariate time series prediction that is interpretable by construction. A recurrent neural network learns the temporal dependencies in the data while an attention-based feature selection component selects the most relevant features and suppresses redundant features used in the learning of the temporal dependencies. A modular deep network is trained from the selected features independently to show the users how features influence outcomes, making the model interpretable. Experimental results show that this approach can outperform state-of-the-art interpretable Neural Additive Models (NAM) and variations thereof in both regression and classification of time series tasks, achieving a predictive performance that is comparable to the top non-interpretable methods for time series, LSTM and XGBoost.
[ { "created": "Tue, 28 Nov 2023 14:51:06 GMT", "version": "v1" }, { "created": "Wed, 29 Nov 2023 13:23:42 GMT", "version": "v2" }, { "created": "Mon, 18 Mar 2024 17:39:11 GMT", "version": "v3" }, { "created": "Fri, 3 May 2024 16:44:31 GMT", "version": "v4" } ]
2024-05-06
[ [ "Su", "Qiqi", "" ], [ "Kloukinas", "Christos", "" ], [ "Garcez", "Artur d'Avila", "" ] ]
Multivariate time series have many applications, from healthcare and meteorology to life science. Although deep learning models have shown excellent predictive performance for time series, they have been criticised for being "black-boxes" or non-interpretable. This paper proposes a novel modular neural network model for multivariate time series prediction that is interpretable by construction. A recurrent neural network learns the temporal dependencies in the data while an attention-based feature selection component selects the most relevant features and suppresses redundant features used in the learning of the temporal dependencies. A modular deep network is trained from the selected features independently to show the users how features influence outcomes, making the model interpretable. Experimental results show that this approach can outperform state-of-the-art interpretable Neural Additive Models (NAM) and variations thereof in both regression and classification of time series tasks, achieving a predictive performance that is comparable to the top non-interpretable methods for time series, LSTM and XGBoost.
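A minimal sketch of the attention-based feature selection idea described above: learned per-feature weights gate the inputs before the recurrent part. The architecture details are assumptions for illustration only:

```python
import torch
from torch import nn

class AttentionFeatureSelect(nn.Module):
    """Toy attention-based feature selection: per-feature scores are
    learned, softmax-normalised, and used to gate the inputs, so redundant
    features are suppressed before the temporal model sees them."""
    def __init__(self, n_features):
        super().__init__()
        self.scores = nn.Linear(n_features, n_features)

    def forward(self, x):                     # x: (batch, time, n_features)
        attn = torch.softmax(self.scores(x.mean(dim=1)), dim=-1)
        return x * attn.unsqueeze(1), attn    # gated inputs + weights
```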
2311.12823
Niful Islam
Niful Islam, Md. Mehedi Hasan Jony, Emam Hasan, Sunny Sutradhar, Atikur Rahman, Md. Motaharul Islam
EWasteNet: A Two-Stream Data Efficient Image Transformer Approach for E-Waste Classification
6 pages
2023 IEEE 8th International Conference On Software Engineering and Computer Systems (ICSECS), Penang, Malaysia, 2023, pp. 435-440
10.1109/ICSECS58457.2023.10256323
null
cs.CV cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Improper disposal of e-waste poses global environmental and health risks, raising serious concerns. The accurate classification of e-waste images is critical for efficient management and recycling. In this paper, we have presented a comprehensive dataset comprising eight different classes of images of electronic devices, named the E-Waste Vision Dataset. We have also presented EWasteNet, a novel two-stream approach for precise e-waste image classification based on a data-efficient image transformer (DeiT). The first stream of EWasteNet passes through a Sobel operator that detects the edges while the second stream is directed through an Atrous Spatial Pyramid Pooling and attention block where multi-scale contextual information is captured. We train both of the streams simultaneously and their features are merged at the decision level. The DeiT is used as the backbone of both streams. Extensive analysis of the e-waste dataset indicates the usefulness of our method, providing 96% accuracy in e-waste classification. The proposed approach demonstrates significant usefulness in addressing the global concern of e-waste management. It facilitates efficient waste management and recycling by accurately classifying e-waste images, reducing the health and safety hazards associated with improper disposal.
[ { "created": "Thu, 28 Sep 2023 13:12:45 GMT", "version": "v1" } ]
2023-11-23
[ [ "Islam", "Niful", "" ], [ "Jony", "Md. Mehedi Hasan", "" ], [ "Hasan", "Emam", "" ], [ "Sutradhar", "Sunny", "" ], [ "Rahman", "Atikur", "" ], [ "Islam", "Md. Motaharul", "" ] ]
Improper disposal of e-waste poses global environmental and health risks, raising serious concerns. The accurate classification of e-waste images is critical for efficient management and recycling. In this paper, we have presented a comprehensive dataset comprising eight different classes of images of electronic devices, named the E-Waste Vision Dataset. We have also presented EWasteNet, a novel two-stream approach for precise e-waste image classification based on a data-efficient image transformer (DeiT). The first stream of EWasteNet passes through a Sobel operator that detects the edges while the second stream is directed through an Atrous Spatial Pyramid Pooling and attention block where multi-scale contextual information is captured. We train both of the streams simultaneously and their features are merged at the decision level. The DeiT is used as the backbone of both streams. Extensive analysis of the e-waste dataset indicates the usefulness of our method, providing 96% accuracy in e-waste classification. The proposed approach demonstrates significant usefulness in addressing the global concern of e-waste management. It facilitates efficient waste management and recycling by accurately classifying e-waste images, reducing the health and safety hazards associated with improper disposal.
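The first stream's Sobel step is standard image processing and easy to reproduce. A minimal sketch; the DeiT backbones and the decision-level fusion are only indicated in comments, as their details are not given in the abstract:

```python
import numpy as np
from scipy import ndimage

def sobel_edges(img):
    """Edge map for the first EWasteNet stream (img: 2D grayscale array);
    the second stream would consume the raw image."""
    gx = ndimage.sobel(img, axis=0, mode="reflect")
    gy = ndimage.sobel(img, axis=1, mode="reflect")
    return np.hypot(gx, gy)  # gradient magnitude

# Decision-level fusion as described in the abstract (illustrative only):
# run one DeiT on sobel_edges(img) and one on img, then combine the logits,
# e.g. fused_logits = 0.5 * (deit_edge(edges) + deit_raw(img))
```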
1904.05059
Chao Zhang
Chao Zhang, Shuaicheng Liu, Xun Xu, Ce Zhu
C3AE: Exploring the Limits of Compact Model for Age Estimation
Accepted by CVPR 2019
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Age estimation is a classic learning problem in computer vision. Many larger and deeper CNNs have been proposed with promising performance, such as AlexNet, VggNet, GoogLeNet and ResNet. However, these models are not practical for embedded/mobile devices. Recently, MobileNets and ShuffleNets have been proposed to reduce the number of parameters, yielding lightweight models. However, their representational power has been weakened by the adoption of depth-wise separable convolution. In this work, we investigate the limits of compact models for small-scale images and propose an extremely Compact yet efficient Cascade Context-based Age Estimation model (C3AE). This model possesses only 1/9 and 1/2000 of the parameters of MobileNets/ShuffleNets and VggNet, respectively, while achieving competitive performance. In particular, we re-define the age estimation problem via a two-point representation, which is implemented by a cascade model. Moreover, to fully utilize the facial context information, a multi-branch CNN is proposed to aggregate multi-scale context. Experiments are carried out on three age estimation datasets. State-of-the-art performance among compact models is achieved by a relatively large margin.
[ { "created": "Wed, 10 Apr 2019 08:33:14 GMT", "version": "v1" }, { "created": "Thu, 11 Apr 2019 15:24:36 GMT", "version": "v2" } ]
2019-04-12
[ [ "Zhang", "Chao", "" ], [ "Liu", "Shuaicheng", "" ], [ "Xu", "Xun", "" ], [ "Zhu", "Ce", "" ] ]
Age estimation is a classic learning problem in computer vision. Many larger and deeper CNNs have been proposed with promising performance, such as AlexNet, VggNet, GoogLeNet and ResNet. However, these models are not practical for embedded/mobile devices. Recently, MobileNets and ShuffleNets have been proposed to reduce the number of parameters, yielding lightweight models. However, their representational power has been weakened by the adoption of depth-wise separable convolution. In this work, we investigate the limits of compact models for small-scale images and propose an extremely Compact yet efficient Cascade Context-based Age Estimation model (C3AE). This model possesses only 1/9 and 1/2000 of the parameters of MobileNets/ShuffleNets and VggNet, respectively, while achieving competitive performance. In particular, we re-define the age estimation problem via a two-point representation, which is implemented by a cascade model. Moreover, to fully utilize the facial context information, a multi-branch CNN is proposed to aggregate multi-scale context. Experiments are carried out on three age estimation datasets. State-of-the-art performance among compact models is achieved by a relatively large margin.
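A hedged illustration of a two-point representation for age: each age is encoded as a convex combination of the two adjacent anchor ages. The 10-year anchor grid is an assumption; the paper defines its own binning:

```python
import numpy as np

def two_point_encoding(age, anchors=np.arange(0, 101, 10)):
    """Encode an age as weights on its two adjacent anchors, e.g. 23 ->
    0.7 on the 20-anchor and 0.3 on the 30-anchor (0.7*20 + 0.3*30 = 23).
    Anchor spacing is a hypothetical choice for illustration."""
    i = np.searchsorted(anchors, age, side="right") - 1
    i = min(i, len(anchors) - 2)
    w_hi = (age - anchors[i]) / (anchors[i + 1] - anchors[i])
    dist = np.zeros(len(anchors))
    dist[i], dist[i + 1] = 1.0 - w_hi, w_hi
    return dist  # decoding: dist @ anchors recovers the age

print(two_point_encoding(23) @ np.arange(0, 101, 10))  # -> 23.0
```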
2206.04520
Bao Bach
Trung Dinh Pham, Bao Gia Bach, Lam Trinh Luu, Minh Dinh Nguyen, Hai Duc Pham, Khoa Bui Anh, Xuan Quang Nguyen, Cuong Pham Quoc
An FPGA-based Solution for Convolution Operation Acceleration
11 pages, 6 figures, accepted to The First International Conference on Intelligence of Things (ICIT 2022)
Lecture Notes on Data Engineering and Communications Technologies, vol 148. Springer, 2022
10.1007/978-3-031-15063-0_26
null
cs.AR cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
Hardware-based acceleration is a widespread effort to facilitate computationally intensive mathematical operations. This paper proposes an FPGA-based architecture to accelerate the convolution operation - a complex and expensive computing step that appears in many Convolutional Neural Network models. We target the design to the standard convolution operation, intending to launch the product as an edge-AI solution. The project's purpose is to produce an FPGA IP core that can process one convolutional layer at a time. System developers can deploy the IP core with various FPGA families by using Verilog HDL as the primary design language for the architecture. The experimental results show that our single computing core synthesized on a simple edge computing FPGA board can offer 0.224 GOPS. When the board is fully utilized, 4.48 GOPS can be achieved.
[ { "created": "Thu, 9 Jun 2022 14:12:30 GMT", "version": "v1" } ]
2023-02-28
[ [ "Pham", "Trung Dinh", "" ], [ "Bach", "Bao Gia", "" ], [ "Luu", "Lam Trinh", "" ], [ "Nguyen", "Minh Dinh", "" ], [ "Pham", "Hai Duc", "" ], [ "Anh", "Khoa Bui", "" ], [ "Nguyen", "Xuan Quang", "" ], [ "Quoc", "Cuong Pham", "" ] ]
Hardware-based acceleration is a widespread effort to facilitate computationally intensive mathematical operations. This paper proposes an FPGA-based architecture to accelerate the convolution operation - a complex and expensive computing step that appears in many Convolutional Neural Network models. We target the design to the standard convolution operation, intending to launch the product as an edge-AI solution. The project's purpose is to produce an FPGA IP core that can process one convolutional layer at a time. System developers can deploy the IP core with various FPGA families by using Verilog HDL as the primary design language for the architecture. The experimental results show that our single computing core synthesized on a simple edge computing FPGA board can offer 0.224 GOPS. When the board is fully utilized, 4.48 GOPS can be achieved.
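The throughput figures above invite a quick back-of-envelope check; note the 20-core count below is inferred from the two numbers, not stated in the abstract:

```python
# Consistency check of the throughput figures quoted in the abstract.
per_core_gops = 0.224
full_board_gops = 4.48
print(full_board_gops / per_core_gops)     # -> 20.0 cores (inferred)

def conv_ops(h_out, w_out, c_in, c_out, k):
    """Operations in one standard convolution: 2 ops (multiply + add) per
    MAC, k*k*c_in MACs per output element, h_out*w_out*c_out outputs."""
    return 2 * h_out * w_out * c_out * c_in * k * k

ops = conv_ops(56, 56, 64, 64, 3)          # a hypothetical ResNet-style layer
print(ops / (full_board_gops * 1e9), "s")  # runtime at 4.48 GOPS
```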
2402.10779
Mingchen Li
Mingchen Li, Chen Ling, Rui Zhang, Liang Zhao
A Condensed Transition Graph Framework for Zero-shot Link Prediction with Large Language Models
null
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Zero-shot link prediction (ZSLP) on knowledge graphs aims at automatically identifying relations between given entities. Existing methods primarily employ auxiliary information to predict the tail entity given the head entity and its relation, yet face challenges due to the occasional unavailability of such detailed information and the inherent simplicity of predicting tail entities based on semantic similarities. Even though Large Language Models (LLMs) offer a promising solution to predict unobserved relations between the head and tail entity in a zero-shot manner, their performance is still restricted due to the inability to leverage all the (exponentially many) paths' information between two entities, which are critical in collectively indicating their relation types. To address this, in this work, we introduce a Condensed Transition Graph Framework for Zero-Shot Link Prediction (CTLP), which encodes all the paths' information in linear time complexity to predict unseen relations between entities, attaining both efficiency and information preservation. Specifically, we design a condensed transition graph encoder with theoretical guarantees on its coverage, expressiveness, and efficiency. It is learned by a transition graph contrastive learning strategy. Subsequently, we design a soft instruction tuning to learn and map the all-path embedding to the input of LLMs. Experimental results show that our proposed CTLP method achieves state-of-the-art performance on three standard ZSLP datasets.
[ { "created": "Fri, 16 Feb 2024 16:02:33 GMT", "version": "v1" } ]
2024-02-19
[ [ "Li", "Mingchen", "" ], [ "Ling", "Chen", "" ], [ "Zhang", "Rui", "" ], [ "Zhao", "Liang", "" ] ]
Zero-shot link prediction (ZSLP) on knowledge graphs aims at automatically identifying relations between given entities. Existing methods primarily employ auxiliary information to predict the tail entity given the head entity and its relation, yet face challenges due to the occasional unavailability of such detailed information and the inherent simplicity of predicting tail entities based on semantic similarities. Even though Large Language Models (LLMs) offer a promising solution to predict unobserved relations between the head and tail entity in a zero-shot manner, their performance is still restricted due to the inability to leverage all the (exponentially many) paths' information between two entities, which are critical in collectively indicating their relation types. To address this, in this work, we introduce a Condensed Transition Graph Framework for Zero-Shot Link Prediction (CTLP), which encodes all the paths' information in linear time complexity to predict unseen relations between entities, attaining both efficiency and information preservation. Specifically, we design a condensed transition graph encoder with theoretical guarantees on its coverage, expressiveness, and efficiency. It is learned by a transition graph contrastive learning strategy. Subsequently, we design a soft instruction tuning to learn and map the all-path embedding to the input of LLMs. Experimental results show that our proposed CTLP method achieves state-of-the-art performance on three standard ZSLP datasets.
1601.02225
Hamid Mansouri
Hamid Mansouri (Machine Vision Lab., Computer Engineering Department, Ferdowsi University of Mashhad, Mashhad, Iran) and Hamid-Reza Pourreza (Machine Vision Lab., Computer Engineering Department, Ferdowsi University of Mashhad, Mashhad, Iran)
Parallel Stroked Multi Line: a model-based method for compressing large fingerprint databases
26 pages, 10 figures, submitted to Computer Vision and Image Understanding
null
null
null
cs.CV cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With the increasing use of fingerprints as important biometric data, the need to compress large fingerprint databases has become essential. The most recommended compression algorithm, even by standards, is JPEG2K. But at high compression rates, this algorithm is ineffective. In this paper, a model is proposed which is based on parallel lines with the same orientation, arbitrary widths and the same gray-level value, located on a rectangle with a constant gray-level value as background. We refer to this algorithm as Parallel Stroked Multi Line (PSML). By using the Adaptive Geometrical Wavelet and employing PSML, a compression algorithm is developed. This compression algorithm can preserve fingerprint structure and minutiae. The exact algorithm for computing the PSML model takes exponential time. However, we have proposed an alternative approximation algorithm, which reduces the time complexity to $O(n^3)$. The proposed PSML algorithm has a significant advantage over the Wedgelets Transform in PSNR value and visual quality of compressed images. Despite lower PSNR values than the JPEG2K algorithm in the common range of compression rates, the proposed method has a nearly equal or greater advantage over JPEG2K at all compression rates when used by Automatic Fingerprint Identification Systems (AFIS). At high compression rates, according to PSNR values, mean EER rate and visual quality, images encoded with JPEG2K cannot be distinguished from each other after compression. But images encoded by the PSML algorithm retain sufficient information to maintain fingerprint identification performance similar to that obtained with raw images without compression. On the U.are.U 400 database, the mean EER rate for uncompressed images is 4.54%, while at a 267:1 compression ratio, this value becomes 49.41% and 6.22% for JPEG2K and PSML, respectively. This result shows a significant improvement over the standard JPEG2K algorithm.
[ { "created": "Sun, 10 Jan 2016 15:01:10 GMT", "version": "v1" } ]
2016-01-12
[ [ "Mansouri", "Hamid", "", "Machine Vision Lab., Computer Engineering Department,\n Ferdowsi University of Mashhad, Mashhad, Iran" ], [ "Pourreza", "Hamid-Reza", "", "Machine Vision Lab., Computer Engineering Department, Ferdowsi University of\n Mashhad, Mashhad, Iran" ] ]
With the increasing use of fingerprints as important biometric data, the need to compress large fingerprint databases has become essential. The most recommended compression algorithm, even by standards, is JPEG2K. But at high compression rates, this algorithm is ineffective. In this paper, a model is proposed which is based on parallel lines with the same orientation, arbitrary widths and the same gray-level value, located on a rectangle with a constant gray-level value as background. We refer to this algorithm as Parallel Stroked Multi Line (PSML). By using the Adaptive Geometrical Wavelet and employing PSML, a compression algorithm is developed. This compression algorithm can preserve fingerprint structure and minutiae. The exact algorithm for computing the PSML model takes exponential time. However, we have proposed an alternative approximation algorithm, which reduces the time complexity to $O(n^3)$. The proposed PSML algorithm has a significant advantage over the Wedgelets Transform in PSNR value and visual quality of compressed images. Despite lower PSNR values than the JPEG2K algorithm in the common range of compression rates, the proposed method has a nearly equal or greater advantage over JPEG2K at all compression rates when used by Automatic Fingerprint Identification Systems (AFIS). At high compression rates, according to PSNR values, mean EER rate and visual quality, images encoded with JPEG2K cannot be distinguished from each other after compression. But images encoded by the PSML algorithm retain sufficient information to maintain fingerprint identification performance similar to that obtained with raw images without compression. On the U.are.U 400 database, the mean EER rate for uncompressed images is 4.54%, while at a 267:1 compression ratio, this value becomes 49.41% and 6.22% for JPEG2K and PSML, respectively. This result shows a significant improvement over the standard JPEG2K algorithm.
2009.09312
Riccardo Marin
Riccardo Marin, Simone Melzi, Emanuele Rodol\`a, Umberto Castellani
High-Resolution Augmentation for Automatic Template-Based Matching of Human Models
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a new approach for 3D shape matching of deformable human shapes. Our approach is based on the joint adoption of three different tools: an intrinsic spectral matching pipeline, a morphable model, and an extrinsic detail refinement. By operating in conjunction, these tools allow us to greatly improve the quality of the matching while at the same time resolving the key issues exhibited by each tool individually. In this paper we present an innovative High-Resolution Augmentation (HRA) strategy that enables highly accurate correspondence even in the presence of significant mesh resolution mismatch between the input shapes. This augmentation provides an effective workaround for the resolution limitations imposed by the adopted morphable model. The HRA in its global and localized versions represents a novel refinement strategy for surface subdivision methods. We demonstrate the accuracy of the proposed pipeline on multiple challenging benchmarks, and showcase its effectiveness in surface registration and texture transfer.
[ { "created": "Sat, 19 Sep 2020 22:41:24 GMT", "version": "v1" } ]
2020-09-22
[ [ "Marin", "Riccardo", "" ], [ "Melzi", "Simone", "" ], [ "Rodolà", "Emanuele", "" ], [ "Castellani", "Umberto", "" ] ]
We propose a new approach for 3D shape matching of deformable human shapes. Our approach is based on the joint adoption of three different tools: an intrinsic spectral matching pipeline, a morphable model, and an extrinsic detail refinement. By operating in conjunction, these tools allow us to greatly improve the quality of the matching while at the same time resolving the key issues exhibited by each tool individually. In this paper we present an innovative High-Resolution Augmentation (HRA) strategy that enables highly accurate correspondence even in the presence of significant mesh resolution mismatch between the input shapes. This augmentation provides an effective workaround for the resolution limitations imposed by the adopted morphable model. The HRA in its global and localized versions represents a novel refinement strategy for surface subdivision methods. We demonstrate the accuracy of the proposed pipeline on multiple challenging benchmarks, and showcase its effectiveness in surface registration and texture transfer.
2006.14279
Erion \c{C}ano
Erion \c{C}ano, Riccardo Coppola, Eleonora Gargiulo, Marco Marengo, Maurizio Morisio
Mood-based On-Car Music Recommendations
11 pages, 5 figures. Published in proceedings of INISCOM 2016, the 2nd International Conference on Industrial Networks and Intelligent Systems, Leicester, UK
null
10.1007/978-3-319-52569-3_14
null
cs.HC cs.IR cs.SY eess.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Driving and music listening are two inseparable everyday activities for millions of people around the world. Considering the high correlation between music, mood and driving comfort and safety, it makes sense to use appropriate and intelligent music recommendations based on the moods of drivers and songs in the context of car driving. The objective of this paper is to present the project of a contextual mood-based music recommender system capable of regulating the driver's mood and trying to have a positive influence on her driving behaviour. Here we present the proof of concept of the system and describe the techniques and technologies that are part of it. Possible future improvements to each of the building blocks are also presented.
[ { "created": "Thu, 25 Jun 2020 09:50:26 GMT", "version": "v1" } ]
2020-06-26
[ [ "Çano", "Erion", "" ], [ "Coppola", "Riccardo", "" ], [ "Gargiulo", "Eleonora", "" ], [ "Marengo", "Marco", "" ], [ "Morisio", "Maurizio", "" ] ]
Driving and music listening are two inseparable everyday activities for millions of people around the world. Considering the high correlation between music, mood and driving comfort and safety, it makes sense to use appropriate and intelligent music recommendations based on the moods of drivers and songs in the context of car driving. The objective of this paper is to present the project of a contextual mood-based music recommender system capable of regulating the driver's mood and trying to have a positive influence on her driving behaviour. Here we present the proof of concept of the system and describe the techniques and technologies that are part of it. Possible future improvements to each of the building blocks are also presented.
2209.08724
Ryota Iijima
Ryota Iijima, Miki Tanaka, Isao Echizen, and Hitoshi Kiya
On the Adversarial Transferability of ConvMixer Models
5 pages, 5 figures, 5 tables. arXiv admin note: substantial text overlap with arXiv:2209.02997
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep neural networks (DNNs) are well known to be vulnerable to adversarial examples (AEs). In addition, AEs have adversarial transferability, which means AEs generated for a source model can fool another black-box model (target model) with a non-trivial probability. In this paper, we investigate, for the first time, the property of adversarial transferability between models including ConvMixer, which is an isotropic network. To objectively verify the property of transferability, the robustness of models is evaluated by using a benchmark attack method called AutoAttack. In an image classification experiment, ConvMixer is confirmed to be vulnerable to transferred adversarial examples.
[ { "created": "Mon, 19 Sep 2022 02:51:01 GMT", "version": "v1" } ]
2022-09-20
[ [ "Iijima", "Ryota", "" ], [ "Tanaka", "Miki", "" ], [ "Echizen", "Isao", "" ], [ "Kiya", "Hitoshi", "" ] ]
Deep neural networks (DNNs) are well known to be vulnerable to adversarial examples (AEs). In addition, AEs have adversarial transferability, which means AEs generated for a source model can fool another black-box model (target model) with a non-trivial probability. In this paper, we investigate, for the first time, the property of adversarial transferability between models including ConvMixer, which is an isotropic network. To objectively verify the property of transferability, the robustness of models is evaluated by using a benchmark attack method called AutoAttack. In an image classification experiment, ConvMixer is confirmed to be vulnerable to transferred adversarial examples.
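AutoAttack is a published benchmark suite (pip package `autoattack`). A sketch of a transferability check in the spirit of the abstract: craft AEs on a source model, then measure a target model's accuracy on them. Here `source`, `target`, `x`, and `y` are assumed to be defined elsewhere:

```python
import torch
from autoattack import AutoAttack  # pip install autoattack

# Generate adversarial examples against the source model.
adversary = AutoAttack(source, norm="Linf", eps=8 / 255, version="standard")
x_adv = adversary.run_standard_evaluation(x, y, bs=128)

# Transferability: how often do the same AEs fool the target model?
with torch.no_grad():
    acc = (target(x_adv).argmax(1) == y).float().mean()
print(f"target accuracy under transferred AEs: {acc:.3f}")
```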
1904.05488
Sean Tao
Sean Tao
Deep Neural Network Ensembles
null
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Current deep neural networks suffer from two problems: first, they are hard to interpret, and second, they suffer from overfitting. There have been many attempts to define interpretability in neural networks, but they typically lack causality or generality. A myriad of regularization techniques have been developed to prevent overfitting, and this has driven deep learning to become the hot topic it is today; however, while most regularization techniques are justified empirically and even intuitively, there is not much underlying theory. This paper argues that to extract the features used in neural networks to make decisions, it is important to look at the paths between clusters existing in the hidden spaces of neural networks. These features are of particular interest because they reflect the true decision-making process of the neural network. This analysis is then furthered to present an ensemble algorithm for arbitrary neural networks which has guarantees for test accuracy. Finally, a discussion detailing the aforementioned guarantees is introduced and the implications for neural networks, including an intuitive explanation for all current regularization methods, are presented. The ensemble algorithm has generated state-of-the-art results for Wide-ResNets on CIFAR-10 (top 5 for all models) and has improved test accuracy for all models it has been applied to.
[ { "created": "Thu, 11 Apr 2019 00:52:47 GMT", "version": "v1" }, { "created": "Tue, 13 Aug 2019 20:48:02 GMT", "version": "v2" } ]
2019-08-15
[ [ "Tao", "Sean", "" ] ]
Current deep neural networks suffer from two problems: first, they are hard to interpret, and second, they suffer from overfitting. There have been many attempts to define interpretability in neural networks, but they typically lack causality or generality. A myriad of regularization techniques have been developed to prevent overfitting, and this has driven deep learning to become the hot topic it is today; however, while most regularization techniques are justified empirically and even intuitively, there is not much underlying theory. This paper argues that to extract the features used in neural networks to make decisions, it is important to look at the paths between clusters existing in the hidden spaces of neural networks. These features are of particular interest because they reflect the true decision-making process of the neural network. This analysis is then furthered to present an ensemble algorithm for arbitrary neural networks which has guarantees for test accuracy. Finally, a discussion detailing the aforementioned guarantees is introduced and the implications for neural networks, including an intuitive explanation for all current regularization methods, are presented. The ensemble algorithm has generated state-of-the-art results for Wide-ResNets on CIFAR-10 (top 5 for all models) and has improved test accuracy for all models it has been applied to.
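The abstract does not specify the ensemble algorithm itself, so the sketch below shows only the generic baseline it builds on, plain softmax averaging over independently trained networks, not the paper's cluster-path method:

```python
import torch

def ensemble_predict(models, x):
    """Generic ensembling baseline: average the softmax outputs of
    independently trained networks, then take the argmax. This is not
    the paper's algorithm, which is not described in the abstract."""
    with torch.no_grad():
        probs = torch.stack([m(x).softmax(dim=-1) for m in models])
    return probs.mean(dim=0).argmax(dim=-1)
```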
1702.05752
K. V. Krishna
Gayatri Panicker, K. V. Krishna and Purandar Bhaduri
Axiomatization of if-then-else over monoids of possibly non-halting programs and tests
null
null
null
null
cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In order to study the axiomatization of the if-then-else construct over possibly non-halting programs and tests, the notion of $C$-sets was introduced in the literature by considering the tests from an abstract $C$-algebra. This paper extends the notion of $C$-sets to $C$-monoids, which include the composition of programs as well as the composition of programs with tests. For the class of $C$-monoids where the $C$-algebras are adas, a canonical representation in terms of functional $C$-monoids is obtained.
[ { "created": "Sun, 19 Feb 2017 14:13:03 GMT", "version": "v1" } ]
2017-02-21
[ [ "Panicker", "Gayatri", "" ], [ "Krishna", "K. V.", "" ], [ "Bhaduri", "Purandar", "" ] ]
In order to study the axiomatization of the if-then-else construct over possibly non-halting programs and tests, the notion of $C$-sets was introduced in the literature by considering the tests from an abstract $C$-algebra. This paper extends the notion of $C$-sets to $C$-monoids, which include the composition of programs as well as the composition of programs with tests. For the class of $C$-monoids where the $C$-algebras are adas, a canonical representation in terms of functional $C$-monoids is obtained.
2212.13924
Sven Najem-Meyer
Najem-Meyer Sven, Romanello Matteo
Page Layout Analysis of Text-heavy Historical Documents: a Comparison of Textual and Visual Approaches
Same as https://ceur-ws.org/Vol-3290/long_paper8670.pdf
Proceedings of the Computational Humanities Research Conference 2022
null
null
cs.IR cs.AI cs.CL cs.CV
http://creativecommons.org/licenses/by-sa/4.0/
Page layout analysis is a fundamental step in document processing which enables a page to be segmented into regions of interest. With highly complex layouts and mixed scripts, scholarly commentaries are text-heavy documents which remain challenging for state-of-the-art models. Their layout varies considerably across editions and their most important regions are mainly defined by semantic rather than graphical characteristics such as position or appearance. This setting calls for a comparison between textual, visual and hybrid approaches. We therefore assess the performances of two transformers (LayoutLMv3 and RoBERTa) and an object-detection network (YOLOv5). While results show a clear advantage in favor of the latter, we also list several caveats to this finding. In addition to our experiments, we release a dataset of ca. 300 annotated pages sampled from 19th century commentaries.
[ { "created": "Mon, 12 Dec 2022 10:10:29 GMT", "version": "v1" } ]
2022-12-29
[ [ "Sven", "Najem-Meyer", "" ], [ "Matteo", "Romanello", "" ] ]
Page layout analysis is a fundamental step in document processing which enables a page to be segmented into regions of interest. With highly complex layouts and mixed scripts, scholarly commentaries are text-heavy documents which remain challenging for state-of-the-art models. Their layout varies considerably across editions and their most important regions are mainly defined by semantic rather than graphical characteristics such as position or appearance. This setting calls for a comparison between textual, visual and hybrid approaches. We therefore assess the performances of two transformers (LayoutLMv3 and RoBERTa) and an object-detection network (YOLOv5). While results show a clear advantage in favor of the latter, we also list several caveats to this finding. In addition to our experiments, we release a dataset of ca. 300 annotated pages sampled from 19th century commentaries.
2401.09281
Andr\'e Miguel Romeiro Faria Lopes
Andr\'e Lopes (1), Daniel Castro (1), Paolo Romano (1) ((1) INESC-ID & Instituto Superior T\'ecnico - Universidade de Lisboa)
PIM-STM: Software Transactional Memory for Processing-In-Memory Systems
To be published in 29th ACM International Conference on Architectural Support for Programming Languages and Operating Systems, Volume 2 (ASPLOS '24), April 27-May 1, 2024, La Jolla, CA, USA
null
10.1145/3620665.3640428
null
cs.DC
http://creativecommons.org/licenses/by/4.0/
Processing-In-Memory (PIM) is a novel approach that augments existing DRAM memory chips with lightweight logic. By allowing computations to be offloaded to the PIM system, this architecture makes it possible to circumvent the data-bottleneck problem that affects many modern workloads. This work tackles the problem of how to build efficient software implementations of the Transactional Memory (TM) abstraction by introducing PIM-STM, a library that provides a range of diverse TM implementations for UPMEM, the first commercial PIM system. Via an extensive study, we assess the efficiency of alternative choices in the design space of TM algorithms on this emerging architecture. We further quantify the impact of using different memory tiers of the UPMEM system (with different trade-offs between latency and capacity) to store the metadata used by different TM implementations. Finally, we assess the gains achievable in terms of performance and memory efficiency when using PIM-STM to accelerate TM applications originally conceived for conventional CPU-based systems.
[ { "created": "Wed, 17 Jan 2024 15:35:58 GMT", "version": "v1" } ]
2024-01-18
[ [ "Lopes", "André", "" ], [ "Castro", "Daniel", "" ], [ "Romano", "Paolo", "" ] ]
Processing-In-Memory (PIM) is a novel approach that augments existing DRAM memory chips with lightweight logic. By allowing computations to be offloaded to the PIM system, this architecture makes it possible to circumvent the data-bottleneck problem that affects many modern workloads. This work tackles the problem of how to build efficient software implementations of the Transactional Memory (TM) abstraction by introducing PIM-STM, a library that provides a range of diverse TM implementations for UPMEM, the first commercial PIM system. Via an extensive study, we assess the efficiency of alternative choices in the design space of TM algorithms on this emerging architecture. We further quantify the impact of using different memory tiers of the UPMEM system (with different trade-offs between latency and capacity) to store the metadata used by different TM implementations. Finally, we assess the gains achievable in terms of performance and memory efficiency when using PIM-STM to accelerate TM applications originally conceived for conventional CPU-based systems.
2306.01404
Federico Quin
Federico Quin, Danny Weyns, Omid Gheibi
Reducing Large Adaptation Spaces in Self-Adaptive Systems Using Machine Learning
null
null
10.1016/j.jss.2022.111341
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Modern software systems often have to cope with uncertain operating conditions, such as changing workloads or fluctuating interference in a wireless network. To ensure that these systems meet their goals, these uncertainties have to be mitigated. One approach to realize this is self-adaptation, which equips a system with a feedback loop. The feedback loop implements four core functions -- monitor, analyze, plan, and execute -- that share knowledge in the form of runtime models. For systems with a large number of adaptation options, i.e., large adaptation spaces, deciding which option to select for adaptation may be time-consuming or even infeasible within the available time window to make an adaptation decision. This is particularly the case when rigorous analysis techniques are used to select adaptation options, such as formal verification at runtime, which is widely adopted. One technique to deal with the analysis of a large number of adaptation options is reducing the adaptation space using machine learning. The state of the art has shown the effectiveness of this technique; yet, a systematic solution that is able to handle different types of goals is lacking. In this paper, we present ML2ASR+, short for Machine Learning to Adaptation Space Reduction Plus. Central to ML2ASR+ is a configurable machine learning pipeline that supports effective analysis of large adaptation spaces for threshold, optimization, and setpoint goals. We evaluate ML2ASR+ for two applications with different sizes of adaptation spaces: an Internet-of-Things application and a service-based system. The results demonstrate that ML2ASR+ can be applied to deal with different types of goals and is able to reduce the adaptation space, and hence the time to make adaptation decisions, by over 90%, with negligible effect on the realization of the adaptation goals.
[ { "created": "Fri, 2 Jun 2023 09:49:33 GMT", "version": "v1" } ]
2023-06-05
[ [ "Quin", "Federico", "" ], [ "Weyns", "Danny", "" ], [ "Gheibi", "Omid", "" ] ]
Modern software systems often have to cope with uncertain operating conditions, such as changing workloads or fluctuating interference in a wireless network. To ensure that these systems meet their goals, these uncertainties have to be mitigated. One approach to realize this is self-adaptation, which equips a system with a feedback loop. The feedback loop implements four core functions -- monitor, analyze, plan, and execute -- that share knowledge in the form of runtime models. For systems with a large number of adaptation options, i.e., large adaptation spaces, deciding which option to select for adaptation may be time-consuming or even infeasible within the available time window to make an adaptation decision. This is particularly the case when rigorous analysis techniques are used to select adaptation options, such as formal verification at runtime, which is widely adopted. One technique to deal with the analysis of a large number of adaptation options is reducing the adaptation space using machine learning. The state of the art has shown the effectiveness of this technique; yet, a systematic solution that is able to handle different types of goals is lacking. In this paper, we present ML2ASR+, short for Machine Learning to Adaptation Space Reduction Plus. Central to ML2ASR+ is a configurable machine learning pipeline that supports effective analysis of large adaptation spaces for threshold, optimization, and setpoint goals. We evaluate ML2ASR+ for two applications with different sizes of adaptation spaces: an Internet-of-Things application and a service-based system. The results demonstrate that ML2ASR+ can be applied to deal with different types of goals and is able to reduce the adaptation space, and hence the time to make adaptation decisions, by over 90%, with negligible effect on the realization of the adaptation goals.
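A rough sketch of the reduction idea: train a cheap classifier offline on options whose compliance was established by rigorous analysis, then at runtime verify only the options the classifier predicts to satisfy a threshold goal. The features, the goal, and the plain logistic regression below are hypothetical stand-ins for ML2ASR+'s configurable pipeline.

```python
# Sketch: learned pruning of a large adaptation space before
# (expensive) formal verification. Setup is hypothetical.
import numpy as np

rng = np.random.default_rng(0)

# Each adaptation option is a feature vector (e.g. chosen parameters
# plus current uncertainty readings); the label says whether rigorous
# analysis found it to satisfy a threshold goal.
X_train = rng.normal(size=(500, 4))
y_train = (X_train @ np.array([1.5, -2.0, 0.5, 1.0]) + 0.3 > 0).astype(float)

def train_logreg(X, y, lr=0.1, epochs=300):
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)      # gradient of logistic loss
    return w

w = train_logreg(X_train, y_train)

# At runtime: score all options, verify only the likely-compliant ones.
options = rng.normal(size=(2000, 4))
scores = 1 / (1 + np.exp(-options @ w))
candidates = options[scores > 0.5]
print(f"verify {len(candidates)} of {len(options)} options "
      f"({100 * (1 - len(candidates) / len(options)):.0f}% reduction)")
```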
2305.15489
Bader Abu Radi
Bader Abu Radi and Orna Kupferman
On Semantically-Deterministic Automata
29 pages, 4 figures
null
null
null
cs.FL
http://creativecommons.org/licenses/by/4.0/
A nondeterministic automaton is semantically deterministic (SD) if different nondeterministic choices in the automaton lead to equivalent states. Semantic determinism is interesting as it is a natural relaxation of determinism, and as some applications of deterministic automata in formal methods can actually use automata with some level of nondeterminism, tightly related to semantic determinism. In the context of finite words, semantic determinism coincides with determinism, in the sense that every pruning of an SD automaton to a deterministic one results in an equivalent automaton. We study SD automata on infinite words, focusing on B\"uchi, co-B\"uchi, and weak automata. We show that there, while semantic determinism does not increase the expressive power, the combinatorial and computational properties of SD automata are very different from those of deterministic automata. In particular, SD B\"uchi and co-B\"uchi automata are exponentially more succinct than deterministic ones (in fact, also exponentially more succinct than history-deterministic automata), their complementation involves an exponential blow-up, and decision procedures for them, like universality and minimization, are PSPACE-complete. For weak automata, we show that while an SD weak automaton need not be pruned to an equivalent deterministic one, it can be determinized to an equivalent deterministic weak automaton with the same state space, implying also efficient complementation and decision procedures for SD weak automata.
[ { "created": "Wed, 24 May 2023 18:21:31 GMT", "version": "v1" } ]
2023-05-26
[ [ "Radi", "Bader Abu", "" ], [ "Kupferman", "Orna", "" ] ]
A nondeterministic automaton is semantically deterministic (SD) if different nondeterministic choices in the automaton lead to equivalent states. Semantic determinism is interesting as it is a natural relaxation of determinism, and as some applications of deterministic automata in formal methods can actually use automata with some level of nondeterminism, tightly related to semantic determinism. In the context of finite words, semantic determinism coincides with determinism, in the sense that every pruning of an SD automaton to a deterministic one results in an equivalent automaton. We study SD automata on infinite words, focusing on B\"uchi, co-B\"uchi, and weak automata. We show that there, while semantic determinism does not increase the expressive power, the combinatorial and computational properties of SD automata are very different from those of deterministic automata. In particular, SD B\"uchi and co-B\"uchi automata are exponentially more succinct than deterministic ones (in fact, also exponentially more succinct than history-deterministic automata), their complementation involves an exponential blow-up, and decision procedures for them, like universality and minimization, are PSPACE-complete. For weak automata, we show that while an SD weak automaton need not be pruned to an equivalent deterministic one, it can be determinized to an equivalent deterministic weak automaton with the same state space, implying also efficient complementation and decision procedures for SD weak automata.
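The finite-word part of the definition can be made concrete: semantic determinism asks that, for every state and letter, all nondeterministic successors be language-equivalent. A small sketch, with a hypothetical NFA, tests this by BFS over pairs of determinized state sets; this is adequate for finite words only, and not the infinite-word machinery the paper studies.

```python
# Sketch: checking semantic determinism of an NFA over finite words.
# Two states are language-equivalent iff no word distinguishes them;
# we test this by BFS over pairs of determinized state sets.
from itertools import combinations

def step(states, letter, delta):
    return frozenset(q for s in states for q in delta.get((s, letter), ()))

def equivalent(p, q, delta, accepting, alphabet):
    seen = set()
    work = [(frozenset([p]), frozenset([q]))]
    while work:
        A, B = work.pop()
        if (A, B) in seen:
            continue
        seen.add((A, B))
        if bool(A & accepting) != bool(B & accepting):
            return False                    # a distinguishing word exists
        for a in alphabet:
            work.append((step(A, a, delta), step(B, a, delta)))
    return True

def semantically_deterministic(states, alphabet, delta, accepting):
    for q in states:
        for a in alphabet:
            succ = sorted(delta.get((q, a), ()))
            for s, t in combinations(succ, 2):
                if not equivalent(s, t, delta, accepting, alphabet):
                    return False
    return True

# Hypothetical NFA: from q0 on 'a' we may go to q1 or q2, and both
# accept exactly the language b* -- so the choice is semantically moot.
delta = {('q0', 'a'): {'q1', 'q2'},
         ('q1', 'b'): {'q1'}, ('q2', 'b'): {'q2'}}
print(semantically_deterministic(
    {'q0', 'q1', 'q2'}, 'ab', delta, accepting={'q1', 'q2'}))  # True
```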
2405.12958
Nikos Zarifis
Ilias Diakonikolas, Vasilis Kontonis, Christos Tzamos, Nikos Zarifis
Online Learning of Halfspaces with Massart Noise
null
null
null
null
cs.LG cs.DS math.ST stat.ML stat.TH
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the task of online learning in the presence of Massart noise. Instead of assuming that the online adversary chooses an arbitrary sequence of labels, we assume that the context $\mathbf{x}$ is selected adversarially but the label $y$ presented to the learner disagrees with the ground-truth label of $\mathbf{x}$ with unknown probability at most $\eta$. We study the fundamental class of $\gamma$-margin linear classifiers and present a computationally efficient algorithm that achieves mistake bound $\eta T + o(T)$. Our mistake bound is qualitatively tight for efficient algorithms: it is known that even in the offline setting achieving classification error better than $\eta$ requires super-polynomial time in the SQ model. We extend our online learning model to a $k$-arm contextual bandit setting where the rewards -- instead of satisfying commonly used realizability assumptions -- are consistent (in expectation) with some linear ranking function with weight vector $\mathbf{w}^\ast$. Given a list of contexts $\mathbf{x}_1,\ldots \mathbf{x}_k$, if $\mathbf{w}^*\cdot \mathbf{x}_i > \mathbf{w}^* \cdot \mathbf{x}_j$, the expected reward of action $i$ must be larger than that of $j$ by at least $\Delta$. We use our Massart online learner to design an efficient bandit algorithm that obtains expected reward at least $(1-1/k)~\Delta T - o(T)$ larger than that of choosing a random action at every round.
[ { "created": "Tue, 21 May 2024 17:31:10 GMT", "version": "v1" } ]
2024-05-22
[ [ "Diakonikolas", "Ilias", "" ], [ "Kontonis", "Vasilis", "" ], [ "Tzamos", "Christos", "" ], [ "Zarifis", "Nikos", "" ] ]
We study the task of online learning in the presence of Massart noise. Instead of assuming that the online adversary chooses an arbitrary sequence of labels, we assume that the context $\mathbf{x}$ is selected adversarially but the label $y$ presented to the learner disagrees with the ground-truth label of $\mathbf{x}$ with unknown probability at most $\eta$. We study the fundamental class of $\gamma$-margin linear classifiers and present a computationally efficient algorithm that achieves mistake bound $\eta T + o(T)$. Our mistake bound is qualitatively tight for efficient algorithms: it is known that even in the offline setting achieving classification error better than $\eta$ requires super-polynomial time in the SQ model. We extend our online learning model to a $k$-arm contextual bandit setting where the rewards -- instead of satisfying commonly used realizability assumptions -- are consistent (in expectation) with some linear ranking function with weight vector $\mathbf{w}^\ast$. Given a list of contexts $\mathbf{x}_1,\ldots \mathbf{x}_k$, if $\mathbf{w}^*\cdot \mathbf{x}_i > \mathbf{w}^* \cdot \mathbf{x}_j$, the expected reward of action $i$ must be larger than that of $j$ by at least $\Delta$. We use our Massart online learner to design an efficient bandit algorithm that obtains expected reward at least $(1-1/k)~\Delta T - o(T)$ larger than that of choosing a random action at every round.
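For intuition about the noise model only, the toy simulation below generates $\gamma$-margin contexts, flips each label with probability at most $\eta$, and runs the classic perceptron as a naive baseline. This is emphatically not the paper's algorithm; the perceptron's mistake-bound guarantees do not hold under Massart noise, which is precisely the gap the paper addresses.

```python
# Toy simulation of the Massart noise model on gamma-margin data,
# with the classic perceptron as a naive baseline (NOT the paper's
# algorithm). All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(1)
d, T, gamma, eta = 5, 2000, 0.2, 0.1

w_star = rng.normal(size=d)
w_star /= np.linalg.norm(w_star)             # ground-truth halfspace

w = np.zeros(d)                              # perceptron weights
mistakes = 0
for _ in range(T):
    while True:                              # context with margin >= gamma
        x = rng.normal(size=d)
        x /= np.linalg.norm(x)
        if abs(w_star @ x) >= gamma:
            break
    y = np.sign(w_star @ x)                  # ground-truth label
    if rng.random() < eta:                   # Massart corruption
        y = -y
    pred = np.sign(w @ x) or 1.0             # break ties toward +1
    if pred != y:
        mistakes += 1
        w += y * x                           # update on the noisy label
print(f"mistakes: {mistakes}/{T}, eta*T = {eta * T:.0f}")
```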
1603.08776
Nikolaus Hansen
Nikolaus Hansen (Inria), Tea Tusar (Inria), Olaf Mersmann, Anne Auger (Inria), Dimo Brockhoff (Inria)
COCO: The Experimental Procedure
ArXiv e-prints, arXiv:1603.08776
null
null
null
cs.AI cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a budget-free experimental setup and procedure for benchmarking numerical optimization algorithms in a black-box scenario. This procedure can be applied with the COCO benchmarking platform. We describe initialization of and input to the algorithm and touch upon the relevance of termination and restarts.
[ { "created": "Tue, 29 Mar 2016 14:10:14 GMT", "version": "v1" }, { "created": "Thu, 19 May 2016 11:58:22 GMT", "version": "v2" } ]
2016-05-20
[ [ "Hansen", "Nikolaus", "", "Inria" ], [ "Tusar", "Tea", "", "Inria" ], [ "Mersmann", "Olaf", "", "Inria" ], [ "Auger", "Anne", "", "Inria" ], [ "Brockhoff", "Dimo", "", "Inria" ] ]
We present a budget-free experimental setup and procedure for benchmarking numerical optimization algorithms in a black-box scenario. This procedure can be applied with the COCO benchmarking platform. We describe initialization of and input to the algorithm and touch upon the relevance of termination and restarts.
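The restart-based recording of runtimes can be sketched as follows, with a toy objective and random search standing in for a real benchmark problem and optimizer; the budget and target values are arbitrary assumptions.

```python
# Sketch of a budget-free benchmarking loop: restart the optimizer
# independently until a target value is hit, and record the total
# number of function evaluations. Problem and solver are toy stand-ins.
import random

def sphere(x):
    return sum(v * v for v in x)

def random_search(f, dim, target, max_evals):
    """One run: returns (hit_target, evals_used)."""
    best = float("inf")
    for e in range(1, max_evals + 1):
        x = [random.uniform(-5, 5) for _ in range(dim)]
        best = min(best, f(x))
        if best <= target:
            return True, e
    return False, max_evals

def benchmark_with_restarts(f, dim, target, budget):
    evals = 0
    while evals < budget:                 # independent restarts
        hit, used = random_search(f, dim, target, budget - evals)
        evals += used
        if hit:
            return evals                  # runtime until target
    return None                           # unsuccessful within budget

random.seed(0)
print(benchmark_with_restarts(sphere, dim=3, target=0.5, budget=100_000))
```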
1705.07511
Yu-Ting Wang
Yu-Ting Wang, Jun Li, Rong Zheng, Dongmei Zhao
ARABIS: an Asynchronous Acoustic Indoor Positioning System for Mobile Devices
8 pages, 13 figures
null
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Acoustic ranging based indoor positioning solutions have the advantage of higher ranging accuracy and better compatibility with commercial-off-the-shelf consumer devices. However, similar to other time-domain based approaches using Time-of-Arrival and Time-Difference-of-Arrival, they suffer from performance degradation in the presence of multi-path propagation and low received signal-to-noise ratio (SNR) in indoor environments. In this paper, we improve upon our previous work on asynchronous acoustic indoor positioning and develop ARABIS, a robust and low-cost acoustic indoor positioning system (IPS) for mobile devices. We develop a low-cost acoustic board custom-designed to support large operational ranges and extensibility. To mitigate the effects of low SNR and multi-path propagation, we devise a robust algorithm that iteratively removes possible outliers by taking advantage of redundant TDoA estimates. Experiments have been carried out in two testbeds of sizes 10.67m*7.76m and 15m*15m, one in an academic building and one in a convention center. The proposed system achieves average and 95% quantile localization errors of 7.4cm and 16.0cm in the first testbed with 8 anchor nodes and average and 95% quantile localization errors of 20.4cm and 40.0cm in the second testbed with 4 anchor nodes only.
[ { "created": "Sun, 21 May 2017 21:35:06 GMT", "version": "v1" } ]
2017-05-23
[ [ "Wang", "Yu-Ting", "" ], [ "Li", "Jun", "" ], [ "Zheng", "Rong", "" ], [ "Zhao", "Dongmei", "" ] ]
Acoustic ranging based indoor positioning solutions have the advantage of higher ranging accuracy and better compatibility with commercial-off-the-shelf consumer devices. However, similar to other time-domain based approaches using Time-of-Arrival and Time-Difference-of-Arrival, they suffer from performance degradation in the presence of multi-path propagation and low received signal-to-noise ratio (SNR) in indoor environments. In this paper, we improve upon our previous work on asynchronous acoustic indoor positioning and develop ARABIS, a robust and low-cost acoustic indoor positioning system (IPS) for mobile devices. We develop a low-cost acoustic board custom-designed to support large operational ranges and extensibility. To mitigate the effects of low SNR and multi-path propagation, we devise a robust algorithm that iteratively removes possible outliers by taking advantage of redundant TDoA estimates. Experiments have been carried out in two testbeds of sizes 10.67m*7.76m and 15m*15m, one in an academic building and one in a convention center. The proposed system achieves average and 95% quantile localization errors of 7.4cm and 16.0cm in the first testbed with 8 anchor nodes and average and 95% quantile localization errors of 20.4cm and 40.0cm in the second testbed with 4 anchor nodes only.
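The outlier-removal idea can be illustrated with a small TDoA solver: Gauss-Newton least squares over hyperbolic residuals, iteratively dropping the worst-residual measurement. Geometry, noise levels, and the residual threshold below are illustrative assumptions, not ARABIS's actual parameters.

```python
# Sketch: TDoA localization by Gauss-Newton least squares, with
# iterative removal of the worst-residual measurement.
import numpy as np

def tdoa_residuals(p, anchors, ref, meas):
    d = np.linalg.norm(anchors - p, axis=1)
    return (d - np.linalg.norm(ref - p)) - meas

def gauss_newton(anchors, ref, meas, p0, iters=20):
    p = p0.astype(float)
    for _ in range(iters):
        d = np.linalg.norm(anchors - p, axis=1)
        d0 = np.linalg.norm(ref - p)
        J = (p - anchors) / d[:, None] - (p - ref) / d0   # Jacobian rows
        r = (d - d0) - meas
        p -= np.linalg.lstsq(J, r, rcond=None)[0]
    return p

def robust_locate(anchors, ref, meas, p0, thresh=0.3):
    idx = np.arange(len(meas))
    while True:
        p = gauss_newton(anchors[idx], ref, meas[idx], p0)
        r = np.abs(tdoa_residuals(p, anchors[idx], ref, meas[idx]))
        if r.max() < thresh or len(idx) <= 3:
            return p
        idx = np.delete(idx, r.argmax())   # drop worst outlier, retry

rng = np.random.default_rng(2)
ref = np.array([0.0, 0.0])
anchors = np.array([[10.0, 0.0], [0.0, 8.0], [10.0, 8.0], [5.0, 12.0]])
true_p = np.array([4.0, 3.0])
meas = np.linalg.norm(anchors - true_p, axis=1) - np.linalg.norm(ref - true_p)
meas += rng.normal(scale=0.02, size=len(meas))
meas[2] += 2.0                              # one multipath outlier
print(robust_locate(anchors, ref, meas, p0=np.array([5.0, 5.0])))
```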
1803.05494
Shubhra Aich
Shubhra Aich and Ian Stavness
Improving Object Counting with Heatmap Regulation
Code repository: https://github.com/littleaich/heatmap-regulation
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we propose a simple and effective way to improve one-look regression models for object counting from images. We use class activation map visualizations to illustrate the drawbacks of learning a pure one-look regression model for a counting task. Based on these insights, we enhance one-look regression counting models by regulating activation maps from the final convolution layer of the network with coarse ground-truth activation maps generated from simple dot annotations. We call this strategy heatmap regulation (HR). We show that this simple enhancement effectively suppresses false detections generated by the corresponding one-look baseline model and also improves the performance in terms of false negatives. Evaluations are performed on four different counting datasets --- two for car counting (CARPK, PUCPR+), one for crowd counting (WorldExpo) and another for biological cell counting (VGG-Cells). Adding HR to a simple VGG front-end improves performance on all these benchmarks compared to a simple one-look baseline model and results in state-of-the-art performance for car counting.
[ { "created": "Wed, 14 Mar 2018 19:52:43 GMT", "version": "v1" }, { "created": "Wed, 23 May 2018 21:43:47 GMT", "version": "v2" } ]
2018-05-25
[ [ "Aich", "Shubhra", "" ], [ "Stavness", "Ian", "" ] ]
In this paper, we propose a simple and effective way to improve one-look regression models for object counting from images. We use class activation map visualizations to illustrate the drawbacks of learning a pure one-look regression model for a counting task. Based on these insights, we enhance one-look regression counting models by regulating activation maps from the final convolution layer of the network with coarse ground-truth activation maps generated from simple dot annotations. We call this strategy heatmap regulation (HR). We show that this simple enhancement effectively suppresses false detections generated by the corresponding one-look baseline model and also improves the performance in terms of false negatives. Evaluations are performed on four different counting datasets --- two for car counting (CARPK, PUCPR+), one for crowd counting (WorldExpo) and another for biological cell counting (VGG-Cells). Adding HR to a simple VGG front-end improves performance on all these benchmarks compared to a simple one-look baseline model and results in state-of-the-art performance for car counting.
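A sketch of the HR loss, assuming the ground-truth activation map is rendered as one Gaussian bump per dot annotation; the shapes, the L1 form, and the weighting factor are illustrative choices, not the paper's exact settings.

```python
# Sketch of heatmap regulation: count-regression loss plus an L1 pull
# of the final activation map toward a coarse dot-annotation heatmap.
import numpy as np

def gt_heatmap(dots, shape, sigma=2.0):
    """Coarse heatmap: a Gaussian bump per annotated object center."""
    H, W = shape
    yy, xx = np.mgrid[0:H, 0:W]
    hm = np.zeros(shape)
    for (y, x) in dots:
        hm += np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / (2 * sigma ** 2))
    return hm

def hr_loss(pred_count, act_map, dots, lam=0.1):
    target = gt_heatmap(dots, act_map.shape)
    count_loss = (pred_count - len(dots)) ** 2      # one-look regression
    reg_loss = np.abs(act_map - target).mean()      # heatmap regulation
    return count_loss + lam * reg_loss

# Toy example: 3 annotated objects, a noisy activation map.
dots = [(8, 8), (20, 30), (40, 12)]
act = gt_heatmap(dots, (48, 48)) + np.random.default_rng(3).normal(
    scale=0.05, size=(48, 48))
print(hr_loss(pred_count=2.7, act_map=act, dots=dots))
```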
2305.03143
Gaia Saveri
Gaia Saveri and Luca Bortolussi
Towards Invertible Semantic-Preserving Embeddings of Logical Formulae
null
null
null
null
cs.AI cs.LG cs.LO
http://creativecommons.org/licenses/by/4.0/
Logic is the main formal language to perform automated reasoning, and it is further a human-interpretable language, at least for small formulae. Learning and optimising logic requirements and rules has always been an important problem in Artificial Intelligence. State-of-the-art Machine Learning (ML) approaches are mostly based on gradient descent optimisation in continuous spaces, while learning logic is framed in the discrete syntactic space of formulae. Using continuous optimisation to learn logic properties is a challenging problem, requiring to embed formulae in a continuous space in a meaningful way, i.e. preserving the semantics. Current methods are able to construct effective semantic-preserving embeddings via kernel methods (for linear temporal logic), but the map they define is not invertible. In this work we address this problem, learning how to invert such an embedding leveraging deep architectures based on the Graph Variational Autoencoder framework. We propose a novel model specifically designed for this setting, justifying our design choices through an extensive experimental evaluation. Reported results in the context of propositional logic are promising, and several challenges regarding learning invertible embeddings of formulae are highlighted and addressed.
[ { "created": "Wed, 3 May 2023 10:49:01 GMT", "version": "v1" } ]
2023-05-08
[ [ "Saveri", "Gaia", "" ], [ "Bortolussi", "Luca", "" ] ]
Logic is the main formal language to perform automated reasoning, and it is further a human-interpretable language, at least for small formulae. Learning and optimising logic requirements and rules has always been an important problem in Artificial Intelligence. State-of-the-art Machine Learning (ML) approaches are mostly based on gradient descent optimisation in continuous spaces, while learning logic is framed in the discrete syntactic space of formulae. Using continuous optimisation to learn logic properties is a challenging problem, requiring to embed formulae in a continuous space in a meaningful way, i.e. preserving the semantics. Current methods are able to construct effective semantic-preserving embeddings via kernel methods (for linear temporal logic), but the map they define is not invertible. In this work we address this problem, learning how to invert such an embedding leveraging deep architectures based on the Graph Variational Autoencoder framework. We propose a novel model specifically designed for this setting, justifying our design choices through an extensive experimental evaluation. Reported results in the context of propositional logic are promising, and several challenges regarding learning invertible embeddings of formulae are highlighted and addressed.
2403.10194
Sebastian Krebs
Sebastian Krebs and Tom Herter
Ultra-Wideband Positioning System Based on ESP32 and DWM3000 Modules
null
null
null
null
cs.RO
http://creativecommons.org/licenses/by-nc-sa/4.0/
In this paper, an Ultra-Wideband (UWB) positioning system is introduced that leverages six identical custom-designed boards, each featuring an ESP32 microcontroller and a DWM3000 module from Qorvo. The system is capable of achieving localization with an accuracy of up to 10 cm by utilizing Two-Way-Ranging (TWR) measurements between one designated tag and five anchor devices. The gathered distance measurements are subsequently processed by an Extended Kalman Filter (EKF) running locally on the tag board, enabling it to determine its own position, relying on fixed, a priori known positions of the anchor boards. This paper presents a comprehensive overview of the system's architecture, the key components, and the capabilities it offers for indoor positioning and tracking applications.
[ { "created": "Fri, 15 Mar 2024 10:57:09 GMT", "version": "v1" } ]
2024-03-18
[ [ "Krebs", "Sebastian", "" ], [ "Herter", "Tom", "" ] ]
In this paper, an Ultra-Wideband (UWB) positioning system is introduced that leverages six identical custom-designed boards, each featuring an ESP32 microcontroller and a DWM3000 module from Qorvo. The system is capable of achieving localization with an accuracy of up to 10 cm by utilizing Two-Way-Ranging (TWR) measurements between one designated tag and five anchor devices. The gathered distance measurements are subsequently processed by an Extended Kalman Filter (EKF) running locally on the tag board, enabling it to determine its own position, relying on fixed, a priori known positions of the anchor boards. This paper presents a comprehensive overview of the system's architecture, the key components, and the capabilities it offers for indoor positioning and tracking applications.
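A minimal sketch of the on-tag estimator: an EKF with a constant-velocity state, updated sequentially with range measurements to anchors at known positions. Noise covariances and geometry below are made-up values, not the system's calibrated ones.

```python
# Sketch: EKF position estimation from ranges to known anchors.
import numpy as np

dt = 0.1
F = np.block([[np.eye(2), dt * np.eye(2)],
              [np.zeros((2, 2)), np.eye(2)]])    # [x y vx vy] dynamics
Q = 0.01 * np.eye(4)                             # process noise (assumed)
R = 0.05 ** 2                                    # range noise variance

def ekf_step(x, P, anchors, ranges):
    x, P = F @ x, F @ P @ F.T + Q                # predict
    for a, z in zip(anchors, ranges):            # sequential range updates
        d = np.linalg.norm(x[:2] - a)
        H = np.zeros(4)
        H[:2] = (x[:2] - a) / d                  # Jacobian of ||pos - a||
        S = H @ P @ H + R
        K = P @ H / S                            # Kalman gain
        x = x + K * (z - d)
        P = P - np.outer(K, H @ P)
    return x, P

anchors = [np.array(a, float) for a in
           [(0, 0), (10, 0), (0, 10), (10, 10), (5, 12)]]
true_pos = np.array([3.0, 4.0])
rng = np.random.default_rng(4)
x, P = np.array([5.0, 5.0, 0.0, 0.0]), np.eye(4)
for _ in range(50):
    ranges = [np.linalg.norm(true_pos - a) + rng.normal(scale=0.05)
              for a in anchors]
    x, P = ekf_step(x, P, anchors, ranges)
print(x[:2])                                     # close to (3, 4)
```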
2307.13958
Zitong Yu
Zitong Yu, Rizhao Cai, Yawen Cui, Ajian Liu and Changsheng Chen
Visual Prompt Flexible-Modal Face Anti-Spoofing
arXiv admin note: text overlap with arXiv:2303.03369 by other authors
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recently, vision transformer based multimodal learning methods have been proposed to improve the robustness of face anti-spoofing (FAS) systems. However, multimodal face data collected from the real world is often imperfect due to missing modalities from various imaging sensors. Recently, flexible-modal FAS~\cite{yu2023flexible} has attracted more attention, which aims to develop a unified multimodal FAS model using complete multimodal face data but is insensitive to test-time missing modalities. In this paper, we tackle one main challenge in flexible-modal FAS, i.e., when a missing modality occurs either during training or testing in real-world situations. Inspired by the recent success of prompt learning in language models, we propose \textbf{V}isual \textbf{P}rompt flexible-modal \textbf{FAS} (VP-FAS), which learns modal-relevant prompts to adapt the frozen pre-trained foundation model to the downstream flexible-modal FAS task. Specifically, both vanilla visual prompts and residual contextual prompts are plugged into multimodal transformers to handle general missing-modality cases, while requiring less than 4\% of the learnable parameters compared to training the entire model. Furthermore, missing-modality regularization is proposed to force models to learn consistent multimodal feature embeddings when missing partial modalities. Extensive experiments conducted on two multimodal FAS benchmark datasets demonstrate the effectiveness of our VP-FAS framework, which improves performance under various missing-modality cases while alleviating the requirement of heavy model re-training.
[ { "created": "Wed, 26 Jul 2023 05:06:41 GMT", "version": "v1" } ]
2023-07-27
[ [ "Yu", "Zitong", "" ], [ "Cai", "Rizhao", "" ], [ "Cui", "Yawen", "" ], [ "Liu", "Ajian", "" ], [ "Chen", "Changsheng", "" ] ]
Recently, vision transformer based multimodal learning methods have been proposed to improve the robustness of face anti-spoofing (FAS) systems. However, multimodal face data collected from the real world is often imperfect due to missing modalities from various imaging sensors. Recently, flexible-modal FAS~\cite{yu2023flexible} has attracted more attention, which aims to develop a unified multimodal FAS model using complete multimodal face data but is insensitive to test-time missing modalities. In this paper, we tackle one main challenge in flexible-modal FAS, i.e., when a missing modality occurs either during training or testing in real-world situations. Inspired by the recent success of prompt learning in language models, we propose \textbf{V}isual \textbf{P}rompt flexible-modal \textbf{FAS} (VP-FAS), which learns modal-relevant prompts to adapt the frozen pre-trained foundation model to the downstream flexible-modal FAS task. Specifically, both vanilla visual prompts and residual contextual prompts are plugged into multimodal transformers to handle general missing-modality cases, while requiring less than 4\% of the learnable parameters compared to training the entire model. Furthermore, missing-modality regularization is proposed to force models to learn consistent multimodal feature embeddings when missing partial modalities. Extensive experiments conducted on two multimodal FAS benchmark datasets demonstrate the effectiveness of our VP-FAS framework, which improves performance under various missing-modality cases while alleviating the requirement of heavy model re-training.
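The vanilla-prompt part of the design can be sketched in a few lines: learnable tokens are prepended to the input of a frozen backbone, so only the prompts train. The toy backbone and sizes below are assumptions; residual contextual prompts and VP-FAS's modality handling are not shown.

```python
# Minimal sketch of learnable prompt tokens on a frozen transformer.
import torch
import torch.nn as nn

class PromptedEncoder(nn.Module):
    def __init__(self, encoder, embed_dim, n_prompts=8):
        super().__init__()
        self.encoder = encoder
        for p in self.encoder.parameters():
            p.requires_grad = False                 # keep backbone frozen
        self.prompts = nn.Parameter(torch.randn(n_prompts, embed_dim) * 0.02)

    def forward(self, tokens):                      # tokens: (B, N, D)
        B = tokens.shape[0]
        prm = self.prompts.unsqueeze(0).expand(B, -1, -1)
        return self.encoder(torch.cat([prm, tokens], dim=1))

# Toy frozen backbone: a small transformer encoder.
layer = nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True)
model = PromptedEncoder(nn.TransformerEncoder(layer, num_layers=2), 64)
out = model(torch.randn(2, 16, 64))
print(out.shape)                                    # (2, 24, 64)
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(trainable)                                    # only the prompt tokens
```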
1805.08695
Panagiotis Mousouliotis
Panagiotis G. Mousouliotis, Loukas P. Petrou
SqueezeJet: High-level Synthesis Accelerator Design for Deep Convolutional Neural Networks
The final publication is available at Springer via https://doi.org/10.1007/978-3-319-78890-6_5
null
10.1007/978-3-319-78890-6_5
null
cs.CV cs.AR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep convolutional neural networks have dominated the pattern recognition scene by providing much more accurate solutions in computer vision problems such as object recognition and object detection. Most of these solutions come at a huge computational cost, requiring billions of multiply-accumulate operations and, thus, making their use quite challenging in real-time applications that run on embedded mobile (resource- and power-constrained) hardware. This work presents the architecture, the high-level synthesis design, and the implementation of SqueezeJet, an FPGA accelerator for the inference phase of the SqueezeNet DCNN architecture, which is designed specifically for use in embedded systems. Results show that SqueezeJet can achieve a 15.16 times speed-up compared to the software implementation of SqueezeNet running on an embedded mobile processor with less than 1% drop in top-5 accuracy.
[ { "created": "Sun, 6 May 2018 21:56:33 GMT", "version": "v1" } ]
2018-11-27
[ [ "Mousouliotis", "Panagiotis G.", "" ], [ "Petrou", "Loukas P.", "" ] ]
Deep convolutional neural networks have dominated the pattern recognition scene by providing much more accurate solutions in computer vision problems such as object recognition and object detection. Most of these solutions come at a huge computational cost, requiring billions of multiply-accumulate operations and, thus, making their use quite challenging in real-time applications that run on embedded mobile (resource- and power-constrained) hardware. This work presents the architecture, the high-level synthesis design, and the implementation of SqueezeJet, an FPGA accelerator for the inference phase of the SqueezeNet DCNN architecture, which is designed specifically for use in embedded systems. Results show that SqueezeJet can achieve a 15.16 times speed-up compared to the software implementation of SqueezeNet running on an embedded mobile processor with less than 1% drop in top-5 accuracy.
2101.12591
Carlo A. Furia
Carlo A. Furia, Richard Torkar, Robert Feldt
Applying Bayesian Analysis Guidelines to Empirical Software Engineering Data: The Case of Programming Languages and Code Quality
null
null
null
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Statistical analysis is the tool of choice to turn data into information, and then information into empirical knowledge. To be valid, the process that goes from data to knowledge should be supported by detailed, rigorous guidelines, which help ferret out issues with the data or model, and lead to qualified results that strike a reasonable balance between generality and practical relevance. Such guidelines are being developed by statisticians to support the latest techniques for Bayesian data analysis. In this article, we frame these guidelines in a way that is apt to empirical research in software engineering. To demonstrate the guidelines in practice, we apply them to reanalyze a GitHub dataset about code quality in different programming languages. The dataset's original analysis (Ray et al., 2014) and a critical reanalysis (Berger et al., 2019) have attracted considerable attention -- in no small part because they target a topic (the impact of different programming languages) on which strong opinions abound. The goals of our reanalysis are largely orthogonal to this previous work, as we are concerned with demonstrating, on data in an interesting domain, how to build a principled Bayesian data analysis and to showcase some of its benefits. In the process, we will also shed light on some critical aspects of the analyzed data and of the relationship between programming languages and code quality. The high-level conclusions of our exercise will be that Bayesian statistical techniques can be applied to analyze software engineering data in a way that is principled, flexible, and leads to convincing results that inform the state of the art while highlighting the boundaries of its validity. The guidelines can support building solid statistical analyses and connecting their results, and hence help buttress continued progress in empirical software engineering research.
[ { "created": "Fri, 29 Jan 2021 14:00:18 GMT", "version": "v1" }, { "created": "Wed, 28 Jul 2021 11:59:17 GMT", "version": "v2" } ]
2021-07-29
[ [ "Furia", "Carlo A.", "" ], [ "Torkar", "Richard", "" ], [ "Feldt", "Robert", "" ] ]
Statistical analysis is the tool of choice to turn data into information, and then information into empirical knowledge. To be valid, the process that goes from data to knowledge should be supported by detailed, rigorous guidelines, which help ferret out issues with the data or model, and lead to qualified results that strike a reasonable balance between generality and practical relevance. Such guidelines are being developed by statisticians to support the latest techniques for Bayesian data analysis. In this article, we frame these guidelines in a way that is apt to empirical research in software engineering. To demonstrate the guidelines in practice, we apply them to reanalyze a GitHub dataset about code quality in different programming languages. The dataset's original analysis (Ray et al., 2014) and a critical reanalysis (Berger et al., 2019) have attracted considerable attention -- in no small part because they target a topic (the impact of different programming languages) on which strong opinions abound. The goals of our reanalysis are largely orthogonal to this previous work, as we are concerned with demonstrating, on data in an interesting domain, how to build a principled Bayesian data analysis and to showcase some of its benefits. In the process, we will also shed light on some critical aspects of the analyzed data and of the relationship between programming languages and code quality. The high-level conclusions of our exercise will be that Bayesian statistical techniques can be applied to analyze software engineering data in a way that is principled, flexible, and leads to convincing results that inform the state of the art while highlighting the boundaries of its validity. The guidelines can support building solid statistical analyses and connecting their results, and hence help buttress continued progress in empirical software engineering research.
2212.14126
Justus Fasse
Justus Fasse and Bart Jacobs
Modular termination verification with a higher-order concurrent separation logic (Intermediate report)
null
null
null
null
cs.LO cs.PL
http://creativecommons.org/licenses/by/4.0/
We report on intermediate results of our research on reasoning about liveness properties in addition to deep correctness properties for an imperative, concurrent programming language with a higher-order store. At present, we focus on one particular liveness property, namely termination. By guaranteeing termination we can strengthen statements of partial correctness to total correctness. This is achieved by the classic approach of turning termination into a safety property. In particular we extend the programming language under consideration with call permissions, which have been shown to enable modular reasoning about termination. Atomic blocks are added to increase the expressiveness of our call-permission-based approach. Our work builds on top of Iris -- a foundational, machine-checked, higher-order concurrent separation logic framework -- without modifying it. With these additions we are able to modularly reason about the termination of concurrent, but non-blocking algorithms. Our additions to the programming language under consideration preserve Iris' ability to reason about helping and prophecies. As an example, we apply the current system to an existing case study for a lock-free concurrent stack with helping that has been proven in Iris. Finally, we sketch the next steps to scale our approach to blocking concurrency.
[ { "created": "Wed, 28 Dec 2022 23:50:20 GMT", "version": "v1" } ]
2023-01-02
[ [ "Fasse", "Justus", "" ], [ "Jacobs", "Bart", "" ] ]
We report on intermediate results of our research on reasoning about liveness properties in addition to deep correctness properties for an imperative, concurrent programming language with a higher-order store. At present, we focus on one particular liveness property, namely termination. By guaranteeing termination we can strengthen statements of partial correctness to total correctness. This is achieved by the classic approach of turning termination into a safety property. In particular we extend the programming language under consideration with call permissions, which have been shown to enable modular reasoning about termination. Atomic blocks are added to increase the expressiveness of our call-permission-based approach. Our work builds on top of Iris -- a foundational, machine-checked, higher-order concurrent separation logic framework -- without modifying it. With these additions we are able to modularly reason about the termination of concurrent, but non-blocking algorithms. Our additions to the programming language under consideration preserve Iris' ability to reason about helping and prophecies. As an example, we apply the current system to an existing case study for a lock-free concurrent stack with helping that has been proven in Iris. Finally, we sketch the next steps to scale our approach to blocking concurrency.
2406.14294
Pooneh Mousavi
Pooneh Mousavi, Luca Della Libera, Jarod Duret, Artem Ploujnikov, Cem Subakan, Mirco Ravanelli
DASB - Discrete Audio and Speech Benchmark
9 pages, 5 tables
null
null
null
cs.SD cs.AI eess.AS
http://creativecommons.org/licenses/by/4.0/
Discrete audio tokens have recently gained considerable attention for their potential to connect audio and language processing, enabling the creation of modern multimodal large language models. Ideal audio tokens must effectively preserve phonetic and semantic content along with paralinguistic information, speaker identity, and other details. While several types of audio tokens have been recently proposed, identifying the optimal tokenizer for various tasks is challenging due to the inconsistent evaluation settings in existing studies. To address this gap, we release the Discrete Audio and Speech Benchmark (DASB), a comprehensive leaderboard for benchmarking discrete audio tokens across a wide range of discriminative tasks, including speech recognition, speaker identification and verification, emotion recognition, keyword spotting, and intent classification, as well as generative tasks such as speech enhancement, separation, and text-to-speech. Our results show that, on average, semantic tokens outperform compression tokens across most discriminative and generative tasks. However, the performance gap between semantic tokens and standard continuous representations remains substantial, highlighting the need for further research in this field.
[ { "created": "Thu, 20 Jun 2024 13:23:27 GMT", "version": "v1" }, { "created": "Fri, 21 Jun 2024 17:07:17 GMT", "version": "v2" } ]
2024-06-25
[ [ "Mousavi", "Pooneh", "" ], [ "Della Libera", "Luca", "" ], [ "Duret", "Jarod", "" ], [ "Ploujnikov", "Artem", "" ], [ "Subakan", "Cem", "" ], [ "Ravanelli", "Mirco", "" ] ]
Discrete audio tokens have recently gained considerable attention for their potential to connect audio and language processing, enabling the creation of modern multimodal large language models. Ideal audio tokens must effectively preserve phonetic and semantic content along with paralinguistic information, speaker identity, and other details. While several types of audio tokens have been recently proposed, identifying the optimal tokenizer for various tasks is challenging due to the inconsistent evaluation settings in existing studies. To address this gap, we release the Discrete Audio and Speech Benchmark (DASB), a comprehensive leaderboard for benchmarking discrete audio tokens across a wide range of discriminative tasks, including speech recognition, speaker identification and verification, emotion recognition, keyword spotting, and intent classification, as well as generative tasks such as speech enhancement, separation, and text-to-speech. Our results show that, on average, semantic tokens outperform compression tokens across most discriminative and generative tasks. However, the performance gap between semantic tokens and standard continuous representations remains substantial, highlighting the need for further research in this field.
1710.07535
Raphael Gontijo Lopes
Raphael Gontijo Lopes, Stefano Fenu, Thad Starner
Data-Free Knowledge Distillation for Deep Neural Networks
Accepted to NIPS 2017 Workshop on Learning with Limited Data. Under review at AISTATS 2018
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent advances in model compression have provided procedures for compressing large neural networks to a fraction of their original size while retaining most if not all of their accuracy. However, all of these approaches rely on access to the original training set, which might not always be possible if the network to be compressed was trained on a very large dataset, or on a dataset whose release poses privacy or safety concerns as may be the case for biometrics tasks. We present a method for data-free knowledge distillation, which is able to compress deep neural networks trained on large-scale datasets to a fraction of their size leveraging only some extra metadata to be provided with a pretrained model release. We also explore different kinds of metadata that can be used with our method, and discuss tradeoffs involved in using each of them.
[ { "created": "Thu, 19 Oct 2017 16:04:05 GMT", "version": "v1" }, { "created": "Thu, 23 Nov 2017 16:28:48 GMT", "version": "v2" } ]
2017-11-27
[ [ "Lopes", "Raphael Gontijo", "" ], [ "Fenu", "Stefano", "" ], [ "Starner", "Thad", "" ] ]
Recent advances in model compression have provided procedures for compressing large neural networks to a fraction of their original size while retaining most if not all of their accuracy. However, all of these approaches rely on access to the original training set, which might not always be possible if the network to be compressed was trained on a very large dataset, or on a dataset whose release poses privacy or safety concerns as may be the case for biometrics tasks. We present a method for data-free knowledge distillation, which is able to compress deep neural networks trained on large-scale datasets to a fraction of their size leveraging only some extra metadata to be provided with a pretrained model release. We also explore different kinds of metadata that can be used with our method, and discuss tradeoffs involved in using each of them.
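One plausible instantiation of the metadata idea, sketched below: record per-layer mean activations at release time, optimize synthetic inputs so the teacher reproduces them, then distill the student on those inputs. The architectures, the choice of statistic, and all hyperparameters are illustrative assumptions, not the paper's exact procedure.

```python
# Sketch of data-free distillation from activation-statistics metadata.
import torch
import torch.nn as nn

torch.manual_seed(0)
teacher = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 10))
student = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 10))
teacher.requires_grad_(False)

# Metadata that would ship with the model: mean hidden activation.
with torch.no_grad():
    real = torch.randn(256, 20)                 # stand-in for training data
    stored_mean = torch.relu(teacher[0](real)).mean(0)

# 1) Reconstruct synthetic inputs matching the stored statistics.
x = torch.randn(256, 20, requires_grad=True)
opt = torch.optim.Adam([x], lr=0.05)
for _ in range(300):
    opt.zero_grad()
    loss = ((torch.relu(teacher[0](x)).mean(0) - stored_mean) ** 2).sum()
    loss.backward()
    opt.step()

# 2) Distill: student mimics teacher logits on the synthetic inputs.
opt_s = torch.optim.Adam(student.parameters(), lr=1e-3)
for _ in range(300):
    opt_s.zero_grad()
    kd = nn.functional.mse_loss(student(x.detach()), teacher(x.detach()))
    kd.backward()
    opt_s.step()
print(kd.item())
```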
1805.09423
Mart\'in Farach-Colton
Alex Conway, Martin Farach-Colton, Philip Shilane
Optimal Hashing in External Memory
null
null
null
null
cs.DS
http://creativecommons.org/publicdomain/zero/1.0/
Hash tables are a ubiquitous class of dictionary data structures. However, standard hash table implementations do not translate well into the external memory model, because they do not incorporate locality for insertions. Iacono and Patrascu established an update/query tradeoff curve for external hash tables: a hash table that performs insertions in $O(\lambda/B)$ amortized IOs requires $\Omega(\log_\lambda N)$ expected IOs for queries, where $N$ is the number of items that can be stored in the data structure, $B$ is the size of a memory transfer, $M$ is the size of memory, and $\lambda$ is a tuning parameter. They provide a hashing data structure that meets this curve for $\lambda$ that is $\Omega(\log\log M + \log_M N)$. Their data structure, which we call an \defn{IP hash table}, is complicated and, to the best of our knowledge, has not been implemented. In this paper, we present a new and much simpler optimal external memory hash table, the \defn{Bundle of Arrays Hash Table} (BOA). BOAs are based on size-tiered LSMs, a well-studied data structure, and are almost as easy to implement. The BOA is optimal for a narrower range of $\lambda$. However, the simplicity of BOAs allows them to be readily modified to achieve the following results: \begin{itemize} \item A new external memory data structure, the \defn{Bundle of Trees Hash Table} (BOT), that matches the performance of the IP hash table, while retaining some of the simplicity of the BOAs. \item The \defn{cache-oblivious Bundle of Trees Hash Table} (COBOT), the first cache-oblivious hash table. This data structure matches the optimality of BOTs and IP hash tables over the same range of $\lambda$. \end{itemize}
[ { "created": "Wed, 23 May 2018 21:00:47 GMT", "version": "v1" } ]
2018-05-25
[ [ "Conway", "Alex", "" ], [ "Farach-Colton", "Martin", "" ], [ "Shilane", "Philip", "" ] ]
Hash tables are a ubiquitous class of dictionary data structures. However, standard hash table implementations do not translate well into the external memory model, because they do not incorporate locality for insertions. Iacono and Patrascu established an update/query tradeoff curve for external hash tables: a hash table that performs insertions in $O(\lambda/B)$ amortized IOs requires $\Omega(\log_\lambda N)$ expected IOs for queries, where $N$ is the number of items that can be stored in the data structure, $B$ is the size of a memory transfer, $M$ is the size of memory, and $\lambda$ is a tuning parameter. They provide a hashing data structure that meets this curve for $\lambda$ that is $\Omega(\log\log M + \log_M N)$. Their data structure, which we call an \defn{IP hash table}, is complicated and, to the best of our knowledge, has not been implemented. In this paper, we present a new and much simpler optimal external memory hash table, the \defn{Bundle of Arrays Hash Table} (BOA). BOAs are based on size-tiered LSMs, a well-studied data structure, and are almost as easy to implement. The BOA is optimal for a narrower range of $\lambda$. However, the simplicity of BOAs allows them to be readily modified to achieve the following results: \begin{itemize} \item A new external memory data structure, the \defn{Bundle of Trees Hash Table} (BOT), that matches the performance of the IP hash table, while retaining some of the simplicity of the BOAs. \item The \defn{cache-oblivious Bundle of Trees Hash Table} (COBOT), the first cache-oblivious hash table. This data structure matches the optimality of BOTs and IP hash tables over the same range of $\lambda$. \end{itemize}
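The size-tiered mechanism behind BOAs can be sketched in memory: inserts land in a buffer that is flushed as a sorted run, equal-sized runs are merged, and queries binary-search each run. This toy ignores hashing, IO counting, and updates (it assumes each key is inserted once), all of which the real structure handles.

```python
# Toy size-tiered "bundle of arrays"; assumes each key is unique.
import bisect

class BundleOfArrays:
    def __init__(self, buffer_size=4):
        self.buffer = {}
        self.buffer_size = buffer_size
        self.runs = []                       # sorted lists of (key, value)

    def insert(self, key, value):
        self.buffer[key] = value
        if len(self.buffer) >= self.buffer_size:
            self._flush()

    def _flush(self):
        self.runs.append(sorted(self.buffer.items()))
        self.buffer = {}
        self.runs.sort(key=len)
        merged = True                        # merge equal-sized runs
        while merged:
            merged = False
            for i in range(len(self.runs) - 1):
                if len(self.runs[i]) == len(self.runs[i + 1]):
                    second = self.runs.pop(i + 1)
                    first = self.runs.pop(i)
                    # A real implementation merges linearly and counts IOs.
                    self.runs.insert(i, sorted(first + second))
                    merged = True
                    break

    def get(self, key):
        if key in self.buffer:
            return self.buffer[key]
        for run in self.runs:                # binary search in each run
            i = bisect.bisect_left(run, (key,))
            if i < len(run) and run[i][0] == key:
                return run[i][1]
        return None

boa = BundleOfArrays()
for k in range(20):
    boa.insert(f"k{k:02d}", k)
print(boa.get("k07"), boa.get("nope"))       # 7 None
```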
1711.06238
Rajarshee Mitra
Rajarshee Mitra
A Generative Approach to Question Answering
null
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Question Answering has come a long way, from answer sentence selection and relational QA to reading comprehension. We shift our attention to generative question answering (gQA), by which we enable machines to read passages and answer questions by learning to generate the answers. We frame the problem as a generative task where the encoder is a network that models the relationship between the question and the passage and encodes them into a vector, thus facilitating the decoder to directly form an abstraction of the answer. Failing to retain facts and making repetitions are common mistakes that affect the overall legibility of answers. To counter these issues, we employ a copying mechanism and maintain a coverage vector in our model, respectively. Our results on MS-MARCO demonstrate its superiority over baselines, and we also show qualitative examples where we improved in terms of correctness and readability.
[ { "created": "Thu, 16 Nov 2017 18:34:16 GMT", "version": "v1" }, { "created": "Sat, 7 Jul 2018 13:37:40 GMT", "version": "v2" } ]
2018-07-10
[ [ "Mitra", "Rajarshee", "" ] ]
Question Answering has come a long way, from answer sentence selection and relational QA to reading comprehension. We shift our attention to generative question answering (gQA), by which we enable machines to read passages and answer questions by learning to generate the answers. We frame the problem as a generative task where the encoder is a network that models the relationship between the question and the passage and encodes them into a vector, thus facilitating the decoder to directly form an abstraction of the answer. Failing to retain facts and making repetitions are common mistakes that affect the overall legibility of answers. To counter these issues, we employ a copying mechanism and maintain a coverage vector in our model, respectively. Our results on MS-MARCO demonstrate its superiority over baselines, and we also show qualitative examples where we improved in terms of correctness and readability.
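The coverage idea can be shown directly: keep a running sum of past attention distributions and penalize attention mass that re-visits already-covered source positions, in the style of coverage losses from pointer-generator models. The attention matrices below are toy numbers.

```python
# Sketch of a coverage penalty that discourages repetition.
import numpy as np

def coverage_loss(attn_steps):
    """attn_steps: (T, S) attention over S source tokens at T steps."""
    coverage = np.zeros(attn_steps.shape[1])
    total = 0.0
    for attn in attn_steps:
        total += np.minimum(attn, coverage).sum()   # overlap penalty
        coverage += attn                            # update coverage
    return total

repetitive = np.array([[0.9, 0.1, 0.0],
                       [0.9, 0.1, 0.0],
                       [0.9, 0.1, 0.0]])
spread = np.array([[0.9, 0.1, 0.0],
                   [0.1, 0.8, 0.1],
                   [0.0, 0.1, 0.9]])
print(coverage_loss(repetitive))   # high: attends the same token
print(coverage_loss(spread))       # low: attention moves on
```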
1708.01643
Adebayo Omotosho Dr
Adebayo Omotosho, Justice Emuoyibofarhe, Christoph Meinel
Ensuring patients privacy in a cryptographic-based-electronic health records using bio-cryptography
null
International Journal of Electronic Healthcare (IJEH), Vol. 9, No. 4, pp.227 - 254 (2017)
10.1504/IJEH.2017.10003030
null
cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Several recent works have proposed and implemented cryptography as a means to preserve the privacy and security of patients' health data. Nevertheless, the weakest point of electronic health record (EHR) systems that rely on these cryptographic schemes is key management. Thus, this paper presents the development of a privacy and security system for cryptography-based EHRs that takes advantage of the uniqueness of fingerprint and iris characteristic features to secure cryptographic keys in a bio-cryptography framework. The results of the system evaluation showed significant improvements in terms of time efficiency of this approach to cryptography-based EHRs. Both the fuzzy vault and fuzzy commitment demonstrated a false acceptance rate (FAR) of 0%, which reduces the likelihood of imposters gaining successful access to the keys protecting patients' protected health information. This result also justifies the feasibility of implementing fuzzy key binding schemes in real applications, especially the fuzzy vault, which demonstrated better performance during key reconstruction.
[ { "created": "Wed, 26 Jul 2017 22:11:23 GMT", "version": "v1" } ]
2017-08-09
[ [ "Omotosho", "Adebayo", "" ], [ "Emuoyibofarhe", "Justice", "" ], [ "Meinel", "Christoph", "" ] ]
Several recent works have proposed and implemented cryptography as a means to preserve the privacy and security of patients' health data. Nevertheless, the weakest point of electronic health record (EHR) systems that rely on these cryptographic schemes is key management. Thus, this paper presents the development of a privacy and security system for cryptography-based EHRs that takes advantage of the uniqueness of fingerprint and iris characteristic features to secure cryptographic keys in a bio-cryptography framework. The results of the system evaluation showed significant improvements in terms of time efficiency of this approach to cryptography-based EHRs. Both the fuzzy vault and fuzzy commitment demonstrated a false acceptance rate (FAR) of 0%, which reduces the likelihood of imposters gaining successful access to the keys protecting patients' protected health information. This result also justifies the feasibility of implementing fuzzy key binding schemes in real applications, especially the fuzzy vault, which demonstrated better performance during key reconstruction.
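For the key-binding step, a toy fuzzy commitment conveys the mechanism: XOR an error-correcting encoding of a random key with the enrollment biometric bits and store only the offset plus a hash of the key, so a slightly different fresh reading still recovers it. The repetition code and bit lengths are illustrative; real deployments use stronger codes and proper feature extraction.

```python
# Toy fuzzy commitment with a repetition code (illustrative only).
import hashlib
import random

R = 5                                            # repetition factor

def encode(bits):                                # repetition-code encode
    return [b for b in bits for _ in range(R)]

def decode(bits):                                # majority vote per block
    return [int(sum(bits[i:i + R]) > R // 2) for i in range(0, len(bits), R)]

def xor(a, b):
    return [x ^ y for x, y in zip(a, b)]

def commit(biometric, key):
    offset = xor(encode(key), biometric)         # hides the key
    digest = hashlib.sha256(bytes(key)).hexdigest()
    return offset, digest                        # both safe to store

def open_commitment(biometric, offset, digest):
    key = decode(xor(offset, biometric))         # errors corrected here
    return key if hashlib.sha256(bytes(key)).hexdigest() == digest else None

random.seed(5)
key = [random.randint(0, 1) for _ in range(16)]
enrolled = [random.randint(0, 1) for _ in range(16 * R)]
offset, digest = commit(enrolled, key)

fresh = enrolled[:]
for i in (0, 17, 34, 51):                        # a few spread-out bit errors
    fresh[i] ^= 1
print(open_commitment(fresh, offset, digest) == key)   # True
```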
2309.03220
Louis Rosenberg PhD
Louis Rosenberg, Gregg Willcox, Hans Schumann, Miles Bader, Ganesh Mani, Kokoro Sagae, Devang Acharya, Yuxin Zheng, Andrew Kim, Jialing Deng
Conversational Swarm Intelligence, a Pilot Study
Pending for conference, Collective Intelligence 2023 (ACM)
null
null
null
cs.HC cs.NE
http://creativecommons.org/licenses/by-nc-nd/4.0/
Conversational Swarm Intelligence (CSI) is a new method for enabling large human groups to hold real-time networked conversations using a technique modeled on the dynamics of biological swarms. Through the novel use of conversational agents powered by Large Language Models (LLMs), the CSI structure simultaneously enables local dialog among small deliberative groups and global propagation of conversational content across a larger population. In this way, CSI combines the benefits of small-group deliberative reasoning and large-scale collective intelligence. In this pilot study, participants deliberating in conversational swarms (via text chat) (a) produced 30% more contributions (p<0.05) than participants deliberating in a standard centralized chat room and (b) demonstrated 7.2% less variance in contribution quantity. These results indicate that users contributed more content and participated more evenly when using the CSI structure.
[ { "created": "Thu, 31 Aug 2023 17:51:02 GMT", "version": "v1" } ]
2023-09-08
[ [ "Rosenberg", "Louis", "" ], [ "Willcox", "Gregg", "" ], [ "Schumann", "Hans", "" ], [ "Bader", "Miles", "" ], [ "Mani", "Ganesh", "" ], [ "Sagae", "Kokoro", "" ], [ "Acharya", "Devang", "" ], [ "Zheng", "Yuxin", "" ], [ "Kim", "Andrew", "" ], [ "Deng", "Jialing", "" ] ]
Conversational Swarm Intelligence (CSI) is a new method for enabling large human groups to hold real-time networked conversations using a technique modeled on the dynamics of biological swarms. Through the novel use of conversational agents powered by Large Language Models (LLMs), the CSI structure simultaneously enables local dialog among small deliberative groups and global propagation of conversational content across a larger population. In this way, CSI combines the benefits of small-group deliberative reasoning and large-scale collective intelligence. In this pilot study, participants deliberating in conversational swarms (via text chat) (a) produced 30% more contributions (p<0.05) than participants deliberating in a standard centralized chat room and (b) demonstrated 7.2% less variance in contribution quantity. These results indicate that users contributed more content and participated more evenly when using the CSI structure.
1507.08717
EPTCS
Christoph Benzm\"uller (Freie Universit\"at Berlin, Germany), Maximilian Claus (Freie Universit\"at Berlin, Germany), Nik Sultana (Cambridge University, UK)
Systematic Verification of the Modal Logic Cube in Isabelle/HOL
In Proceedings PxTP 2015, arXiv:1507.08375
EPTCS 186, 2015, pp. 27-41
10.4204/EPTCS.186.5
null
cs.LO cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present an automated verification of the well-known modal logic cube in Isabelle/HOL, in which we prove the inclusion relations between the cube's logics using automated reasoning tools. Prior work addresses this problem but without restriction to the modal logic cube, and using encodings in first-order logic in combination with first-order automated theorem provers. In contrast, our solution is more elegant, transparent and effective. It employs an embedding of quantified modal logic in classical higher-order logic. Automated reasoning tools, such as Sledgehammer with LEO-II, Satallax and CVC4, Metis and Nitpick, are employed to achieve full automation. Though successful, the experiments also motivate some technical improvements in the Isabelle/HOL tool.
[ { "created": "Fri, 31 Jul 2015 00:58:44 GMT", "version": "v1" } ]
2015-08-03
[ [ "Benzmüller", "Christoph", "", "Freie Universität Berlin, Germany" ], [ "Claus", "Maximilian", "", "Freie Universität Berlin, Germany" ], [ "Sultana", "Nik", "", "Cambridge University, UK" ] ]
We present an automated verification of the well-known modal logic cube in Isabelle/HOL, in which we prove the inclusion relations between the cube's logics using automated reasoning tools. Prior work addresses this problem but without restriction to the modal logic cube, and using encodings in first-order logic in combination with first-order automated theorem provers. In contrast, our solution is more elegant, transparent and effective. It employs an embedding of quantified modal logic in classical higher-order logic. Automated reasoning tools, such as Sledgehammer with LEO-II, Satallax and CVC4, Metis and Nitpick, are employed to achieve full automation. Though successful, the experiments also motivate some technical improvements in the Isabelle/HOL tool.
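The semantic side of such inclusions can be spot-checked with plain Kripke-model checking, e.g. that axiom T ($\Box p \to p$) is valid on reflexive frames but not in general, which is one edge of the cube. The sketch below is ordinary model checking in Python, not the higher-order embedding or the Isabelle/HOL tooling the paper uses.

```python
# Sketch: Kripke-frame validity checks for modal formulas.
from itertools import product

def holds(frm, worlds, R, val, w):
    op = frm[0]
    if op == 'p':
        return w in val
    if op == 'not':
        return not holds(frm[1], worlds, R, val, w)
    if op == 'imp':
        return (not holds(frm[1], worlds, R, val, w)
                or holds(frm[2], worlds, R, val, w))
    if op == 'box':
        return all(holds(frm[1], worlds, R, val, v)
                   for v in worlds if (w, v) in R)

def valid_on_frame(frm, worlds, R):
    # Valid iff true at every world under every valuation of p.
    for bits in product([0, 1], repeat=len(worlds)):
        val = {w for w, b in zip(worlds, bits) if b}
        if not all(holds(frm, worlds, R, val, w) for w in worlds):
            return False
    return True

T = ('imp', ('box', ('p',)), ('p',))           # box p -> p
worlds = [0, 1, 2]
irreflexive = {(0, 1), (1, 2)}
reflexive = irreflexive | {(w, w) for w in worlds}
print(valid_on_frame(T, worlds, irreflexive))  # False: T can fail
print(valid_on_frame(T, worlds, reflexive))    # True: reflexivity gives T
```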
cs/0508059
Arindam Mitra
Arindam Mitra
Honesty can be the best policy within quantum mechanics
One of the referees (Phys. Rev. Lett.) observed that the manuscript "deserves to be widely read and analyzed". Acknowledgement is due.
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Honesty has never been scientifically proven to be the best policy in any case. It is pointed out that only an honest person can prevent a dishonest partner from biasing the outcome of quantum coin tossing.
[ { "created": "Thu, 11 Aug 2005 15:59:46 GMT", "version": "v1" }, { "created": "Wed, 19 Jul 2006 15:34:40 GMT", "version": "v10" }, { "created": "Tue, 17 Oct 2006 15:34:35 GMT", "version": "v11" }, { "created": "Thu, 16 Nov 2006 15:33:52 GMT", "version": "v12" }, { "created": "Wed, 22 Nov 2006 16:01:02 GMT", "version": "v13" }, { "created": "Fri, 24 Nov 2006 15:31:57 GMT", "version": "v14" }, { "created": "Wed, 31 Jan 2007 12:52:40 GMT", "version": "v15" }, { "created": "Tue, 1 May 2007 14:44:21 GMT", "version": "v16" }, { "created": "Tue, 20 Nov 2007 15:26:03 GMT", "version": "v17" }, { "created": "Tue, 19 Feb 2008 13:09:47 GMT", "version": "v18" }, { "created": "Sat, 2 Aug 2008 14:21:31 GMT", "version": "v19" }, { "created": "Tue, 23 Aug 2005 15:21:39 GMT", "version": "v2" }, { "created": "Sat, 27 Dec 2008 15:50:18 GMT", "version": "v20" }, { "created": "Thu, 8 Sep 2005 15:47:40 GMT", "version": "v3" }, { "created": "Thu, 22 Sep 2005 15:02:24 GMT", "version": "v4" }, { "created": "Thu, 9 Feb 2006 22:22:50 GMT", "version": "v5" }, { "created": "Thu, 16 Mar 2006 15:05:21 GMT", "version": "v6" }, { "created": "Thu, 30 Mar 2006 16:02:04 GMT", "version": "v7" }, { "created": "Fri, 5 May 2006 15:39:38 GMT", "version": "v8" }, { "created": "Fri, 14 Jul 2006 15:52:17 GMT", "version": "v9" } ]
2008-12-27
[ [ "Mitra", "Arindam", "" ] ]
Honesty has never been scientifically proven to be the best policy in any case. It is pointed out that only an honest person can prevent a dishonest partner from biasing the outcome of quantum coin tossing.
2305.13841
Juan Montes
Juan Montes Maestre, Yinwei Du, Ronan Hinchet, Stelian Coros, Bernhard Thomaszewski
Differentiable Stripe Patterns for Inverse Design of Structured Surfaces
14 pages
null
10.1145/3592114
null
cs.GR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Stripe patterns are ubiquitous in nature and everyday life. While the synthesis of these patterns has been thoroughly studied in the literature, their potential to control the mechanics of structured materials remains largely unexplored. In this work, we introduce Differentiable Stripe Patterns -- a computational approach for automated design of physical surfaces structured with stripe-shaped bi-material distributions. Our method builds on the work by Knoppel and colleagues for generating globally-continuous and equally-spaced stripe patterns. To unlock the full potential of this design space, we propose a gradient-based optimization tool to automatically compute stripe patterns that best approximate macromechanical performance goals. Specifically, we propose a computational model that combines solid shell finite elements with XFEM for accurate and fully-differentiable modeling of elastic bi-material surfaces. To resolve non-uniqueness problems in the original method, we furthermore propose a robust formulation that yields unique and differentiable stripe patterns. %Finally, we introduce design space regularizers to avoid numerical singularities and improve stripe neatness We combine these components with equilibrium state derivatives into an end-to-end differentiable pipeline that enables inverse design of mechanical stripe patterns. We demonstrate our method on a diverse set of examples that illustrate the potential of stripe patterns as a design space for structured materials. Our simulation results are experimentally validated on physical prototypes.
[ { "created": "Tue, 23 May 2023 09:05:36 GMT", "version": "v1" } ]
2023-05-24
[ [ "Maestre", "Juan Montes", "" ], [ "Du", "Yinwei", "" ], [ "Hinchet", "Ronan", "" ], [ "Coros", "Stelian", "" ], [ "Thomaszewski", "Bernhard", "" ] ]
Stripe patterns are ubiquitous in nature and everyday life. While the synthesis of these patterns has been thoroughly studied in the literature, their potential to control the mechanics of structured materials remains largely unexplored. In this work, we introduce Differentiable Stripe Patterns -- a computational approach for automated design of physical surfaces structured with stripe-shaped bi-material distributions. Our method builds on the work by Knoppel and colleagues for generating globally-continuous and equally-spaced stripe patterns. To unlock the full potential of this design space, we propose a gradient-based optimization tool to automatically compute stripe patterns that best approximate macromechanical performance goals. Specifically, we propose a computational model that combines solid shell finite elements with XFEM for accurate and fully-differentiable modeling of elastic bi-material surfaces. To resolve non-uniqueness problems in the original method, we furthermore propose a robust formulation that yields unique and differentiable stripe patterns. %Finally, we introduce design space regularizers to avoid numerical singularities and improve stripe neatness We combine these components with equilibrium state derivatives into an end-to-end differentiable pipeline that enables inverse design of mechanical stripe patterns. We demonstrate our method on a diverse set of examples that illustrate the potential of stripe patterns as a design space for structured materials. Our simulation results are experimentally validated on physical prototypes.
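As a rough illustration of the gradient-based inverse-design loop described above, the following Python sketch replaces the XFEM shell simulator with a linear surrogate; the matrix `A`, the target vector, and the learning rate are assumptions, not the authors' setup. It descends the gradient of a performance mismatch with respect to the design parameters.

```python
# Toy sketch of gradient-based inverse design: a quadratic surrogate stands in
# for the differentiable simulator so the loop stays self-contained.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 5))          # surrogate "simulator": response = A @ theta
target = np.array([1.0, -0.5, 0.25])     # macromechanical performance goal

theta = np.zeros(5)                      # design parameters (e.g. stripe field coefficients)
lr = 0.05
for step in range(500):
    residual = A @ theta - target
    grad = A.T @ residual                # d/dtheta of 0.5 * ||A theta - target||^2
    theta -= lr * grad

print("final loss:", 0.5 * np.sum((A @ theta - target) ** 2))
```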
2211.09155
Lele Fu
Zhaoliang Chen, Lele Fu, Jie Yao, Wenzhong Guo, Claudia Plant, Shiping Wang
Learnable Graph Convolutional Network and Feature Fusion for Multi-view Learning
null
null
null
null
cs.CV cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In practical applications, multi-view data that depict objects from assorted perspectives can improve the accuracy of learning algorithms. However, given multi-view data, there is limited work on simultaneously learning discriminative node relationships and graph information via graph convolutional networks, which have drawn considerable research attention in recent years. Most existing methods only consider the weighted sum of adjacency matrices, and a joint neural network for both feature and graph fusion remains under-explored. To cope with these issues, this paper proposes a joint deep learning framework called Learnable Graph Convolutional Network and Feature Fusion (LGCN-FF), consisting of two stages: a feature fusion network and a learnable graph convolutional network. The former aims to learn an underlying feature representation from heterogeneous views, while the latter explores a more discriminative graph fusion via learnable weights and a parametric activation function dubbed the Differentiable Shrinkage Activation (DSA) function. The proposed LGCN-FF is validated to be superior to various state-of-the-art methods in multi-view semi-supervised classification.
[ { "created": "Wed, 16 Nov 2022 19:07:12 GMT", "version": "v1" } ]
2022-11-18
[ [ "Chen", "Zhaoliang", "" ], [ "Fu", "Lele", "" ], [ "Yao", "Jie", "" ], [ "Guo", "Wenzhong", "" ], [ "Plant", "Claudia", "" ], [ "Wang", "Shiping", "" ] ]
In practical applications, multi-view data that depict objects from assorted perspectives can improve the accuracy of learning algorithms. However, given multi-view data, there is limited work on simultaneously learning discriminative node relationships and graph information via graph convolutional networks, which have drawn considerable research attention in recent years. Most existing methods only consider the weighted sum of adjacency matrices, and a joint neural network for both feature and graph fusion remains under-explored. To cope with these issues, this paper proposes a joint deep learning framework called Learnable Graph Convolutional Network and Feature Fusion (LGCN-FF), consisting of two stages: a feature fusion network and a learnable graph convolutional network. The former aims to learn an underlying feature representation from heterogeneous views, while the latter explores a more discriminative graph fusion via learnable weights and a parametric activation function dubbed the Differentiable Shrinkage Activation (DSA) function. The proposed LGCN-FF is validated to be superior to various state-of-the-art methods in multi-view semi-supervised classification.
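The learnable graph fusion idea can be sketched as follows. This is an assumed minimal form, not the authors' implementation: per-view adjacency matrices are combined with softmax-normalized learnable weights before a single GCN-style propagation, and the DSA activation is replaced by plain ReLU for brevity.

```python
# Minimal sketch of fusing multiple view adjacency matrices with learnable
# weights before a GCN-style propagation step.
import torch
import torch.nn as nn

class LearnableGraphFusion(nn.Module):
    def __init__(self, num_views: int, in_dim: int, out_dim: int):
        super().__init__()
        self.view_logits = nn.Parameter(torch.zeros(num_views))  # learnable fusion weights
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, adjs: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
        # adjs: (num_views, n, n) normalized adjacencies; x: (n, in_dim)
        w = torch.softmax(self.view_logits, dim=0)   # weights sum to 1
        fused = torch.einsum("v,vij->ij", w, adjs)   # weighted sum of graphs
        return torch.relu(fused @ self.lin(x))       # one propagation step

adjs = torch.rand(3, 10, 10)
x = torch.rand(10, 16)
model = LearnableGraphFusion(num_views=3, in_dim=16, out_dim=8)
print(model(adjs, x).shape)   # torch.Size([10, 8])
```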
2208.01462
Hao Sun
Pu Ren, Chengping Rao, Yang Liu, Zihan Ma, Qi Wang, Jian-Xun Wang, Hao Sun
Physics-informed Deep Super-resolution for Spatiotemporal Data
null
null
null
null
cs.LG physics.comp-ph physics.data-an
http://creativecommons.org/licenses/by/4.0/
High-fidelity simulation of complex physical systems is exorbitantly expensive and inaccessible across spatiotemporal scales. Recently, there has been increasing interest in leveraging deep learning to augment scientific data based on coarse-grained simulations, which are computationally cheap and retain satisfactory solution accuracy. However, most existing work focuses on data-driven approaches that rely on rich training datasets and lack sufficient physical constraints. To this end, we propose a novel and efficient spatiotemporal super-resolution framework via physics-informed learning, inspired by the independence between temporal and spatial derivatives in partial differential equations (PDEs). The general principle is to leverage temporal interpolation for flow estimation, and then introduce convolutional-recurrent neural networks to learn temporal refinement. Furthermore, we employ stacked residual blocks with wide activation and sub-pixel layers with pixel shuffle for spatial reconstruction, where feature extraction is conducted in a low-resolution latent space. Moreover, we consider hard imposition of boundary conditions in the network to improve reconstruction accuracy. Results demonstrate the superior effectiveness and efficiency of the proposed method compared with baseline algorithms through extensive numerical experiments.
[ { "created": "Tue, 2 Aug 2022 13:57:35 GMT", "version": "v1" } ]
2022-08-03
[ [ "Ren", "Pu", "" ], [ "Rao", "Chengping", "" ], [ "Liu", "Yang", "" ], [ "Ma", "Zihan", "" ], [ "Wang", "Qi", "" ], [ "Wang", "Jian-Xun", "" ], [ "Sun", "Hao", "" ] ]
High-fidelity simulation of complex physical systems is exorbitantly expensive and inaccessible across spatiotemporal scales. Recently, there has been increasing interest in leveraging deep learning to augment scientific data based on coarse-grained simulations, which are computationally cheap and retain satisfactory solution accuracy. However, most existing work focuses on data-driven approaches that rely on rich training datasets and lack sufficient physical constraints. To this end, we propose a novel and efficient spatiotemporal super-resolution framework via physics-informed learning, inspired by the independence between temporal and spatial derivatives in partial differential equations (PDEs). The general principle is to leverage temporal interpolation for flow estimation, and then introduce convolutional-recurrent neural networks to learn temporal refinement. Furthermore, we employ stacked residual blocks with wide activation and sub-pixel layers with pixel shuffle for spatial reconstruction, where feature extraction is conducted in a low-resolution latent space. Moreover, we consider hard imposition of boundary conditions in the network to improve reconstruction accuracy. Results demonstrate the superior effectiveness and efficiency of the proposed method compared with baseline algorithms through extensive numerical experiments.
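The sub-pixel spatial reconstruction step mentioned above is commonly realized with a convolution followed by PixelShuffle. The sketch below shows only that rearrangement, not the paper's full network; channel counts and the scale factor are assumptions.

```python
# Sketch of sub-pixel upsampling: convolve in low resolution, then rearrange
# channels into space with PixelShuffle.
import torch
import torch.nn as nn

scale = 4
up = nn.Sequential(
    nn.Conv2d(32, 32 * scale**2, kernel_size=3, padding=1),  # widen channels
    nn.PixelShuffle(scale),                                  # (C*r^2, H, W) -> (C, rH, rW)
)
lowres = torch.rand(1, 32, 16, 16)   # features in low-resolution latent space
print(up(lowres).shape)              # torch.Size([1, 32, 64, 64])
```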
2405.16791
Ming-Min Zhao
Mingxin Chen, Ming-Min Zhao, An Liu, Min Li and Qingjiang Shi
Joint Node Selection and Resource Allocation Optimization for Cooperative Sensing with a Shared Wireless Backhaul
13 pages, 10 figures
null
null
null
cs.IT eess.SP math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we consider a cooperative sensing framework in the context of future multi-functional networks with both communication and sensing ability, where one base station (BS) serves as a sensing transmitter and several nearby BSs serve as sensing receivers. Each receiver receives the sensing signal reflected by the target and communicates with the fusion center (FC) through a wireless multiple access channel (MAC) for cooperative target localization. To improve the localization performance, we present a hybrid information-signal domain cooperative sensing (HISDCS) design, where each sensing receiver transmits both the estimated time delay/effective reflecting coefficient and the received sensing signal sampled around the estimated time delay to the FC. Then, we propose to minimize the number of channel uses by utilizing an efficient Karhunen-Lo\'eve transformation (KLT) encoding scheme for signal quantization and proper node selection, under the Cram\'er-Rao lower bound (CRLB) constraint and the capacity limits of the MAC. A novel matrix-inequality constrained successive convex approximation (MCSCA) algorithm is proposed to optimize the wireless backhaul resource allocation, together with a greedy strategy for node selection. Despite the high non-convexity of the considered problem, we prove that the proposed MCSCA algorithm converges to the set of Karush-Kuhn-Tucker (KKT) solutions of a relaxed problem obtained by relaxing the discrete variables. In addition, a low-complexity quantization-bit reallocation algorithm is designed that does not perform explicit node selection yet is able to harvest most of the performance gain brought by HISDCS. Finally, numerical simulations show that the proposed HISDCS design significantly outperforms the baseline schemes.
[ { "created": "Mon, 27 May 2024 03:24:53 GMT", "version": "v1" } ]
2024-05-28
[ [ "Chen", "Mingxin", "" ], [ "Zhao", "Ming-Min", "" ], [ "Liu", "An", "" ], [ "Li", "Min", "" ], [ "Shi", "Qingjiang", "" ] ]
In this paper, we consider a cooperative sensing framework in the context of future multi-functional networks with both communication and sensing ability, where one base station (BS) serves as a sensing transmitter and several nearby BSs serve as sensing receivers. Each receiver receives the sensing signal reflected by the target and communicates with the fusion center (FC) through a wireless multiple access channel (MAC) for cooperative target localization. To improve the localization performance, we present a hybrid information-signal domain cooperative sensing (HISDCS) design, where each sensing receiver transmits both the estimated time delay/effective reflecting coefficient and the received sensing signal sampled around the estimated time delay to the FC. Then, we propose to minimize the number of channel uses by utilizing an efficient Karhunen-Lo\'eve transformation (KLT) encoding scheme for signal quantization and proper node selection, under the Cram\'er-Rao lower bound (CRLB) constraint and the capacity limits of the MAC. A novel matrix-inequality constrained successive convex approximation (MCSCA) algorithm is proposed to optimize the wireless backhaul resource allocation, together with a greedy strategy for node selection. Despite the high non-convexity of the considered problem, we prove that the proposed MCSCA algorithm converges to the set of Karush-Kuhn-Tucker (KKT) solutions of a relaxed problem obtained by relaxing the discrete variables. In addition, a low-complexity quantization-bit reallocation algorithm is designed that does not perform explicit node selection yet is able to harvest most of the performance gain brought by HISDCS. Finally, numerical simulations show that the proposed HISDCS design significantly outperforms the baseline schemes.
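For intuition on the KLT encoding step, here is a hedged numpy sketch: samples are projected onto the leading eigenvectors of their sample covariance, and only the strongest coefficients are kept. The paper's bit allocation and CRLB constraints are omitted, and all dimensions are illustrative.

```python
# Sketch of Karhunen-Loeve transform (KLT) encoding: keep the coefficients
# along the top eigenvectors of the sample covariance.
import numpy as np

rng = np.random.default_rng(1)
samples = rng.standard_normal((1000, 16)) @ rng.standard_normal((16, 16))
samples = samples - samples.mean(axis=0)        # center before projecting

cov = np.cov(samples, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)          # ascending eigenvalues
order = np.argsort(eigvals)[::-1]
basis = eigvecs[:, order[:4]]                   # keep 4 strongest components

coeffs = samples @ basis                        # encode: 16 dims -> 4 coefficients
reconstruction = coeffs @ basis.T               # decode
print("mean squared reconstruction error:",
      np.mean((samples - reconstruction) ** 2))
```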
1710.03129
Michael Vierhauser
Giuliano Antoniol and Jane Cleland-Huang and Jane Huffman Hayes and Michael Vierhauser
Grand Challenges of Traceability: The Next Ten Years
null
null
null
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In 2007, the software and systems traceability community met at the first Natural Bridge symposium on the Grand Challenges of Traceability to establish and address research goals for achieving effective, trustworthy, and ubiquitous traceability. Ten years later, in 2017, the community came together to evaluate a decade of progress towards achieving these goals. These proceedings document some of that progress. They include a series of short position papers, representing current work in the community organized across four process axes of traceability practice. The sessions covered topics from Trace Strategizing, Trace Link Creation and Evolution, Trace Link Usage, real-world applications of Traceability, and Traceability Datasets and benchmarks. Two breakout groups focused on the importance of creating and sharing traceability datasets within the research community, and discussed challenges related to the adoption of tracing techniques in industrial practice. Members of the research community are engaged in many active, ongoing, and impactful research projects. Our hope is that ten years from now we will be able to look back at a productive decade of research and claim that we have achieved the overarching Grand Challenge of Traceability, which seeks for traceability to be always present, built into the engineering process, and for it to have "effectively disappeared without a trace". We hope that others will see the potential that traceability has for empowering software and systems engineers to develop higher-quality products at increasing levels of complexity and scale, and that they will join the active community of Software and Systems traceability researchers as we move forward into the next decade of research.
[ { "created": "Mon, 9 Oct 2017 14:54:56 GMT", "version": "v1" } ]
2017-10-10
[ [ "Antoniol", "Giuliano", "" ], [ "Cleland-Huang", "Jane", "" ], [ "Hayes", "Jane Huffman", "" ], [ "Vierhauser", "Michael", "" ] ]
In 2007, the software and systems traceability community met at the first Natural Bridge symposium on the Grand Challenges of Traceability to establish and address research goals for achieving effective, trustworthy, and ubiquitous traceability. Ten years later, in 2017, the community came together to evaluate a decade of progress towards achieving these goals. These proceedings document some of that progress. They include a series of short position papers, representing current work in the community organized across four process axes of traceability practice. The sessions covered topics from Trace Strategizing, Trace Link Creation and Evolution, Trace Link Usage, real-world applications of Traceability, and Traceability Datasets and benchmarks. Two breakout groups focused on the importance of creating and sharing traceability datasets within the research community, and discussed challenges related to the adoption of tracing techniques in industrial practice. Members of the research community are engaged in many active, ongoing, and impactful research projects. Our hope is that ten years from now we will be able to look back at a productive decade of research and claim that we have achieved the overarching Grand Challenge of Traceability, which seeks for traceability to be always present, built into the engineering process, and for it to have "effectively disappeared without a trace". We hope that others will see the potential that traceability has for empowering software and systems engineers to develop higher-quality products at increasing levels of complexity and scale, and that they will join the active community of Software and Systems traceability researchers as we move forward into the next decade of research.
2004.08776
Gerui Wang
Gerui Wang, Shuo Wang, Vivek Bagaria, David Tse, and Pramod Viswanath
Prism Removes Consensus Bottleneck for Smart Contracts
null
null
null
null
cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The performance of existing permissionless smart contract platforms such as Ethereum is limited by the consensus layer. Prism is a new proof-of-work consensus protocol that provably achieves throughput and latency up to physical limits while retaining the strong guarantees of the longest chain protocol. This paper reports experimental results from implementations of two smart contract virtual machines, EVM and MoveVM, on top of Prism and demonstrates that the consensus bottleneck has been removed. Code can be found at https://github.com/wgr523/prism-smart-contracts.
[ { "created": "Sun, 19 Apr 2020 06:13:34 GMT", "version": "v1" }, { "created": "Sat, 13 Jun 2020 03:50:52 GMT", "version": "v2" } ]
2020-06-16
[ [ "Wang", "Gerui", "" ], [ "Wang", "Shuo", "" ], [ "Bagaria", "Vivek", "" ], [ "Tse", "David", "" ], [ "Viswanath", "Pramod", "" ] ]
The performance of existing permissionless smart contract platforms such as Ethereum is limited by the consensus layer. Prism is a new proof-of-work consensus protocol that provably achieves throughput and latency up to physical limits while retaining the strong guarantees of the longest chain protocol. This paper reports experimental results from implementations of two smart contract virtual machines, EVM and MoveVM, on top of Prism and demonstrates that the consensus bottleneck has been removed. Code can be found at https://github.com/wgr523/prism-smart-contracts.
2008.05640
Liang Pang
Changying Hao, Liang Pang, Yanyan Lan, Fei Sun, Jiafeng Guo, Xueqi Cheng
Ranking Enhanced Dialogue Generation
Accepted at CIKM 2020
null
10.1145/3340531.3411918
null
cs.CL cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
How to effectively utilize the dialogue history is a crucial problem in multi-turn dialogue generation. Previous works usually employ various neural network architectures (e.g., recurrent neural networks, attention mechanisms, and hierarchical structures) to model the history. However, a recent empirical study by Sankar et al. has shown that these architectures lack the ability to understand and model the dynamics of the dialogue history. For example, the widely used architectures are insensitive to perturbations of the dialogue history, such as word shuffling, utterance dropping, and utterance reordering. To tackle this problem, we propose a Ranking Enhanced Dialogue generation framework in this paper. In addition to the traditional representation encoder and response generation modules, a ranking module is introduced to model the ranking relation between the former utterance and consecutive utterances. Specifically, the former utterance and consecutive utterances are treated as a query and corresponding documents, and both local and global ranking losses are designed in the learning process. In this way, the dynamics in the dialogue history can be explicitly captured. To evaluate our proposed models, we conduct extensive experiments on three public datasets, i.e., bAbI, PersonaChat, and JDC. Experimental results show that our models produce better responses in terms of both quantitative measures and human judgments, as compared with the state-of-the-art dialogue generation models. Furthermore, we give a detailed experimental analysis to show where and how the improvements come from.
[ { "created": "Thu, 13 Aug 2020 01:49:56 GMT", "version": "v1" } ]
2020-08-14
[ [ "Hao", "Changying", "" ], [ "Pang", "Liang", "" ], [ "Lan", "Yanyan", "" ], [ "Sun", "Fei", "" ], [ "Guo", "Jiafeng", "" ], [ "Cheng", "Xueqi", "" ] ]
How to effectively utilize the dialogue history is a crucial problem in multi-turn dialogue generation. Previous works usually employ various neural network architectures (e.g., recurrent neural networks, attention mechanisms, and hierarchical structures) to model the history. However, a recent empirical study by Sankar et al. has shown that these architectures lack the ability to understand and model the dynamics of the dialogue history. For example, the widely used architectures are insensitive to perturbations of the dialogue history, such as word shuffling, utterance dropping, and utterance reordering. To tackle this problem, we propose a Ranking Enhanced Dialogue generation framework in this paper. In addition to the traditional representation encoder and response generation modules, a ranking module is introduced to model the ranking relation between the former utterance and consecutive utterances. Specifically, the former utterance and consecutive utterances are treated as a query and corresponding documents, and both local and global ranking losses are designed in the learning process. In this way, the dynamics in the dialogue history can be explicitly captured. To evaluate our proposed models, we conduct extensive experiments on three public datasets, i.e., bAbI, PersonaChat, and JDC. Experimental results show that our models produce better responses in terms of both quantitative measures and human judgments, as compared with the state-of-the-art dialogue generation models. Furthermore, we give a detailed experimental analysis to show where and how the improvements come from.
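A generic margin-based ranking loss of the kind such a ranking module could use is sketched below; the exact local and global losses in the paper may differ, and the scores here are placeholders.

```python
# Illustrative margin ranking loss over (query, utterance) score pairs.
import torch
import torch.nn.functional as F

def ranking_loss(score_pos: torch.Tensor, score_neg: torch.Tensor,
                 margin: float = 0.2) -> torch.Tensor:
    # push the matching (query, utterance) score above the mismatched one
    return F.relu(margin - score_pos + score_neg).mean()

pos = torch.tensor([0.9, 0.7])   # scores for true next utterances
neg = torch.tensor([0.4, 0.8])   # scores for shuffled/mismatched utterances
print(ranking_loss(pos, neg))    # tensor(0.1500)
```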
2203.12384
Julian Renner
Hannes Bartz, Lukas Holzbaur, Hedongliang Liu, Sven Puchinger, Julian Renner, Antonia Wachter-Zeh
Rank-Metric Codes and Their Applications
null
null
null
null
cs.IT cs.CR math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The rank metric measures the distance between two matrices by the rank of their difference. Codes designed for the rank metric have attracted considerable attention in recent years, reinforced by network coding and further motivated by a variety of applications. In code-based cryptography, the hardness of the corresponding generic decoding problem can lead to systems with reduced public-key size. In distributed data storage, codes in the rank metric have been used repeatedly to construct codes with locality, and in coded caching, they have been employed for the placement of coded symbols. This survey gives a general introduction to rank-metric codes, explains their most important applications, and highlights their relevance to these areas of research.
[ { "created": "Wed, 23 Mar 2022 13:01:23 GMT", "version": "v1" } ]
2022-03-24
[ [ "Bartz", "Hannes", "" ], [ "Holzbaur", "Lukas", "" ], [ "Liu", "Hedongliang", "" ], [ "Puchinger", "Sven", "" ], [ "Renner", "Julian", "" ], [ "Wachter-Zeh", "Antonia", "" ] ]
The rank metric measures the distance between two matrices by the rank of their difference. Codes designed for the rank metric have attracted considerable attention in recent years, reinforced by network coding and further motivated by a variety of applications. In code-based cryptography, the hardness of the corresponding generic decoding problem can lead to systems with reduced public-key size. In distributed data storage, codes in the rank metric have been used repeatedly to construct codes with locality, and in coded caching, they have been employed for the placement of coded symbols. This survey gives a general introduction to rank-metric codes, explains their most important applications, and highlights their relevance to these areas of research.
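The rank distance itself is a one-liner; the sketch below computes it over the reals with numpy for illustration, whereas rank-metric codes in the literature are defined over finite fields.

```python
# Minimal sketch of the rank metric: distance between two matrices is the
# rank of their difference.
import numpy as np

def rank_distance(X: np.ndarray, Y: np.ndarray) -> int:
    return int(np.linalg.matrix_rank(X - Y))

X = np.array([[1, 0], [0, 1]])
Y = np.array([[1, 0], [0, 0]])
print(rank_distance(X, Y))   # 1
```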
2108.09476
Tianyu Wu
Tianyu Wu, Konrad Schindler and Cenek Albl
3D Reconstruction from public webcams
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We investigate the possibility of 3D scene reconstruction from two or more overlapping webcam streams. A large, and growing, number of webcams observe places of interest and are publicly accessible. The question naturally arises: can we make use of this free data source for 3D computer vision? It turns out that the task of reconstructing scene structure from webcam streams is very different from standard structure-from-motion (SfM), and conventional SfM pipelines fail. In the webcam setting there are very few views of the same scene, in most cases only the minimum of two. These viewpoints often have large baselines and/or scale differences, their overlap is rather limited, and, in addition to unknown internal and external calibration, their temporal synchronisation is also unknown. On the other hand, they record rather large fields of view continuously over long time spans, so that they regularly observe dynamic objects moving through the scene. We show how to leverage recent advances in several areas of computer vision to adapt SfM reconstruction to this particular scenario and reconstruct the unknown camera poses, the 3D scene structure, and the 3D trajectories of dynamic objects.
[ { "created": "Sat, 21 Aug 2021 09:31:13 GMT", "version": "v1" }, { "created": "Sat, 11 Dec 2021 10:58:20 GMT", "version": "v2" } ]
2021-12-14
[ [ "Wu", "Tianyu", "" ], [ "Schindler", "Konrad", "" ], [ "Albl", "Cenek", "" ] ]
We investigate the possibility of 3D scene reconstruction from two or more overlapping webcam streams. A large, and growing, number of webcams observe places of interest and are publicly accessible. The question naturally arises: can we make use of this free data source for 3D computer vision? It turns out that the task of reconstructing scene structure from webcam streams is very different from standard structure-from-motion (SfM), and conventional SfM pipelines fail. In the webcam setting there are very few views of the same scene, in most cases only the minimum of two. These viewpoints often have large baselines and/or scale differences, their overlap is rather limited, and, in addition to unknown internal and external calibration, their temporal synchronisation is also unknown. On the other hand, they record rather large fields of view continuously over long time spans, so that they regularly observe dynamic objects moving through the scene. We show how to leverage recent advances in several areas of computer vision to adapt SfM reconstruction to this particular scenario and reconstruct the unknown camera poses, the 3D scene structure, and the 3D trajectories of dynamic objects.
2202.09318
Vishvak Murahari
Vishvak Murahari, Carlos E. Jimenez, Runzhe Yang, Karthik Narasimhan
DataMUX: Data Multiplexing for Neural Networks
NeurIPS 2022
null
null
null
cs.LG cs.AI
http://creativecommons.org/licenses/by-nc-sa/4.0/
In this paper, we introduce data multiplexing (DataMUX), a technique that enables deep neural networks to process multiple inputs simultaneously using a single compact representation. DataMUX demonstrates that neural networks are capable of generating accurate predictions over mixtures of inputs, resulting in increased throughput with minimal extra memory requirements. Our approach uses two key components -- 1) a multiplexing layer that performs a fixed linear transformation to each input before combining them to create a mixed representation of the same size as a single input, which is then processed by the base network, and 2) a demultiplexing layer that converts the base network's output back into independent representations before producing predictions for each input. We show the viability of DataMUX for different architectures (Transformers, and to a lesser extent MLPs and CNNs) across six different tasks spanning sentence classification, named entity recognition and image classification. For instance, DataMUX for Transformers can multiplex up to $20$x/$40$x inputs, achieving $11$x/$18$x increase in throughput with minimal absolute performance drops of $<2\%$ and $<4\%$ respectively on MNLI, a natural language inference task. We also provide a theoretical construction for multiplexing in self-attention networks and analyze the effect of various design elements in DataMUX.
[ { "created": "Fri, 18 Feb 2022 17:35:33 GMT", "version": "v1" }, { "created": "Mon, 14 Nov 2022 15:15:50 GMT", "version": "v2" } ]
2022-11-15
[ [ "Murahari", "Vishvak", "" ], [ "Jimenez", "Carlos E.", "" ], [ "Yang", "Runzhe", "" ], [ "Narasimhan", "Karthik", "" ] ]
In this paper, we introduce data multiplexing (DataMUX), a technique that enables deep neural networks to process multiple inputs simultaneously using a single compact representation. DataMUX demonstrates that neural networks are capable of generating accurate predictions over mixtures of inputs, resulting in increased throughput with minimal extra memory requirements. Our approach uses two key components -- 1) a multiplexing layer that performs a fixed linear transformation to each input before combining them to create a mixed representation of the same size as a single input, which is then processed by the base network, and 2) a demultiplexing layer that converts the base network's output back into independent representations before producing predictions for each input. We show the viability of DataMUX for different architectures (Transformers, and to a lesser extent MLPs and CNNs) across six different tasks spanning sentence classification, named entity recognition and image classification. For instance, DataMUX for Transformers can multiplex up to $20$x/$40$x inputs, achieving $11$x/$18$x increase in throughput with minimal absolute performance drops of $<2\%$ and $<4\%$ respectively on MNLI, a natural language inference task. We also provide a theoretical construction for multiplexing in self-attention networks and analyze the effect of various design elements in DataMUX.
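The two components can be mimicked in a few lines. The following toy sketch multiplexes N inputs through fixed linear maps, averages them, runs a shared backbone once, and demultiplexes with per-slot heads; dimensions, module choices, and the untrained transforms are all assumptions, not the authors' code.

```python
# Toy sketch of data multiplexing: mix N inputs into one representation,
# run the base network once, then demultiplex into per-input predictions.
import torch
import torch.nn as nn

N, d = 4, 32                                        # multiplex 4 inputs of width d
mux_maps = [torch.randn(d, d) for _ in range(N)]    # fixed (untrained) transforms
base = nn.Sequential(nn.Linear(d, d), nn.ReLU())    # stands in for the shared backbone
demux = nn.ModuleList([nn.Linear(d, 10) for _ in range(N)])  # per-slot heads

inputs = torch.randn(N, d)
mixed = torch.stack([inputs[i] @ mux_maps[i] for i in range(N)]).mean(0)  # one vector
hidden = base(mixed)
preds = [head(hidden) for head in demux]            # N independent 10-way predictions
print(len(preds), preds[0].shape)                   # 4 torch.Size([10])
```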
2403.12071
Kostas Karpouzis
Kostas Karpouzis, Dimitris Pantazatos, Joanna Taouki, Kalliopi Meli
Tailoring Education with GenAI: A New Horizon in Lesson Planning
Abstract accepted for EDUCON 2024 (IEEE Global Engineering Education Conference 2024)
null
null
null
cs.CY cs.AI
http://creativecommons.org/licenses/by-nc-nd/4.0/
The advent of Generative AI (GenAI) in education presents a transformative approach to traditional teaching methodologies, which often overlook the diverse needs of individual students. This study introduces a GenAI tool, based on advanced natural language processing, designed as a digital assistant for educators, enabling the creation of customized lesson plans. The tool utilizes an innovative feature termed 'interactive mega-prompt,' a comprehensive query system that allows educators to input detailed classroom specifics such as student demographics, learning objectives, and preferred teaching styles. This input is then processed by the GenAI to generate tailored lesson plans. To evaluate the tool's effectiveness, a comprehensive methodology incorporating both quantitative (i.e., % of time savings) and qualitative (i.e., user satisfaction) criteria was implemented, spanning various subjects and educational levels, with continuous feedback collected from educators through a structured evaluation form. Preliminary results show that educators find the GenAI-generated lesson plans effective, significantly reducing lesson planning time and enhancing the learning experience by accommodating diverse student needs. This AI-driven approach signifies a paradigm shift in education, suggesting its potential applicability in broader educational contexts, including special education needs (SEN), where individualized attention and specific learning aids are paramount.
[ { "created": "Mon, 12 Feb 2024 17:30:05 GMT", "version": "v1" } ]
2024-03-20
[ [ "Karpouzis", "Kostas", "" ], [ "Pantazatos", "Dimitris", "" ], [ "Taouki", "Joanna", "" ], [ "Meli", "Kalliopi", "" ] ]
The advent of Generative AI (GenAI) in education presents a transformative approach to traditional teaching methodologies, which often overlook the diverse needs of individual students. This study introduces a GenAI tool, based on advanced natural language processing, designed as a digital assistant for educators, enabling the creation of customized lesson plans. The tool utilizes an innovative feature termed 'interactive mega-prompt,' a comprehensive query system that allows educators to input detailed classroom specifics such as student demographics, learning objectives, and preferred teaching styles. This input is then processed by the GenAI to generate tailored lesson plans. To evaluate the tool's effectiveness, a comprehensive methodology incorporating both quantitative (i.e., % of time savings) and qualitative (i.e., user satisfaction) criteria was implemented, spanning various subjects and educational levels, with continuous feedback collected from educators through a structured evaluation form. Preliminary results show that educators find the GenAI-generated lesson plans effective, significantly reducing lesson planning time and enhancing the learning experience by accommodating diverse student needs. This AI-driven approach signifies a paradigm shift in education, suggesting its potential applicability in broader educational contexts, including special education needs (SEN), where individualized attention and specific learning aids are paramount.
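A hypothetical sketch of assembling such a 'mega-prompt' from classroom specifics is given below; the field names and wording are illustrative and not the authors' actual template.

```python
# Hypothetical assembly of a lesson-planning prompt from classroom specifics.
def build_lesson_prompt(subject: str, grade: str, objectives: list,
                        demographics: str, teaching_style: str) -> str:
    goals = "\n".join(f"- {g}" for g in objectives)
    return (
        "You are a lesson-planning assistant.\n"
        f"Subject: {subject}\nGrade level: {grade}\n"
        f"Student demographics: {demographics}\n"
        f"Preferred teaching style: {teaching_style}\n"
        f"Learning objectives:\n{goals}\n"
        "Produce a structured lesson plan tailored to this classroom."
    )

print(build_lesson_prompt("Physics", "9th grade",
                          ["Explain Newton's second law"],
                          "mixed-ability class of 25", "inquiry-based"))
```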
1709.05652
Nidhish Raj Mr.
Nidhish Raj, Ravi N Banavar, Abhishek, Mangal Kothari
Robust Attitude Tracking for Aerobatic Helicopters: A Geometric Approach
null
null
null
null
cs.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper highlights the significance of rotor dynamics in control design for small-scale aerobatic helicopters, and proposes two singularity-free robust attitude tracking controllers based on the available states for feedback. 1. The first employs the angular velocity and the flap angle states (a variable that is not easy to measure) and uses a backstepping technique to design a robust compensator (BRC) to \textbf{\textit{actively}} suppress the disturbance-induced tracking error. 2. The second exploits the inherent damping present in the helicopter dynamics, leading to a structure-preserving, \textbf{\textit{passively}} robust controller (SPR), which is free of angular velocity and flap angle feedback. The BRC controller is designed to be robust in the presence of two types of uncertainties: structured and unstructured. The structured disturbance is due to uncertainty in the rotor parameters, and the unstructured perturbation is modeled as an exogenous torque acting on the fuselage. The performance of the controller is demonstrated in the presence of both types of disturbances through numerical simulations. In contrast, the SPR tracking controller is derived such that the tracking error dynamics inherits the natural damping characteristic of the helicopter. The SPR controller is shown to be almost globally asymptotically stable and its performance is evaluated experimentally by performing aggressive flip maneuvers. Throughout the study, a nonlinear coupled rotor-fuselage helicopter model with first-order flap dynamics is used.
[ { "created": "Sun, 17 Sep 2017 12:33:56 GMT", "version": "v1" }, { "created": "Wed, 9 Jan 2019 06:19:42 GMT", "version": "v2" } ]
2019-01-10
[ [ "Raj", "Nidhish", "" ], [ "Banavar", "Ravi N", "" ], [ "Abhishek", "", "" ], [ "Kothari", "Mangal", "" ] ]
This paper highlights the significance of rotor dynamics in control design for small-scale aerobatic helicopters, and proposes two singularity-free robust attitude tracking controllers based on the available states for feedback. 1. The first employs the angular velocity and the flap angle states (a variable that is not easy to measure) and uses a backstepping technique to design a robust compensator (BRC) to \textbf{\textit{actively}} suppress the disturbance-induced tracking error. 2. The second exploits the inherent damping present in the helicopter dynamics, leading to a structure-preserving, \textbf{\textit{passively}} robust controller (SPR), which is free of angular velocity and flap angle feedback. The BRC controller is designed to be robust in the presence of two types of uncertainties: structured and unstructured. The structured disturbance is due to uncertainty in the rotor parameters, and the unstructured perturbation is modeled as an exogenous torque acting on the fuselage. The performance of the controller is demonstrated in the presence of both types of disturbances through numerical simulations. In contrast, the SPR tracking controller is derived such that the tracking error dynamics inherits the natural damping characteristic of the helicopter. The SPR controller is shown to be almost globally asymptotically stable and its performance is evaluated experimentally by performing aggressive flip maneuvers. Throughout the study, a nonlinear coupled rotor-fuselage helicopter model with first-order flap dynamics is used.
1502.01095
Nirmala Suresh
A.P. Nirmala, Dr. R. Sridaran
A Novel architecture for improving performance under virtualized environments
null
null
null
null
cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Even though virtualization provides many advantages in cloud computing, it does not provide effective performance isolation between virtual machines. In other words, performance may be affected by interference from co-located virtual machines. Effective isolation can be achieved through proper management of resource allocation among the virtual machines running simultaneously. This paper proposes a novel architecture based on the Fast Genetic K-means++ algorithm; test results show positive performance improvements over a similar existing approach.
[ { "created": "Wed, 4 Feb 2015 05:21:30 GMT", "version": "v1" } ]
2015-02-05
[ [ "Nirmala", "A. P.", "" ], [ "Sridaran", "Dr. R.", "" ] ]
Even though virtualization provides many advantages in cloud computing, it does not provide effective performance isolation between virtual machines. In other words, performance may be affected by interference from co-located virtual machines. Effective isolation can be achieved through proper management of resource allocation among the virtual machines running simultaneously. This paper proposes a novel architecture based on the Fast Genetic K-means++ algorithm; test results show positive performance improvements over a similar existing approach.
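For reference, the k-means++ seeding step that the proposed algorithm presumably builds on looks like this in Python; the genetic component is not reproduced, and the data and k are illustrative.

```python
# Sketch of k-means++ initialization: each new center is sampled with
# probability proportional to squared distance from the nearest chosen center.
import numpy as np

def kmeanspp_init(X: np.ndarray, k: int, rng: np.random.Generator) -> np.ndarray:
    centers = [X[rng.integers(len(X))]]
    for _ in range(k - 1):
        d2 = np.min([np.sum((X - c) ** 2, axis=1) for c in centers], axis=0)
        probs = d2 / d2.sum()                   # farther points more likely
        centers.append(X[rng.choice(len(X), p=probs)])
    return np.array(centers)

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 2))
print(kmeanspp_init(X, 3, rng).shape)   # (3, 2)
```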
1812.00541
Zhiyuan Jiang
Zhiyuan Jiang, Sheng Chen, Andreas F. Molisch, Rath Vannithamby, Sheng Zhou, Zhisheng Niu
Exploiting Wireless Channel State Information Structures Beyond Linear Correlations: A Deep Learning Approach
To appear in IEEE Commun. Mag. SI on Applications of Artificial Intelligence in Wireless Communications
null
null
null
cs.IT cs.LG math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Knowledge of the propagation channel in which a wireless system operates enables better, more efficient approaches to signal transmission. Therefore, channel state information (CSI) plays a pivotal role in system performance. The importance of CSI is in fact growing in the upcoming 5G and beyond systems, e.g., for the implementation of massive multiple-input multiple-output (MIMO). However, the acquisition of timely and accurate CSI has long been considered a major issue, and becomes increasingly challenging due to the need to obtain CSI for many antenna elements in massive MIMO systems. To cope with this challenge, existing works mainly focus on exploiting linear structures of CSI, such as CSI correlations in the spatial domain, to achieve dimensionality reduction. In this article, we first systematically review the state of the art on CSI structure exploitation; we then extend the discussion to deeper structures that enable remote CSI inference, wherein a data-driven deep neural network (DNN) approach is necessary due to model inadequacy. We develop specific DNN designs suitable for CSI data. Case studies are provided to demonstrate the great potential of this direction for future performance enhancement.
[ { "created": "Mon, 3 Dec 2018 03:38:20 GMT", "version": "v1" } ]
2018-12-04
[ [ "Jiang", "Zhiyuan", "" ], [ "Chen", "Sheng", "" ], [ "Molisch", "Andreas F.", "" ], [ "Vannithamby", "Rath", "" ], [ "Zhou", "Sheng", "" ], [ "Niu", "Zhisheng", "" ] ]
Knowledge of the propagation channel in which a wireless system operates enables better, more efficient approaches to signal transmission. Therefore, channel state information (CSI) plays a pivotal role in system performance. The importance of CSI is in fact growing in the upcoming 5G and beyond systems, e.g., for the implementation of massive multiple-input multiple-output (MIMO). However, the acquisition of timely and accurate CSI has long been considered a major issue, and becomes increasingly challenging due to the need to obtain CSI for many antenna elements in massive MIMO systems. To cope with this challenge, existing works mainly focus on exploiting linear structures of CSI, such as CSI correlations in the spatial domain, to achieve dimensionality reduction. In this article, we first systematically review the state of the art on CSI structure exploitation; we then extend the discussion to deeper structures that enable remote CSI inference, wherein a data-driven deep neural network (DNN) approach is necessary due to model inadequacy. We develop specific DNN designs suitable for CSI data. Case studies are provided to demonstrate the great potential of this direction for future performance enhancement.
2311.06102
Lefteris Loukas
Lefteris Loukas, Ilias Stogiannidis, Odysseas Diamantopoulos, Prodromos Malakasiotis, Stavros Vassos
Making LLMs Worth Every Penny: Resource-Limited Text Classification in Banking
Long paper accepted to ACM ICAIF-23
null
10.1145/3604237.3626891
null
cs.CL cs.AI
http://creativecommons.org/licenses/by-nc-sa/4.0/
Standard Full-Data classifiers in NLP demand thousands of labeled examples, which is impractical in data-limited domains. Few-shot methods offer an alternative, utilizing contrastive learning techniques that can be effective with as few as 20 examples per class. Similarly, Large Language Models (LLMs) like GPT-4 can perform effectively with just 1-5 examples per class. However, the performance-cost trade-offs of these methods remain underexplored, a critical concern for budget-limited organizations. Our work addresses this gap by studying the aforementioned approaches over the Banking77 financial intent detection dataset, including the evaluation of cutting-edge LLMs by OpenAI, Cohere, and Anthropic in a comprehensive set of few-shot scenarios. We complete the picture with two additional methods: first, a cost-effective querying method for LLMs based on retrieval-augmented generation (RAG), able to reduce operational costs multiple times compared to classic few-shot approaches, and second, a data augmentation method using GPT-4, able to improve performance in data-limited scenarios. Finally, to inspire future research, we provide a human expert's curated subset of Banking77, along with extensive error analysis.
[ { "created": "Fri, 10 Nov 2023 15:10:36 GMT", "version": "v1" } ]
2023-11-13
[ [ "Loukas", "Lefteris", "" ], [ "Stogiannidis", "Ilias", "" ], [ "Diamantopoulos", "Odysseas", "" ], [ "Malakasiotis", "Prodromos", "" ], [ "Vassos", "Stavros", "" ] ]
Standard Full-Data classifiers in NLP demand thousands of labeled examples, which is impractical in data-limited domains. Few-shot methods offer an alternative, utilizing contrastive learning techniques that can be effective with as few as 20 examples per class. Similarly, Large Language Models (LLMs) like GPT-4 can perform effectively with just 1-5 examples per class. However, the performance-cost trade-offs of these methods remain underexplored, a critical concern for budget-limited organizations. Our work addresses this gap by studying the aforementioned approaches over the Banking77 financial intent detection dataset, including the evaluation of cutting-edge LLMs by OpenAI, Cohere, and Anthropic in a comprehensive set of few-shot scenarios. We complete the picture with two additional methods: first, a cost-effective querying method for LLMs based on retrieval-augmented generation (RAG), able to reduce operational costs multiple times compared to classic few-shot approaches, and second, a data augmentation method using GPT-4, able to improve performance in data-limited scenarios. Finally, to inspire future research, we provide a human expert's curated subset of Banking77, along with extensive error analysis.
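The retrieval-augmented querying idea can be sketched as follows; the embeddings, similarity choice, and prompt format are assumptions. Only the labeled examples nearest to the query are placed in the few-shot prompt, cutting token costs.

```python
# Hedged sketch of RAG-style few-shot prompting: retrieve the k nearest
# labeled examples by cosine similarity and include only those in the prompt.
import numpy as np

def rag_fewshot_prompt(query_emb, example_embs, examples, labels, k=3):
    sims = example_embs @ query_emb / (
        np.linalg.norm(example_embs, axis=1) * np.linalg.norm(query_emb))
    top = np.argsort(sims)[::-1][:k]
    shots = "\n".join(f"Text: {examples[i]}\nIntent: {labels[i]}" for i in top)
    return f"{shots}\nText: <query>\nIntent:"

rng = np.random.default_rng(0)
embs = rng.standard_normal((5, 8))      # placeholder embeddings
print(rag_fewshot_prompt(embs[0], embs,
                         [f"utterance {i}" for i in range(5)],
                         [f"intent_{i}" for i in range(5)]))
```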
2205.06177
Zeinab Zoghi
Zeinab Zoghi, Gursel Serpen
Ensemble Classifier Design Tuned to Dataset Characteristics for Network Intrusion Detection
null
null
null
null
cs.CR cs.AI cs.DB cs.LG cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Machine Learning-based supervised approaches require highly customized and fine-tuned methodologies to deliver outstanding performance. This paper presents a dataset-driven design and performance evaluation of a machine learning classifier for the network intrusion dataset UNSW-NB15. Analysis of the dataset suggests that it suffers from class representation imbalance and class overlap in the feature space. We employed ensemble methods using Balanced Bagging (BB), eXtreme Gradient Boosting (XGBoost), and Random Forest empowered by the Hellinger Distance Decision Tree (RF-HDDT). BB and XGBoost are tuned to handle the imbalanced data, and the Random Forest (RF) classifier is supplemented by the Hellinger metric to address the imbalance issue. Two new algorithms are proposed to address the class overlap issue in the dataset. These two algorithms are leveraged to improve performance on the testing dataset by modifying the final classification decision made by the three base classifiers as part of the ensemble classifier, which employs a majority vote combiner. The proposed design is evaluated for both binary and multi-category classification. Comparison of the proposed model with those reported on the same dataset in the literature demonstrates that the proposed model outperforms the others by a significant margin for both binary and multi-category classification.
[ { "created": "Sun, 8 May 2022 21:06:42 GMT", "version": "v1" } ]
2022-05-13
[ [ "Zoghi", "Zeinab", "" ], [ "Serpen", "Gursel", "" ] ]
Machine Learning-based supervised approaches require highly customized and fine-tuned methodologies to deliver outstanding performance. This paper presents a dataset-driven design and performance evaluation of a machine learning classifier for the network intrusion dataset UNSW-NB15. Analysis of the dataset suggests that it suffers from class representation imbalance and class overlap in the feature space. We employed ensemble methods using Balanced Bagging (BB), eXtreme Gradient Boosting (XGBoost), and Random Forest empowered by the Hellinger Distance Decision Tree (RF-HDDT). BB and XGBoost are tuned to handle the imbalanced data, and the Random Forest (RF) classifier is supplemented by the Hellinger metric to address the imbalance issue. Two new algorithms are proposed to address the class overlap issue in the dataset. These two algorithms are leveraged to improve performance on the testing dataset by modifying the final classification decision made by the three base classifiers as part of the ensemble classifier, which employs a majority vote combiner. The proposed design is evaluated for both binary and multi-category classification. Comparison of the proposed model with those reported on the same dataset in the literature demonstrates that the proposed model outperforms the others by a significant margin for both binary and multi-category classification.
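For context, the Hellinger distance split criterion used in HDDT-style trees takes the form below (binary split, binary classes); the formula follows the HDDT literature, not the authors' exact code.

```python
# Hellinger distance as a skew-insensitive split criterion: compares the
# fraction of each class routed to the left branch of a candidate split.
import numpy as np

def hellinger(tp: int, fn: int, fp: int, tn: int) -> float:
    tprate = tp / (tp + fn)   # positives going left
    fprate = fp / (fp + tn)   # negatives going left
    return np.sqrt((np.sqrt(tprate) - np.sqrt(fprate)) ** 2
                   + (np.sqrt(1 - tprate) - np.sqrt(1 - fprate)) ** 2)

# a split that separates classes well scores higher, regardless of imbalance
print(hellinger(tp=90, fn=10, fp=5, tn=95))   # ~0.98 (max is sqrt(2))
```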
1702.07647
Kaarthik Sundar
Kaarthik Sundar and Saravanan Venkatachalam and Satyanarayana G. Manyam
Path Planning for Multiple Heterogeneous Unmanned Vehicles with Uncertain Service Times
8 pages, 2 figures, submitted to International Conference on Unmanned Aircraft Systems (ICUAS)
null
10.1109/ICUAS.2017.7991336
null
cs.RO math.OC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This article presents a framework and develops a formulation to solve a path planning problem for multiple heterogeneous Unmanned Vehicles (UVs) with uncertain service times for each vehicle--target pair. The vehicles incur a penalty proportional to the duration of their total service time in excess of a preset constant. The vehicles differ in their motion constraints and are located at distinct depots at the start of the mission. The vehicles may also be equipped with disparate sensors. The objective is to find a tour for each vehicle that starts and ends at its respective depot such that every target is visited and serviced by some vehicle while minimizing the sum of the total travel distance and the expected penalty incurred by all the vehicles. We formulate the problem as a two-stage stochastic program with recourse, present the theoretical properties of the formulation and advantages of using such a formulation, as opposed to a deterministic expected value formulation, to solve the problem. Extensive numerical simulations also corroborate the effectiveness of the proposed approach.
[ { "created": "Fri, 24 Feb 2017 16:24:58 GMT", "version": "v1" } ]
2018-07-30
[ [ "Sundar", "Kaarthik", "" ], [ "Venkatachalam", "Saravanan", "" ], [ "Manyam", "Satyanarayana G.", "" ] ]
This article presents a framework and develops a formulation to solve a path planning problem for multiple heterogeneous Unmanned Vehicles (UVs) with uncertain service times for each vehicle--target pair. The vehicles incur a penalty proportional to the duration of their total service time in excess of a preset constant. The vehicles differ in their motion constraints and are located at distinct depots at the start of the mission. The vehicles may also be equipped with disparate sensors. The objective is to find a tour for each vehicle that starts and ends at its respective depot such that every target is visited and serviced by some vehicle while minimizing the sum of the total travel distance and the expected penalty incurred by all the vehicles. We formulate the problem as a two-stage stochastic program with recourse, present the theoretical properties of the formulation and advantages of using such a formulation, as opposed to a deterministic expected value formulation, to solve the problem. Extensive numerical simulations also corroborate the effectiveness of the proposed approach.
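A toy sample-average sketch of the recourse term follows: the expected penalty is the penalty rate times the expected excess of total service time over a preset constant. The service-time distribution and constants are assumptions for illustration.

```python
# Monte Carlo estimate of the expected penalty on excess service time.
import numpy as np

rng = np.random.default_rng(0)
n_scenarios, n_targets = 10_000, 5
service = rng.gamma(shape=2.0, scale=1.5, size=(n_scenarios, n_targets))

T_max = 18.0          # preset constant on total service time
penalty_rate = 3.0    # cost per unit of excess time

excess = np.maximum(service.sum(axis=1) - T_max, 0.0)
expected_penalty = penalty_rate * excess.mean()
print("expected recourse penalty:", round(expected_penalty, 3))
```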
2112.02612
Zih-Syuan Huang
Zih-Syuan Huang, Ching-pei Lee
Training Structured Neural Networks Through Manifold Identification and Variance Reduction
null
The 10th International Conference on Learning Representations, 2022
null
null
cs.LG math.OC stat.ML
http://creativecommons.org/licenses/by/4.0/
This paper proposes an algorithm (RMDA) for training neural networks (NNs) with a regularization term for promoting desired structures. RMDA incurs no computation beyond that of proximal SGD with momentum, and achieves variance reduction without requiring the objective function to be of the finite-sum form. Through the tool of manifold identification from nonlinear optimization, we prove that after a finite number of iterations, all iterates of RMDA possess a desired structure identical to that induced by the regularizer at the stationary point of asymptotic convergence, even in the presence of engineering tricks like data augmentation and dropout that complicate the training process. Experiments on training NNs with structured sparsity confirm that variance reduction is necessary for such an identification, and show that RMDA thus significantly outperforms existing methods for this task. For unstructured sparsity, RMDA also outperforms a state-of-the-art pruning method, validating the benefits of training structured NNs through regularization.
[ { "created": "Sun, 5 Dec 2021 16:23:53 GMT", "version": "v1" }, { "created": "Thu, 17 Mar 2022 01:50:58 GMT", "version": "v2" }, { "created": "Fri, 18 Mar 2022 10:36:17 GMT", "version": "v3" } ]
2022-05-02
[ [ "Huang", "Zih-Syuan", "" ], [ "Lee", "Ching-pei", "" ] ]
This paper proposes an algorithm (RMDA) for training neural networks (NNs) with a regularization term for promoting desired structures. RMDA incurs no computation beyond that of proximal SGD with momentum, and achieves variance reduction without requiring the objective function to be of the finite-sum form. Through the tool of manifold identification from nonlinear optimization, we prove that after a finite number of iterations, all iterates of RMDA possess a desired structure identical to that induced by the regularizer at the stationary point of asymptotic convergence, even in the presence of engineering tricks like data augmentation and dropout that complicate the training process. Experiments on training NNs with structured sparsity confirm that variance reduction is necessary for such an identification, and show that RMDA thus significantly outperforms existing methods for this task. For unstructured sparsity, RMDA also outperforms a state-of-the-art pruning method, validating the benefits of training structured NNs through regularization.
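The proximal step that produces exact zeros under an L1-type regularizer can be sketched as below; this is a toy momentum-plus-soft-thresholding update, not RMDA's precise dual-averaging rule, and all constants are assumptions.

```python
# Toy regularized update: momentum step followed by the L1 proximal operator
# (soft-thresholding), which yields the exact zeros behind structured sparsity.
import numpy as np

def soft_threshold(w: np.ndarray, tau: float) -> np.ndarray:
    # proximal operator of tau * ||w||_1 : shrinks weights toward exact zeros
    return np.sign(w) * np.maximum(np.abs(w) - tau, 0.0)

w = np.array([0.8, -0.05, 0.3, -0.6])
velocity = np.zeros_like(w)
grad = np.array([0.5, 0.1, -0.2, 0.4])    # stochastic gradient (assumed)
lr, momentum, lam = 0.1, 0.9, 1.0

velocity = momentum * velocity + grad
w = soft_threshold(w - lr * velocity, lr * lam)
print(w)   # small coordinates are zeroed -> structured sparsity
```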
2202.02537
Cyril Onwubiko PhD
Cyril Onwubiko and Karim Ouazzane
Multidimensional Cybersecurity Framework for Strategic Foresight
31 pages, 7 figures
Intl. Journal on Cyber Situational Awareness, Vol. 6, No. 1, 2021
10.22619/IJCSA.2021.100137
null
cs.CR cs.LG cs.SI
http://creativecommons.org/licenses/by/4.0/
Cybersecurity is now at the forefront of most organisational digital transformation agendas and national economic, social and political programmes. Hence its impact on society can no longer be seen as one-dimensional. The rise in national cybersecurity laws and regulations is a good indicator of its perceived importance to nations. The recent awakening to social and ethical transparency in society, coupled with sustainability issues, demonstrates the need for a paradigm shift in how cybersecurity discourse can happen. In response to this shift, a multidimensional cybersecurity framework for strategic foresight, underpinned by situational awareness, is proposed. The conceptual cybersecurity framework, comprising six domains: Physical, Cultural, Economic, Social, Political and Cyber, is discussed. The guiding principles underpinning the framework are outlined, followed by in-depth reflection on the Business, Operational, Technological and Human (BOTH) factors and their implications for strategic foresight for cybersecurity.
[ { "created": "Sat, 5 Feb 2022 12:30:31 GMT", "version": "v1" } ]
2022-02-08
[ [ "Onwubiko", "Cyril", "" ], [ "Ouazzane", "Karim", "" ] ]
Cybersecurity is now at the forefront of most organisational digital transformation agendas and national economic, social and political programmes. Hence its impact on society can no longer be seen as one-dimensional. The rise in national cybersecurity laws and regulations is a good indicator of its perceived importance to nations. The recent awakening to social and ethical transparency in society, coupled with sustainability issues, demonstrates the need for a paradigm shift in how cybersecurity discourse can happen. In response to this shift, a multidimensional cybersecurity framework for strategic foresight, underpinned by situational awareness, is proposed. The conceptual cybersecurity framework, comprising six domains: Physical, Cultural, Economic, Social, Political and Cyber, is discussed. The guiding principles underpinning the framework are outlined, followed by in-depth reflection on the Business, Operational, Technological and Human (BOTH) factors and their implications for strategic foresight for cybersecurity.
2008.07346
Federico Ruggeri
Federico Ruggeri, Francesca Lagioia, Marco Lippi, Paolo Torroni
Memory networks for consumer protection:unfairness exposed
null
null
null
null
cs.CY cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent work has demonstrated how data-driven AI methods can leverage consumer protection by supporting the automated analysis of legal documents. However, a shortcoming of data-driven approaches is poor explainability. We posit that in this domain useful explanations of classifier outcomes can be provided by resorting to legal rationales. We thus consider several configurations of memory-augmented neural networks where rationales are given a special role in the modeling of context knowledge. Our results show that rationales not only contribute to improve the classification accuracy, but are also able to offer meaningful, natural language explanations of otherwise opaque classifier outcomes.
[ { "created": "Fri, 24 Jul 2020 14:25:54 GMT", "version": "v1" } ]
2020-08-18
[ [ "Ruggeri", "Federico", "" ], [ "Lagioia", "Francesca", "" ], [ "Lippi", "Marco", "" ], [ "Torroni", "Paolo", "" ] ]
Recent work has demonstrated how data-driven AI methods can leverage consumer protection by supporting the automated analysis of legal documents. However, a shortcoming of data-driven approaches is poor explainability. We posit that in this domain useful explanations of classifier outcomes can be provided by resorting to legal rationales. We thus consider several configurations of memory-augmented neural networks where rationales are given a special role in the modeling of context knowledge. Our results show that rationales not only contribute to improve the classification accuracy, but are also able to offer meaningful, natural language explanations of otherwise opaque classifier outcomes.
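A minimal sketch of the core mechanism the abstract describes: attention over a memory of rationale embeddings, whose weights double as explanations. All dimensions, the toy encoder, and the scoring head are hypothetical, not the paper's configuration.

```python
import numpy as np

def attend(query, memory):
    # Dot-product attention over a memory of rationale embeddings.
    scores = memory @ query
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights, weights @ memory

# Hypothetical toy setup: 4 legal rationales, 8-dim embeddings.
rng = np.random.default_rng(1)
rationales = rng.normal(size=(4, 8))   # memory slots (one per rationale)
clause = rng.normal(size=8)            # encoded input clause (stand-in encoder)
w, context = attend(clause, rationales)
logit = context @ rng.normal(size=8)   # unfairness score from attended context
print("rationale weights:", np.round(w, 3))  # weights serve as the explanation
```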
1203.4434
Zaier Aida
Aida Zaier and Ridha Bouallegue
Blind Channel Estimation Enhancement for Mimo- OFDM Systems under High Mobility Conditions
8 pages, 4 figures
International Journal of Wireless & Mobile Networks (IJWMN) Vol. 4, No. 1, February 2012
10.5121/ijwmn.2012.4115
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we propose an enhancement of a blind channel estimator based on a subspace approach in a MIMO-OFDM (Multiple-Input Multiple-Output Orthogonal Frequency Division Multiplexing) context under high-mobility conditions. As is well known, the combination of MIMO and OFDM has been a major driver of the evolution of fourth-generation broadband wireless communications. Simulation results demonstrate the effectiveness of the approach for a 16-QAM modulation scheme, evaluated in terms of bit error rate (BER) and mean square error (MSE) versus signal-to-noise ratio (SNR).
[ { "created": "Tue, 20 Mar 2012 13:42:21 GMT", "version": "v1" } ]
2012-03-21
[ [ "Zaier", "Aida", "" ], [ "Bouallegue", "Ridha", "" ] ]
In this paper, we propose an enhancement of a blind channel estimator based on a subspace approach in a MIMO-OFDM (Multiple-Input Multiple-Output Orthogonal Frequency Division Multiplexing) context under high-mobility conditions. As is well known, the combination of MIMO and OFDM has been a major driver of the evolution of fourth-generation broadband wireless communications. Simulation results demonstrate the effectiveness of the approach for a 16-QAM modulation scheme, evaluated in terms of bit error rate (BER) and mean square error (MSE) versus signal-to-noise ratio (SNR).
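A minimal sketch of the subspace idea underlying such blind estimators, under simplifying assumptions (flat narrowband channel, BPSK symbols, no OFDM structure or mobility): the noise subspace of the received-signal covariance is orthogonal to the channel columns, which is what subspace methods exploit to recover the channel without pilots.

```python
import numpy as np

rng = np.random.default_rng(2)
n_rx, n_tx, n_snap = 8, 2, 500        # illustrative dimensions
H = rng.normal(size=(n_rx, n_tx)) + 1j * rng.normal(size=(n_rx, n_tx))
S = (rng.integers(0, 2, (n_tx, n_snap)) * 2 - 1).astype(complex)  # BPSK symbols
N = 0.1 * (rng.normal(size=(n_rx, n_snap)) + 1j * rng.normal(size=(n_rx, n_snap)))
X = H @ S + N                          # received snapshots

R = X @ X.conj().T / n_snap            # sample covariance of the received signal
eigval, eigvec = np.linalg.eigh(R)     # ascending eigenvalues
U_noise = eigvec[:, : n_rx - n_tx]     # eigenvectors of the smallest eigenvalues

# Orthogonality test: the noise subspace should be (nearly) orthogonal to H.
print(np.linalg.norm(U_noise.conj().T @ H))  # small value => subspace identified
```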
1112.1010
Catherine Bliss
Catherine A. Bliss, Isabel M. Kloumann, Kameron Decker Harris, Christopher M. Danforth, and Peter Sheridan Dodds
Twitter reciprocal reply networks exhibit assortativity with respect to happiness
22 pages, 21 figures, 5 tables, In press at the Journal of Computational Science
null
null
null
cs.SI physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The advent of social media has provided an extraordinary, if imperfect, 'big data' window into the form and evolution of social networks. Based on nearly 40 million message pairs posted to Twitter between September 2008 and February 2009, we construct and examine the revealed social network structure and dynamics over the time scales of days, weeks, and months. At the level of user behavior, we employ our recently developed hedonometric analysis methods to investigate patterns of sentiment expression. We find users' average happiness scores to be positively and significantly correlated with those of users one, two, and three links away. We strengthen our analysis by proposing and using a null model to test the effect of network topology on the assortativity of happiness. We also find evidence that more well connected users write happier status updates, with a transition occurring around Dunbar's number. More generally, our work provides evidence of a social sub-network structure within Twitter and raises several methodological points of interest with regard to social network reconstructions.
[ { "created": "Mon, 5 Dec 2011 17:27:09 GMT", "version": "v1" }, { "created": "Fri, 4 May 2012 17:20:03 GMT", "version": "v2" }, { "created": "Thu, 10 May 2012 19:33:56 GMT", "version": "v3" }, { "created": "Fri, 11 May 2012 13:39:29 GMT", "version": "v4" } ]
2012-05-14
[ [ "Bliss", "Catherine A.", "" ], [ "Kloumann", "Isabel M.", "" ], [ "Harris", "Kameron Decker", "" ], [ "Danforth", "Christopher M.", "" ], [ "Dodds", "Peter Sheridan", "" ] ]
The advent of social media has provided an extraordinary, if imperfect, 'big data' window into the form and evolution of social networks. Based on nearly 40 million message pairs posted to Twitter between September 2008 and February 2009, we construct and examine the revealed social network structure and dynamics over the time scales of days, weeks, and months. At the level of user behavior, we employ our recently developed hedonometric analysis methods to investigate patterns of sentiment expression. We find users' average happiness scores to be positively and significantly correlated with those of users one, two, and three links away. We strengthen our analysis by proposing and using a null model to test the effect of network topology on the assortativity of happiness. We also find evidence that more well connected users write happier status updates, with a transition occurring around Dunbar's number. More generally, our work provides evidence of a social sub-network structure within Twitter and raises several methodological points of interest with regard to social network reconstructions.
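A minimal sketch of attribute assortativity with a score-shuffling null model, as the abstract describes. The random edges and happiness scores are placeholders for the reciprocal-reply topology and hedonometric scores.

```python
import numpy as np

def attribute_assortativity(edges, score):
    # Pearson correlation of a node attribute across edge endpoints.
    a = np.array([score[u] for u, v in edges])
    b = np.array([score[v] for u, v in edges])
    return np.corrcoef(a, b)[0, 1]

rng = np.random.default_rng(3)
n = 200
edges = [(rng.integers(n), rng.integers(n)) for _ in range(1000)]
happiness = rng.normal(6.0, 0.5, size=n)   # hypothetical per-user scores

obs = attribute_assortativity(edges, happiness)
# Null model: shuffle scores over nodes, keeping the topology fixed.
null = [attribute_assortativity(edges, rng.permutation(happiness))
        for _ in range(100)]
print("observed:", obs, "null mean:", np.mean(null), "null std:", np.std(null))
```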
2107.04863
Florian Tambon
Florian Tambon, Giulio Antoniol and Foutse Khomh
HOMRS: High Order Metamorphic Relations Selector for Deep Neural Networks
33 pages
null
null
null
cs.LG cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep Neural Network (DNN) applications are increasingly becoming a part of our everyday life, from medical applications to autonomous cars. Traditional validation of DNNs relies on accuracy measures; however, the existence of adversarial examples has highlighted the limitations of these accuracy measures, raising concerns especially when DNNs are integrated into safety-critical systems. In this paper, we present HOMRS, an approach to boost metamorphic testing by automatically building a small optimized set of high-order metamorphic relations from an initial set of elementary metamorphic relations. HOMRS' backbone is a multi-objective search; it exploits ideas drawn from traditional systems testing such as code coverage, test case and path diversity, as well as input validation. We applied HOMRS to MNIST/LeNet and SVHN/VGG and we report evidence that it builds a small but effective set of high-order transformations that generalize well to the input data distribution. Moreover, compared to similar generation techniques such as DeepXplore, we show that our distribution-based approach is more effective, generating valid transformations from an uncertainty quantification point of view, while requiring less computation time by leveraging the generalization ability of the approach.
[ { "created": "Sat, 10 Jul 2021 15:40:12 GMT", "version": "v1" }, { "created": "Tue, 21 Dec 2021 13:18:24 GMT", "version": "v2" } ]
2021-12-22
[ [ "Tambon", "Florian", "" ], [ "Antoniol", "Giulio", "" ], [ "Khomh", "Foutse", "" ] ]
Deep Neural Network (DNN) applications are increasingly becoming a part of our everyday life, from medical applications to autonomous cars. Traditional validation of DNNs relies on accuracy measures; however, the existence of adversarial examples has highlighted the limitations of these accuracy measures, raising concerns especially when DNNs are integrated into safety-critical systems. In this paper, we present HOMRS, an approach to boost metamorphic testing by automatically building a small optimized set of high-order metamorphic relations from an initial set of elementary metamorphic relations. HOMRS' backbone is a multi-objective search; it exploits ideas drawn from traditional systems testing such as code coverage, test case and path diversity, as well as input validation. We applied HOMRS to MNIST/LeNet and SVHN/VGG and we report evidence that it builds a small but effective set of high-order transformations that generalize well to the input data distribution. Moreover, compared to similar generation techniques such as DeepXplore, we show that our distribution-based approach is more effective, generating valid transformations from an uncertainty quantification point of view, while requiring less computation time by leveraging the generalization ability of the approach.
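A minimal sketch of the composition-and-check core of high-order metamorphic relations, under a toy classifier. HOMRS additionally drives the selection with a multi-objective search over coverage and diversity objectives, which is not shown; the elementary relations here are illustrative.

```python
import itertools
import numpy as np

# Elementary metamorphic relations on an image (illustrative, not the paper's set).
mrs = {
    "shift": lambda x: np.roll(x, 1, axis=0),
    "brighten": lambda x: np.clip(x + 0.1, 0, 1),
    "flip": lambda x: x[:, ::-1],
}

def compose(names):
    # Chain elementary relations into one high-order relation.
    def t(x):
        for n in names:
            x = mrs[n](x)
        return x
    return t

def violations(model, images, transform):
    # A metamorphic failure: the prediction changes under the transform.
    return sum(model(x) != model(transform(x)) for x in images)

model = lambda x: int(x.mean() > 0.5)           # stand-in classifier
images = [np.random.rand(8, 8) for _ in range(50)]
for combo in itertools.combinations(mrs, 2):    # all second-order relations
    print(combo, violations(model, images, compose(combo)))
```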
1011.3049
Danupon Nanongkai
Atish Das Sarma, Stephan Holzer, Liah Kor, Amos Korman, Danupon Nanongkai, Gopal Pandurangan, David Peleg, Roger Wattenhofer
Distributed Verification and Hardness of Distributed Approximation
Submitted to Journal (special issue of STOC 2011)
null
null
null
cs.DC cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the {\em verification} problem in distributed networks, stated as follows. Let $H$ be a subgraph of a network $G$ where each vertex of $G$ knows which edges incident on it are in $H$. We would like to verify whether $H$ has some properties, e.g., if it is a tree or if it is connected. We would like to perform this verification in a decentralized fashion via a distributed algorithm. The time complexity of verification is measured as the number of rounds of distributed communication. In this paper we initiate a systematic study of distributed verification, and give almost tight lower bounds on the running time of distributed verification algorithms for many fundamental problems such as connectivity, spanning connected subgraph, and $s-t$ cut verification. We then show applications of these results in deriving strong unconditional time lower bounds on the {\em hardness of distributed approximation} for many classical optimization problems including minimum spanning tree, shortest paths, and minimum cut. Many of these results are the first non-trivial lower bounds for both exact and approximate distributed computation and they resolve previous open questions. Moreover, our unconditional lower bound of approximating minimum spanning tree (MST) subsumes and improves upon the previous hardness of approximation bound of Elkin [STOC 2004] as well as the lower bound for (exact) MST computation of Peleg and Rubinovich [FOCS 1999]. Our result implies that there can be no distributed approximation algorithm for MST that is significantly faster than the current exact algorithm, for {\em any} approximation factor. Our lower bound proofs show an interesting connection between communication complexity and distributed computing which turns out to be useful in establishing the time complexity of exact and approximate distributed computation of many problems.
[ { "created": "Fri, 12 Nov 2010 21:06:13 GMT", "version": "v1" }, { "created": "Mon, 28 Mar 2011 00:02:26 GMT", "version": "v2" }, { "created": "Sat, 15 Oct 2011 17:01:07 GMT", "version": "v3" } ]
2011-10-18
[ [ "Sarma", "Atish Das", "" ], [ "Holzer", "Stephan", "" ], [ "Kor", "Liah", "" ], [ "Korman", "Amos", "" ], [ "Nanongkai", "Danupon", "" ], [ "Pandurangan", "Gopal", "" ], [ "Peleg", "David", "" ], [ "Wattenhofer", "Roger", "" ] ]
We study the {\em verification} problem in distributed networks, stated as follows. Let $H$ be a subgraph of a network $G$ where each vertex of $G$ knows which edges incident on it are in $H$. We would like to verify whether $H$ has some properties, e.g., if it is a tree or if it is connected. We would like to perform this verification in a decentralized fashion via a distributed algorithm. The time complexity of verification is measured as the number of rounds of distributed communication. In this paper we initiate a systematic study of distributed verification, and give almost tight lower bounds on the running time of distributed verification algorithms for many fundamental problems such as connectivity, spanning connected subgraph, and $s-t$ cut verification. We then show applications of these results in deriving strong unconditional time lower bounds on the {\em hardness of distributed approximation} for many classical optimization problems including minimum spanning tree, shortest paths, and minimum cut. Many of these results are the first non-trivial lower bounds for both exact and approximate distributed computation and they resolve previous open questions. Moreover, our unconditional lower bound of approximating minimum spanning tree (MST) subsumes and improves upon the previous hardness of approximation bound of Elkin [STOC 2004] as well as the lower bound for (exact) MST computation of Peleg and Rubinovich [FOCS 1999]. Our result implies that there can be no distributed approximation algorithm for MST that is significantly faster than the current exact algorithm, for {\em any} approximation factor. Our lower bound proofs show an interesting connection between communication complexity and distributed computing which turns out to be useful in establishing the time complexity of exact and approximate distributed computation of many problems.
1804.10520
Matthew England Dr
Zongyan Huang, Matthew England, David Wilson, James H. Davenport, and Lawrence C. Paulson
Using Machine Learning to Improve Cylindrical Algebraic Decomposition
arXiv admin note: text overlap with arXiv:1608.04219, arXiv:1404.6369
Mathematics in Computer Science, 13:4, pp. 461 - 488, Springer, 2019
10.1007/s11786-019-00394-8
null
cs.SC cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cylindrical Algebraic Decomposition (CAD) is a key tool in computational algebraic geometry, best known as a procedure to enable Quantifier Elimination over real-closed fields. However, it has a worst case complexity doubly exponential in the size of the input, which is often encountered in practice. It has been observed that for many problems a change in algorithm settings or problem formulation can cause huge differences in runtime costs, changing problem instances from intractable to easy. A number of heuristics have been developed to help with such choices, but the complicated nature of the geometric relationships involved means these are imperfect and can sometimes make poor choices. We investigate the use of machine learning (specifically support vector machines) to make such choices instead. Machine learning is the process of fitting a computer model to a complex function based on properties learned from measured data. In this paper we apply it in two case studies: the first to select between heuristics for choosing a CAD variable ordering; the second to identify when a CAD problem instance would benefit from Groebner Basis preconditioning. These appear to be the first such applications of machine learning to Symbolic Computation. We demonstrate in both cases that the machine learned choice outperforms human developed heuristics.
[ { "created": "Thu, 26 Apr 2018 12:56:51 GMT", "version": "v1" } ]
2019-11-25
[ [ "Huang", "Zongyan", "" ], [ "England", "Matthew", "" ], [ "Wilson", "David", "" ], [ "Davenport", "James H.", "" ], [ "Paulson", "Lawrence C.", "" ] ]
Cylindrical Algebraic Decomposition (CAD) is a key tool in computational algebraic geometry, best known as a procedure to enable Quantifier Elimination over real-closed fields. However, it has a worst case complexity doubly exponential in the size of the input, which is often encountered in practice. It has been observed that for many problems a change in algorithm settings or problem formulation can cause huge differences in runtime costs, changing problem instances from intractable to easy. A number of heuristics have been developed to help with such choices, but the complicated nature of the geometric relationships involved means these are imperfect and can sometimes make poor choices. We investigate the use of machine learning (specifically support vector machines) to make such choices instead. Machine learning is the process of fitting a computer model to a complex function based on properties learned from measured data. In this paper we apply it in two case studies: the first to select between heuristics for choosing a CAD variable ordering; the second to identify when a CAD problem instance would benefit from Groebner Basis preconditioning. These appear to be the first such applications of machine learning to Symbolic Computation. We demonstrate in both cases that the machine learned choice outperforms human developed heuristics.
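A minimal sketch of the machine-learning side of the first case study: a support vector machine that picks between two variable-ordering heuristics. The features here are random placeholders; in the paper they are computed from the input polynomial systems.

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical per-instance features (e.g., degrees, variable occurrence counts)
# and labels indicating which ordering heuristic was fastest on each instance.
rng = np.random.default_rng(4)
X = rng.normal(size=(300, 6))                  # placeholder feature vectors
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)  # 0/1: which heuristic to prefer

clf = SVC(kernel="rbf").fit(X[:200], y[:200])
print("held-out accuracy:", clf.score(X[200:], y[200:]))
# At solve time: extract features of the new polynomial system, then
# run CAD with whichever heuristic the classifier predicts.
```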
2204.06397
Anja Jankovic
Anja Jankovic, Diederick Vermetten, Ana Kostovska, Jacob de Nobel, Tome Eftimov, Carola Doerr
Trajectory-based Algorithm Selection with Warm-starting
null
null
null
null
cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Landscape-aware algorithm selection approaches have so far mostly relied on landscape feature extraction as a preprocessing step, independent of the execution of optimization algorithms in the portfolio. This introduces a significant overhead in computational cost for many practical applications, as features are extracted and computed via sampling and evaluating the problem instance at hand, similarly to what the optimization algorithm would perform anyway within its search trajectory. As suggested in Jankovic et al. (EvoAPPs 2021), trajectory-based algorithm selection circumvents the problem of costly feature extraction by computing landscape features from points that a solver sampled and evaluated during the optimization process. Features computed in this manner are used to train algorithm performance regression models, upon which a per-run algorithm selector is then built. In this work, we apply the trajectory-based approach to a portfolio of five algorithms. We study the quality and accuracy of performance regression and algorithm selection models in the scenario of predicting different algorithm performances after a fixed budget of function evaluations. We rely on landscape features of the problem instance computed using one portion of the aforementioned budget of the same function evaluations. Moreover, we consider the possibility of switching between the solvers once, which requires them to be warm-started, i.e., when we switch, the second solver continues the optimization process already initialized appropriately, making use of the information collected by the first solver. In this new context, we show promising performance of the trajectory-based per-run algorithm selection with warm-starting.
[ { "created": "Wed, 13 Apr 2022 14:00:55 GMT", "version": "v1" }, { "created": "Tue, 7 Jun 2022 11:45:35 GMT", "version": "v2" } ]
2022-06-08
[ [ "Jankovic", "Anja", "" ], [ "Vermetten", "Diederick", "" ], [ "Kostovska", "Ana", "" ], [ "de Nobel", "Jacob", "" ], [ "Eftimov", "Tome", "" ], [ "Doerr", "Carola", "" ] ]
Landscape-aware algorithm selection approaches have so far mostly relied on landscape feature extraction as a preprocessing step, independent of the execution of optimization algorithms in the portfolio. This introduces a significant overhead in computational cost for many practical applications, as features are extracted and computed via sampling and evaluating the problem instance at hand, similarly to what the optimization algorithm would perform anyway within its search trajectory. As suggested in Jankovic et al. (EvoAPPs 2021), trajectory-based algorithm selection circumvents the problem of costly feature extraction by computing landscape features from points that a solver sampled and evaluated during the optimization process. Features computed in this manner are used to train algorithm performance regression models, upon which a per-run algorithm selector is then built. In this work, we apply the trajectory-based approach to a portfolio of five algorithms. We study the quality and accuracy of performance regression and algorithm selection models in the scenario of predicting different algorithm performances after a fixed budget of function evaluations. We rely on landscape features of the problem instance computed using one portion of the aforementioned budget of the same function evaluations. Moreover, we consider the possibility of switching between the solvers once, which requires them to be warm-started, i.e., when we switch, the second solver continues the optimization process already initialized appropriately, making use of the information collected by the first solver. In this new context, we show promising performance of the trajectory-based per-run algorithm selection with warm-starting.
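A minimal sketch of per-run, trajectory-based selection with warm-starting: features are computed from points the first solver already evaluated, pre-trained regressors score candidate solvers, and the chosen solver is warm-started from the best point found. Solver names, features, and the randomly fitted models are all illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def trajectory_features(X, y):
    # Cheap landscape features from points already evaluated by the first solver.
    return np.array([y.mean(), y.std(), y.min(),
                     np.corrcoef(np.linalg.norm(X, axis=1), y)[0, 1]])

rng = np.random.default_rng(5)
f = lambda x: ((x - 0.3) ** 2).sum()          # toy objective
X = rng.uniform(-1, 1, size=(50, 5))          # first-phase samples
y = np.apply_along_axis(f, 1, X)
feats = trajectory_features(X, y)

# Hypothetical pre-trained per-solver performance models (randomly fitted here).
models = {s: RandomForestRegressor().fit(rng.normal(size=(40, 4)),
                                         rng.normal(size=40))
          for s in ["CMA-ES", "DE", "PSO"]}
best = min(models, key=lambda s: models[s].predict(feats.reshape(1, -1))[0])
x_warm = X[y.argmin()]                        # warm-start the chosen solver here
print(best, x_warm)
```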
1002.0406
Christoph Studer
Christoph Studer, Markus Wenk, Andreas Burg
MIMO Transmission with Residual Transmit-RF Impairments
to be presented at the International ITG Workshop on Smart Antennas - WSA 2010
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Physical transceiver implementations for multiple-input multiple-output (MIMO) wireless communication systems suffer from transmit-RF (Tx-RF) impairments. In this paper, we study the effect on channel capacity and error-rate performance of residual Tx-RF impairments that defy proper compensation. In particular, we demonstrate that such residual distortions severely degrade the performance of (near-)optimum MIMO detection algorithms. To mitigate this performance loss, we propose an efficient algorithm, which is based on an i.i.d. Gaussian model for the distortion caused by these impairments. In order to validate this model, we provide measurement results based on a 4-stream Tx-RF chain implementation for MIMO orthogonal frequency-division multiplexing (OFDM).
[ { "created": "Tue, 2 Feb 2010 07:37:18 GMT", "version": "v1" } ]
2010-02-03
[ [ "Studer", "Christoph", "" ], [ "Wenk", "Markus", "" ], [ "Burg", "Andreas", "" ] ]
Physical transceiver implementations for multiple-input multiple-output (MIMO) wireless communication systems suffer from transmit-RF (Tx-RF) impairments. In this paper, we study the effect on channel capacity and error-rate performance of residual Tx-RF impairments that defy proper compensation. In particular, we demonstrate that such residual distortions severely degrade the performance of (near-)optimum MIMO detection algorithms. To mitigate this performance loss, we propose an efficient algorithm, which is based on an i.i.d. Gaussian model for the distortion caused by these impairments. In order to validate this model, we provide measurement results based on a 4-stream Tx-RF chain implementation for MIMO orthogonal frequency-division multiplexing (OFDM).
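A minimal sketch of the i.i.d. Gaussian model for residual Tx-RF distortion: the distortion is added to the transmit signal before the channel, and even a small error-vector magnitude degrades detection. A zero-forcing detector stands in for the (near-)optimum detectors studied in the paper; all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)
n_tx, n_rx, n_sym = 4, 4, 10000
H = (rng.normal(size=(n_rx, n_tx)) +
     1j * rng.normal(size=(n_rx, n_tx))) / np.sqrt(2)

# Unit-energy QPSK symbols.
bits = lambda: rng.integers(0, 2, (n_tx, n_sym)) * 2 - 1
x = (bits() + 1j * bits()) / np.sqrt(2)

evm = 0.05                                   # residual Tx-RF error magnitude
d = evm * (rng.normal(size=x.shape) + 1j * rng.normal(size=x.shape)) / np.sqrt(2)
n = 0.05 * (rng.normal(size=(n_rx, n_sym)) +
            1j * rng.normal(size=(n_rx, n_sym))) / np.sqrt(2)

y = H @ (x + d) + n                          # distortion enters *before* the channel
x_hat = np.linalg.pinv(H) @ y                # zero-forcing detection
err = ((np.sign(x_hat.real) != np.sign(x.real)) |
       (np.sign(x_hat.imag) != np.sign(x.imag)))
print("symbol error rate with Tx impairments:", err.mean())
```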
1806.06639
Marco Livesu
Matteo Bracci and Marco Tarini and Nico Pietroni and Marco Livesu and Paolo Cignoni
HexaLab.net: an online viewer for hexahedral meshes
null
Computer-Aided Design, Volume 110, May 2019, Pages 24-36
10.1016/j.cad.2018.12.003
null
cs.GR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce HexaLab: a WebGL application for real-time visualization, exploration and assessment of hexahedral meshes. HexaLab can be used by simply opening www.hexalab.net. Our visualization tool targets both users and scholars. Practitioners who employ hexmeshes for Finite Element Analysis can readily check mesh quality and assess its usability for simulation. Researchers involved in mesh generation may use HexaLab to perform a detailed analysis of the mesh structure, isolating weak points and testing new solutions to improve on the state of the art, and to generate high-quality images. To this end, we support a wide variety of visualization and volume inspection tools. Our system also offers immediate access to a repository containing all the publicly available meshes produced with the most recent techniques for hexmesh generation. We believe HexaLab, providing a common tool for visualizing, assessing and distributing results, will push forward the recent drive for replicability in our scientific community.
[ { "created": "Mon, 18 Jun 2018 12:58:08 GMT", "version": "v1" }, { "created": "Fri, 15 Mar 2019 11:04:43 GMT", "version": "v2" } ]
2019-03-18
[ [ "Bracci", "Matteo", "" ], [ "Tarini", "Marco", "" ], [ "Pietroni", "Nico", "" ], [ "Livesu", "Marco", "" ], [ "Cignoni", "Paolo", "" ] ]
We introduce HexaLab: a WebGL application for real-time visualization, exploration and assessment of hexahedral meshes. HexaLab can be used by simply opening www.hexalab.net. Our visualization tool targets both users and scholars. Practitioners who employ hexmeshes for Finite Element Analysis can readily check mesh quality and assess its usability for simulation. Researchers involved in mesh generation may use HexaLab to perform a detailed analysis of the mesh structure, isolating weak points and testing new solutions to improve on the state of the art, and to generate high-quality images. To this end, we support a wide variety of visualization and volume inspection tools. Our system also offers immediate access to a repository containing all the publicly available meshes produced with the most recent techniques for hexmesh generation. We believe HexaLab, providing a common tool for visualizing, assessing and distributing results, will push forward the recent drive for replicability in our scientific community.
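One of the standard per-element quality measures such a viewer can report is the scaled Jacobian of a hexahedron; a minimal sketch follows (the corner ordering and adjacency assume the usual VTK convention, and this is not HexaLab's own code).

```python
import numpy as np

def scaled_jacobian(corners):
    """Minimum scaled Jacobian over the 8 corners of a hexahedron.

    corners: (8, 3) array in VTK corner ordering; 1.0 is a perfect cube,
    values near 0 or below indicate degenerate or inverted elements.
    """
    # Indices of the three edge neighbors meeting at each corner.
    adj = [(1, 3, 4), (2, 0, 5), (3, 1, 6), (0, 2, 7),
           (7, 5, 0), (4, 6, 1), (5, 7, 2), (6, 4, 3)]
    worst = 1.0
    for c, (i, j, k) in enumerate(adj):
        e = [corners[n] - corners[c] for n in (i, j, k)]
        e = [v / np.linalg.norm(v) for v in e]   # normalized edge vectors
        worst = min(worst, np.linalg.det(np.stack(e)))
    return worst

cube = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0],
                 [0, 0, 1], [1, 0, 1], [1, 1, 1], [0, 1, 1]], float)
print(scaled_jacobian(cube))  # 1.0 for the unit cube
```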
2209.07936
Zhuoruo Zhang
Zhuoruo Zhang, Chenyang Yu, Rui Chang, Mingshuai Chen, Bo Feng, He Huang, Qinming Dai, Wenbo Shen, Yongwang Zhao
PA-Boot: A Formally Verified Authentication Protocol for Multiprocessor Secure Boot
null
null
null
null
cs.CR cs.AR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Hardware supply-chain attacks are raising significant security threats to the boot process of multiprocessor systems. This paper identifies a new, prevalent hardware supply-chain attack surface that can bypass multiprocessor secure boot due to the absence of processor-authentication mechanisms. To defend against such attacks, we present PA-Boot, the first formally verified processor-authentication protocol for secure boot in multiprocessor systems. PA-Boot is proved functionally correct and is guaranteed to detect multiple adversarial behaviors, e.g., processor replacements, man-in-the-middle attacks, and tampering with certificates. The fine-grained formalization of PA-Boot and its fully mechanized security proofs are carried out in the Isabelle/HOL theorem prover with 306 lemmas/theorems and ~7,100 LoC. Experiments on a proof-of-concept implementation indicate that PA-Boot can effectively identify boot-process attacks with only minor overhead, thereby improving the security of multiprocessor systems.
[ { "created": "Fri, 16 Sep 2022 13:54:43 GMT", "version": "v1" }, { "created": "Thu, 25 Apr 2024 03:04:32 GMT", "version": "v2" } ]
2024-04-26
[ [ "Zhang", "Zhuoruo", "" ], [ "Yu", "Chenyang", "" ], [ "Chang", "Rui", "" ], [ "Chen", "Mingshuai", "" ], [ "Feng", "Bo", "" ], [ "Huang", "He", "" ], [ "Dai", "Qinming", "" ], [ "Shen", "Wenbo", "" ], [ "Zhao", "Yongwang", "" ] ]
Hardware supply-chain attacks are raising significant security threats to the boot process of multiprocessor systems. This paper identifies a new, prevalent hardware supply-chain attack surface that can bypass multiprocessor secure boot due to the absence of processor-authentication mechanisms. To defend against such attacks, we present PA-Boot, the first formally verified processor-authentication protocol for secure boot in multiprocessor systems. PA-Boot is proved functionally correct and is guaranteed to detect multiple adversarial behaviors, e.g., processor replacements, man-in-the-middle attacks, and tampering with certificates. The fine-grained formalization of PA-Boot and its fully mechanized security proofs are carried out in the Isabelle/HOL theorem prover with 306 lemmas/theorems and ~7,100 LoC. Experiments on a proof-of-concept implementation indicate that PA-Boot can effectively identify boot-process attacks with only minor overhead, thereby improving the security of multiprocessor systems.
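A toy sketch of the general shape of processor authentication at boot, as a shared-key challenge-response. This is purely illustrative: PA-Boot itself is certificate-based, and its actual protocol and security guarantees live in the Isabelle/HOL formalization, not in code like this.

```python
import hashlib
import hmac
import os

# Hypothetical shared device key provisioned to genuine processors.
DEVICE_KEY = os.urandom(32)

def prover_respond(key, challenge, cpu_id):
    # Processor proves knowledge of the key, bound to its identity.
    return hmac.new(key, challenge + cpu_id, hashlib.sha256).digest()

def verifier_check(key, challenge, cpu_id, response):
    # Boot firmware recomputes and compares in constant time.
    expected = hmac.new(key, challenge + cpu_id, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = os.urandom(16)           # fresh per boot, prevents replay
cpu_id = b"CPU-0"
resp = prover_respond(DEVICE_KEY, challenge, cpu_id)
assert verifier_check(DEVICE_KEY, challenge, cpu_id, resp)
# A replaced processor without the key cannot produce a valid response:
assert not verifier_check(DEVICE_KEY, challenge, cpu_id, os.urandom(32))
print("processor authenticated")
```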
1406.5162
Baichuan Zhang
Baichuan Zhang, Tanay Kumar Saha and Mohammad Al Hasan
Name Disambiguation from link data in a collaboration graph using temporal and topological features
The short version of this paper has been accepted to ASONAM 2014
null
null
null
cs.IR cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In a social community, multiple persons may share the same name, phone number or some other identifying attributes. This, along with other phenomena such as name abbreviation, name misspelling, and human error, leads to erroneous aggregation of records of multiple persons under a single reference. Such mistakes affect the performance of document retrieval, web search, and database integration, and, more importantly, lead to improper attribution of credit (or blame). The task of entity disambiguation partitions the records belonging to multiple persons with the objective that each decomposed partition is composed of records of a unique person. Existing solutions to this task use either biographical attributes or auxiliary features that are collected from external sources, such as Wikipedia. However, for many scenarios such auxiliary features are not available, or they are costly to obtain. Moreover, collecting biographical or external data carries a risk of privacy violation. In this work, we propose a method for solving the entity disambiguation task from link information obtained from a collaboration network. Our method is non-intrusive with respect to privacy, as it uses only the time-stamped graph topology of an anonymized network. Experimental results on two real-life academic collaboration networks show that the proposed method has satisfactory performance.
[ { "created": "Thu, 19 Jun 2014 19:22:33 GMT", "version": "v1" }, { "created": "Sat, 13 Feb 2016 16:07:38 GMT", "version": "v2" }, { "created": "Thu, 18 Feb 2016 19:39:13 GMT", "version": "v3" } ]
2016-02-19
[ [ "Zhang", "Baichuan", "" ], [ "Saha", "Tanay Kumar", "" ], [ "Hasan", "Mohammad Al", "" ] ]
In a social community, multiple persons may share the same name, phone number or some other identifying attributes. This, along with other phenomena such as name abbreviation, name misspelling, and human error, leads to erroneous aggregation of records of multiple persons under a single reference. Such mistakes affect the performance of document retrieval, web search, and database integration, and, more importantly, lead to improper attribution of credit (or blame). The task of entity disambiguation partitions the records belonging to multiple persons with the objective that each decomposed partition is composed of records of a unique person. Existing solutions to this task use either biographical attributes or auxiliary features that are collected from external sources, such as Wikipedia. However, for many scenarios such auxiliary features are not available, or they are costly to obtain. Moreover, collecting biographical or external data carries a risk of privacy violation. In this work, we propose a method for solving the entity disambiguation task from link information obtained from a collaboration network. Our method is non-intrusive with respect to privacy, as it uses only the time-stamped graph topology of an anonymized network. Experimental results on two real-life academic collaboration networks show that the proposed method has satisfactory performance.
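A minimal sketch of partitioning records using only time-stamped graph topology: a distance mixing co-author (neighbor) overlap with temporal gap, then hierarchical clustering. The records, distance weights, and cluster count are illustrative, not the paper's method.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering  # sklearn >= 1.2 for `metric`

# Each ambiguous record = one paper with a timestamp and a set of co-author ids.
records = [
    {"year": 2005, "coauthors": {"a", "b"}},
    {"year": 2006, "coauthors": {"a", "c"}},
    {"year": 2012, "coauthors": {"x", "y"}},
    {"year": 2013, "coauthors": {"y", "z"}},
]

def distance(r, s):
    # Combine topological overlap (Jaccard) with temporal gap (illustrative weights).
    union = len(r["coauthors"] | s["coauthors"])
    jacc = len(r["coauthors"] & s["coauthors"]) / union if union else 0.0
    return (1 - jacc) + 0.05 * abs(r["year"] - s["year"])

n = len(records)
D = np.array([[distance(records[i], records[j]) for j in range(n)] for i in range(n)])
labels = AgglomerativeClustering(n_clusters=2, metric="precomputed",
                                 linkage="average").fit_predict(D)
print(labels)  # records 0-1 and 2-3 split into two underlying persons
```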
2112.10457
Or Toledano
Or Toledano, Yanir Marmor, Dov Gertz
Image Animation with Keypoint Mask
null
null
10.13140/RG.2.2.16342.16968
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Motion transfer is the task of synthesizing future video frames from a single source image according to the motion in a given driving video. To solve it, we face the challenging complexity of motion representation and the unknown relations between the driving video and the source image. Despite its difficulty, this problem has attracted great interest from researchers in recent years, with gradual improvements. The goal is often framed as the decoupling of motion and appearance, which may be solved by extracting the motion from keypoint movement. We chose to tackle the generic, unsupervised setting, where we need to apply animation to any arbitrary object, without any domain-specific model for the structure of the input. In this work, we extract the structure from a keypoint heatmap, without an explicit motion representation. Then, the structures from the image and the video are extracted to warp the image according to the video, by a deep generator. We suggest two variants of the structure, taken from different steps in the keypoint module, and show superior qualitative pose transfer and quantitative scores.
[ { "created": "Mon, 20 Dec 2021 11:35:06 GMT", "version": "v1" }, { "created": "Tue, 21 Dec 2021 22:15:23 GMT", "version": "v2" } ]
2021-12-23
[ [ "Toledano", "Or", "" ], [ "Marmor", "Yanir", "" ], [ "Gertz", "Dov", "" ] ]
Motion transfer is the task of synthesizing future video frames from a single source image according to the motion in a given driving video. To solve it, we face the challenging complexity of motion representation and the unknown relations between the driving video and the source image. Despite its difficulty, this problem has attracted great interest from researchers in recent years, with gradual improvements. The goal is often framed as the decoupling of motion and appearance, which may be solved by extracting the motion from keypoint movement. We chose to tackle the generic, unsupervised setting, where we need to apply animation to any arbitrary object, without any domain-specific model for the structure of the input. In this work, we extract the structure from a keypoint heatmap, without an explicit motion representation. Then, the structures from the image and the video are extracted to warp the image according to the video, by a deep generator. We suggest two variants of the structure, taken from different steps in the keypoint module, and show superior qualitative pose transfer and quantitative scores.
1411.3071
Sunil Kumar Prof.
Sunil Kumar, Priya Ranjan, R. Radhakrishnan
EMEEDP: Enhanced Multi-hop Energy Efficient Distributed Protocol for Heterogeneous Wireless Sensor Network
6 pages, 4 figures. arXiv admin note: substantial text overlap with arXiv:1409.1412 by other authors
null
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In a WSN (Wireless Sensor Network), every sensor node senses data and transmits it to the CH (Cluster Head) or BS (Base Station). Sensors are randomly deployed in unreachable areas, where battery replacement or recharging is not possible. For this reason, energy conservation is an important design goal when developing routing and distributed protocols to increase the lifetime of a WSN. In this paper, an enhanced energy-efficient distributed protocol for heterogeneous WSNs is reported. EMEEDP is proposed for heterogeneous WSNs to increase the lifetime of the network. An efficient algorithm is presented in the form of a flowchart, and based on various clustering equations it is shown that the proposed scheme achieves a longer lifetime with improved QoS parameters compared to MEEP. A WSN was implemented and tested using a Raspberry Pi device as the base station, temperature sensors as nodes, and xively.com as the cloud. Users access the data from xively.com over the Internet for decision-making or business purposes.
[ { "created": "Wed, 12 Nov 2014 05:19:43 GMT", "version": "v1" }, { "created": "Fri, 14 Nov 2014 16:37:20 GMT", "version": "v2" } ]
2014-11-17
[ [ "Kumar", "Sunil", "" ], [ "Ranjan", "Priya", "" ], [ "Radhakrishnan", "R.", "" ] ]
In a WSN (Wireless Sensor Network), every sensor node senses data and transmits it to the CH (Cluster Head) or BS (Base Station). Sensors are randomly deployed in unreachable areas, where battery replacement or recharging is not possible. For this reason, energy conservation is an important design goal when developing routing and distributed protocols to increase the lifetime of a WSN. In this paper, an enhanced energy-efficient distributed protocol for heterogeneous WSNs is reported. EMEEDP is proposed for heterogeneous WSNs to increase the lifetime of the network. An efficient algorithm is presented in the form of a flowchart, and based on various clustering equations it is shown that the proposed scheme achieves a longer lifetime with improved QoS parameters compared to MEEP. A WSN was implemented and tested using a Raspberry Pi device as the base station, temperature sensors as nodes, and xively.com as the cloud. Users access the data from xively.com over the Internet for decision-making or business purposes.
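A generic sketch of the energy-aware cluster-head election that protocols in this family build on (a LEACH-style threshold biased by residual energy). The election rule and constants are illustrative, not EMEEDP's exact algorithm.

```python
import random

def select_cluster_heads(nodes, p=0.1, round_no=1):
    """Probabilistic cluster-head election (LEACH-style threshold).

    nodes: dict node_id -> residual energy. A node with more residual
    energy gets a proportionally higher chance of becoming a head.
    """
    total = sum(nodes.values())
    threshold = p / (1 - p * (round_no % int(round(1 / p))))
    heads = [nid for nid, e in nodes.items()
             if random.random() < threshold * (e / total) * len(nodes)]
    return heads or [max(nodes, key=nodes.get)]   # always at least one head

random.seed(7)
nodes = {f"n{i}": random.uniform(0.2, 1.0) for i in range(20)}  # joules left
print(select_cluster_heads(nodes, p=0.1, round_no=3))
```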
1708.08989
Yu Zhao
Yu Zhao, Rennong Yang, Guillaume Chevalier, Maoguo Gong
Deep Residual Bidir-LSTM for Human Activity Recognition Using Wearable Sensors
null
null
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Human activity recognition (HAR) has become a popular topic in research because of its wide application. With the development of deep learning, new ideas have appeared to address HAR problems. Here, a deep network architecture using residual bidirectional long short-term memory (LSTM) cells is proposed. The advantages of the new network include that a bidirectional connection can concatenate the positive time direction (forward state) and the negative time direction (backward state). Second, residual connections between stacked cells act as highways for gradients, which can pass underlying information directly to the upper layer, effectively avoiding the gradient vanishing problem. Generally, the proposed network shows improvements on both the temporal (using bidirectional cells) and the spatial (residual connections stacked deeply) dimensions, aiming to enhance the recognition rate. When tested with the Opportunity data set and the public domain UCI data set, the accuracy was increased by 4.78% and 3.68%, respectively, compared with previously reported results. Finally, the confusion matrix of the public domain UCI data set was analyzed.
[ { "created": "Tue, 22 Aug 2017 11:02:13 GMT", "version": "v1" }, { "created": "Thu, 7 Sep 2017 07:36:31 GMT", "version": "v2" } ]
2017-09-08
[ [ "Zhao", "Yu", "" ], [ "Yang", "Rennong", "" ], [ "Chevalier", "Guillaume", "" ], [ "Gong", "Maoguo", "" ] ]
Human activity recognition (HAR) has become a popular topic in research because of its wide application. With the development of deep learning, new ideas have appeared to address HAR problems. Here, a deep network architecture using residual bidirectional long short-term memory (LSTM) cells is proposed. The advantages of the new network include that a bidirectional connection can concatenate the positive time direction (forward state) and the negative time direction (backward state). Second, residual connections between stacked cells act as highways for gradients, which can pass underlying information directly to the upper layer, effectively avoiding the gradient vanishing problem. Generally, the proposed network shows improvements on both the temporal (using bidirectional cells) and the spatial (residual connections stacked deeply) dimensions, aiming to enhance the recognition rate. When tested with the Opportunity data set and the public domain UCI data set, the accuracy was increased by 4.78% and 3.68%, respectively, compared with previously reported results. Finally, the confusion matrix of the public domain UCI data set was analyzed.
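A minimal PyTorch sketch of the architecture the abstract describes: stacked bidirectional LSTM layers with residual connections acting as gradient highways. Layer sizes and the exact residual placement are illustrative choices, not the paper's exact network.

```python
import torch
import torch.nn as nn

class ResBiLSTM(nn.Module):
    """Stacked bidirectional LSTM with residual connections between layers."""
    def __init__(self, n_features, hidden, n_layers, n_classes):
        super().__init__()
        self.inp = nn.Linear(n_features, 2 * hidden)   # match bi-LSTM width
        self.layers = nn.ModuleList(
            [nn.LSTM(2 * hidden, hidden, bidirectional=True, batch_first=True)
             for _ in range(n_layers)])
        self.out = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):              # x: (batch, time, features)
        h = self.inp(x)
        for lstm in self.layers:
            y, _ = lstm(h)
            h = h + y                  # residual "highway" for gradients
        return self.out(h[:, -1])      # classify from the last time step

model = ResBiLSTM(n_features=9, hidden=32, n_layers=2, n_classes=6)
logits = model(torch.randn(8, 128, 9))   # e.g., 128 sensor readings per window
print(logits.shape)                      # torch.Size([8, 6])
```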
2407.14486
Eduardo C. Garrido-Merch\'an
Alejandra de la Rica Escudero, Eduardo C. Garrido-Merchan, Maria Coronado-Vaca
Explainable Post hoc Portfolio Management Financial Policy of a Deep Reinforcement Learning agent
null
null
null
null
cs.CE cs.AI q-fin.PM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Financial portfolio management investment policies computed quantitatively by modern portfolio theory techniques like the Markowitz model rely on a set of assumptions that are not supported by data in high-volatility markets. Hence, quantitative researchers are looking for alternative models to tackle this problem. Concretely, portfolio management is a problem that has recently been successfully addressed by Deep Reinforcement Learning (DRL) approaches. In particular, DRL algorithms train an agent by estimating the distribution of the expected reward of every action performed by the agent given any financial state in a simulator. However, these methods rely on deep neural network models to represent such a distribution; although they are universal approximators, their behaviour cannot be explained, as it is given by a set of parameters that are not interpretable. Critically, financial investors' policies require predictions to be interpretable, so DRL agents are not suited to follow a particular policy or explain their actions. In this work, we developed a novel Explainable Deep Reinforcement Learning (XDRL) approach for portfolio management, integrating Proximal Policy Optimization (PPO) with the model-agnostic explainability techniques of feature importance, SHAP and LIME to enhance transparency at prediction time. By executing our methodology, we can interpret at prediction time the actions of the agent to assess whether they follow the requisites of an investment policy, or to assess the risk of following the agent's suggestions. To the best of our knowledge, our proposed approach is the first explainable post hoc portfolio management financial policy of a DRL agent. We empirically illustrate our methodology by successfully identifying key features influencing investment decisions, demonstrating the ability to explain the agent's actions at prediction time.
[ { "created": "Fri, 19 Jul 2024 17:40:39 GMT", "version": "v1" } ]
2024-07-22
[ [ "Escudero", "Alejandra de la Rica", "" ], [ "Garrido-Merchan", "Eduardo C.", "" ], [ "Coronado-Vaca", "Maria", "" ] ]
Financial portfolio management investment policies computed quantitatively by modern portfolio theory techniques like the Markowitz model rely on a set of assumptions that are not supported by data in high-volatility markets. Hence, quantitative researchers are looking for alternative models to tackle this problem. Concretely, portfolio management is a problem that has recently been successfully addressed by Deep Reinforcement Learning (DRL) approaches. In particular, DRL algorithms train an agent by estimating the distribution of the expected reward of every action performed by the agent given any financial state in a simulator. However, these methods rely on deep neural network models to represent such a distribution; although they are universal approximators, their behaviour cannot be explained, as it is given by a set of parameters that are not interpretable. Critically, financial investors' policies require predictions to be interpretable, so DRL agents are not suited to follow a particular policy or explain their actions. In this work, we developed a novel Explainable Deep Reinforcement Learning (XDRL) approach for portfolio management, integrating Proximal Policy Optimization (PPO) with the model-agnostic explainability techniques of feature importance, SHAP and LIME to enhance transparency at prediction time. By executing our methodology, we can interpret at prediction time the actions of the agent to assess whether they follow the requisites of an investment policy, or to assess the risk of following the agent's suggestions. To the best of our knowledge, our proposed approach is the first explainable post hoc portfolio management financial policy of a DRL agent. We empirically illustrate our methodology by successfully identifying key features influencing investment decisions, demonstrating the ability to explain the agent's actions at prediction time.
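The paper's explainability layer is SHAP and LIME over a trained PPO actor; a minimal sketch of the same model-agnostic idea, using simple permutation importance over a stand-in (untrained) policy, follows. The feature names and the policy itself are hypothetical.

```python
import numpy as np

def policy(states):
    # Stand-in for a trained PPO actor: maps market-state features to
    # a portfolio weight for one asset (hypothetical, untrained).
    w = np.array([0.8, -0.3, 0.1, 0.5])
    return np.tanh(states @ w)

def permutation_importance(f, X, n_repeats=20, seed=0):
    # Model-agnostic: how much does shuffling a feature move the policy output?
    rng = np.random.default_rng(seed)
    base = f(X)
    imp = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])
            imp[j] += np.mean(np.abs(f(Xp) - base))
    return imp / n_repeats

# Hypothetical state features: returns, volatility, momentum, volume.
X = np.random.default_rng(8).normal(size=(500, 4))
print(permutation_importance(policy, X))
```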
1406.7264
Gokhan Calis
Gokhan Calis and O. Ozan Koyluoglu
Repairable Block Failure Resilient Codes
null
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In large scale distributed storage systems (DSS) deployed in cloud computing, correlated failures resulting in simultaneous failure (or, unavailability) of blocks of nodes are common. In such scenarios, the stored data or a content of a failed node can only be reconstructed from the available live nodes belonging to available blocks. To analyze the resilience of the system against such block failures, this work introduces the framework of Block Failure Resilient (BFR) codes, wherein the data (e.g., file in DSS) can be decoded by reading out from a same number of codeword symbols (nodes) from each available blocks of the underlying codeword. Further, repairable BFR codes are introduced, wherein any codeword symbol in a failed block can be repaired by contacting to remaining blocks in the system. Motivated from regenerating codes, file size bounds for repairable BFR codes are derived, trade-off between per node storage and repair bandwidth is analyzed, and BFR-MSR and BFR-MBR points are derived. Explicit codes achieving these two operating points for a wide set of parameters are constructed by utilizing combinatorial designs, wherein the codewords of the underlying outer codes are distributed to BFR codeword symbols according to projective planes.
[ { "created": "Fri, 27 Jun 2014 18:30:47 GMT", "version": "v1" } ]
2014-06-30
[ [ "Calis", "Gokhan", "" ], [ "Koyluoglu", "O. Ozan", "" ] ]
In large scale distributed storage systems (DSS) deployed in cloud computing, correlated failures resulting in simultaneous failure (or, unavailability) of blocks of nodes are common. In such scenarios, the stored data or a content of a failed node can only be reconstructed from the available live nodes belonging to available blocks. To analyze the resilience of the system against such block failures, this work introduces the framework of Block Failure Resilient (BFR) codes, wherein the data (e.g., file in DSS) can be decoded by reading out from a same number of codeword symbols (nodes) from each available blocks of the underlying codeword. Further, repairable BFR codes are introduced, wherein any codeword symbol in a failed block can be repaired by contacting to remaining blocks in the system. Motivated from regenerating codes, file size bounds for repairable BFR codes are derived, trade-off between per node storage and repair bandwidth is analyzed, and BFR-MSR and BFR-MBR points are derived. Explicit codes achieving these two operating points for a wide set of parameters are constructed by utilizing combinatorial designs, wherein the codewords of the underlying outer codes are distributed to BFR codeword symbols according to projective planes.
1501.05990
Paulo Shakarian
Jana Shakarian, Paulo Shakarian, Andrew Ruef
Cyber Attacks and Public Embarrassment: A Survey of Some Notable Hacks
null
null
null
null
cs.CY cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We hear it all too often in the media: an organization is attacked, its data, often containing personally identifying information, is made public, and a hacking group emerges to claim credit. In this excerpt, we discuss how such groups operate and describe the details of a few major cyber-attacks of this sort in the wider context of how they occurred. We feel that understanding how such groups have operated in the past will give organizations ideas of how to defend against them in the future.
[ { "created": "Sat, 24 Jan 2015 02:35:04 GMT", "version": "v1" } ]
2015-01-27
[ [ "Shakarian", "Jana", "" ], [ "Shakarian", "Paulo", "" ], [ "Ruef", "Andrew", "" ] ]
We hear it all too often in the media: an organization is attacked, its data, often containing personally identifying information, is made public, and a hacking group emerges to claim credit. In this excerpt, we discuss how such groups operate and describe the details of a few major cyber-attacks of this sort in the wider context of how they occurred. We feel that understanding how such groups have operated in the past will give organizations ideas of how to defend against them in the future.
1808.00560
Kai Chen
Kai Chen, Yijue Dai, Feng Yin, Elena Marchiori, and Sergios Theodoridis
Compressible Spectral Mixture Kernels with Sparse Dependency Structures for Gaussian Processes
13 pages
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Spectral mixture (SM) kernels comprise a powerful class of generalized kernels for Gaussian processes (GPs) to describe complex patterns. This paper introduces model compression and time- and phase (TP) modulated dependency structures to the original SM kernel for improved generalization of GPs. Specifically, by adopting Bienaym\'e's identity, we generalize the dependency structure through cross-covariance between the SM components. Then, we propose a novel SM kernel with a dependency structure (SMD) by using cross-convolution between the SM components. Furthermore, we ameliorate the expressiveness of the dependency structure by parameterizing it with time and phase delays. The dependency structure has clear interpretations in terms of spectral density, covariance behavior, and sampling path. Finally, to enrich the SMD with effective hyperparameter initialization, compressible SM kernel components, and sparse dependency structures, we introduce a novel structure adaptation (SA) algorithm. A thorough comparative analysis of the SMD on both synthetic and real-life applications corroborates its efficacy.
[ { "created": "Wed, 1 Aug 2018 20:55:54 GMT", "version": "v1" }, { "created": "Sun, 9 Sep 2018 11:50:23 GMT", "version": "v2" }, { "created": "Thu, 13 Sep 2018 21:37:31 GMT", "version": "v3" }, { "created": "Tue, 18 Sep 2018 09:05:19 GMT", "version": "v4" }, { "created": "Sun, 14 Oct 2018 20:26:09 GMT", "version": "v5" }, { "created": "Fri, 16 Aug 2019 19:18:41 GMT", "version": "v6" }, { "created": "Tue, 10 Aug 2021 02:09:14 GMT", "version": "v7" }, { "created": "Tue, 31 Aug 2021 12:23:05 GMT", "version": "v8" }, { "created": "Wed, 26 Jul 2023 04:30:49 GMT", "version": "v9" } ]
2023-07-27
[ [ "Chen", "Kai", "" ], [ "Dai", "Yijue", "" ], [ "Yin", "Feng", "" ], [ "Marchiori", "Elena", "" ], [ "Theodoridis", "Sergios", "" ] ]
Spectral mixture (SM) kernels comprise a powerful class of generalized kernels for Gaussian processes (GPs) to describe complex patterns. This paper introduces model compression and time- and phase (TP) modulated dependency structures to the original SM kernel for improved generalization of GPs. Specifically, by adopting Bienaym\'e's identity, we generalize the dependency structure through cross-covariance between the SM components. Then, we propose a novel SM kernel with a dependency structure (SMD) by using cross-convolution between the SM components. Furthermore, we ameliorate the expressiveness of the dependency structure by parameterizing it with time and phase delays. The dependency structure has clear interpretations in terms of spectral density, covariance behavior, and sampling path. Finally, to enrich the SMD with effective hyperparameter initialization, compressible SM kernel components, and sparse dependency structures, we introduce a novel structure adaptation (SA) algorithm. A thorough comparative analysis of the SMD on both synthetic and real-life applications corroborates its efficacy.
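For reference, a minimal sketch of the standard single-output spectral mixture kernel (Wilson and Adams) that the SMD kernel extends; the cross-convolution dependency terms the paper adds between components are not shown.

```python
import numpy as np

def sm_kernel(tau, weights, means, scales):
    """Spectral mixture kernel:
    k(tau) = sum_q w_q * exp(-2 pi^2 tau^2 s_q^2) * cos(2 pi mu_q tau),
    where mu_q and s_q are the mean and scale of the q-th spectral Gaussian.
    """
    tau = np.asarray(tau)[..., None]   # broadcast over the Q components
    comps = (weights
             * np.exp(-2 * np.pi ** 2 * tau ** 2 * scales ** 2)
             * np.cos(2 * np.pi * means * tau))
    return comps.sum(-1)

taus = np.linspace(0, 5, 200)
k = sm_kernel(taus, weights=np.array([1.0, 0.5]),
              means=np.array([0.5, 1.5]), scales=np.array([0.2, 0.1]))
print(k[:5])  # covariance as a function of input distance tau
```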
1905.10028
Simone Brugiapaglia
Ben Adcock, Simone Brugiapaglia, Matthew King-Roskamp
Do log factors matter? On optimal wavelet approximation and the foundations of compressed sensing
null
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A signature result in compressed sensing is that Gaussian random sampling achieves stable and robust recovery of sparse vectors under optimal conditions on the number of measurements. However, in the context of image reconstruction, it has been extensively documented that sampling strategies based on Fourier measurements outperform this purportedly optimal approach. Motivated by this seeming paradox, we investigate the problem of optimal sampling for compressed sensing. Rigorously combining the theories of wavelet approximation and infinite-dimensional compressed sensing, our analysis leads to new error bounds in terms of the total number of measurements $m$ for the approximation of piecewise $\alpha$-H\"{o}lder functions. Our theoretical findings suggest that Fourier sampling outperforms random Gaussian sampling when the H\"older exponent $\alpha$ is large enough. Moreover, we establish a provably optimal sampling strategy. This work is an important first step towards the resolution of the claimed paradox, and provides a clear theoretical justification for the practical success of compressed sensing techniques in imaging problems.
[ { "created": "Fri, 24 May 2019 04:38:13 GMT", "version": "v1" }, { "created": "Thu, 3 Sep 2020 20:29:16 GMT", "version": "v2" }, { "created": "Mon, 25 Jan 2021 18:48:45 GMT", "version": "v3" } ]
2021-01-26
[ [ "Adcock", "Ben", "" ], [ "Brugiapaglia", "Simone", "" ], [ "King-Roskamp", "Matthew", "" ] ]
A signature result in compressed sensing is that Gaussian random sampling achieves stable and robust recovery of sparse vectors under optimal conditions on the number of measurements. However, in the context of image reconstruction, it has been extensively documented that sampling strategies based on Fourier measurements outperform this purportedly optimal approach. Motivated by this seeming paradox, we investigate the problem of optimal sampling for compressed sensing. Rigorously combining the theories of wavelet approximation and infinite-dimensional compressed sensing, our analysis leads to new error bounds in terms of the total number of measurements $m$ for the approximation of piecewise $\alpha$-H\"{o}lder functions. Our theoretical findings suggest that Fourier sampling outperforms random Gaussian sampling when the H\"older exponent $\alpha$ is large enough. Moreover, we establish a provably optimal sampling strategy. This work is an important first step towards the resolution of the claimed paradox, and provides a clear theoretical justification for the practical success of compressed sensing techniques in imaging problems.
2105.12092
Anselmo Pitombeira-Neto
Anselmo R. Pitombeira-Neto, Helano P. Santos, Ticiana L. Coelho da Silva, Jos\'e Antonio F. de Macedo
Trajectory Modeling via Random Utility Inverse Reinforcement Learning
31 pages; expanded version, with the addition of proofs not present in the first version
null
10.1016/j.ins.2024.120128
null
cs.AI cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider the problem of modeling trajectories of drivers in a road network from the perspective of inverse reinforcement learning. Cars are detected by sensors placed on sparsely distributed points on the street network of a city. As rational agents, drivers are trying to maximize some reward function unknown to an external observer. We apply the concept of random utility from econometrics to model the unknown reward function as a function of observed and unobserved features. In contrast to current inverse reinforcement learning approaches, we do not assume that agents act according to a stochastic policy; rather, we assume that agents act according to a deterministic optimal policy and show that randomness in data arises because the exact rewards are not fully observed by an external observer. We introduce the concept of extended state to cope with unobserved features and develop a Markov decision process formulation of drivers' decisions. We present theoretical results which guarantee the existence of solutions and show that maximum entropy inverse reinforcement learning is a particular case of our approach. Finally, we illustrate Bayesian inference on model parameters through a case study with real trajectory data from a large city in Brazil.
[ { "created": "Tue, 25 May 2021 17:19:09 GMT", "version": "v1" }, { "created": "Wed, 11 Jan 2023 02:54:30 GMT", "version": "v2" } ]
2024-01-22
[ [ "Pitombeira-Neto", "Anselmo R.", "" ], [ "Santos", "Helano P.", "" ], [ "da Silva", "Ticiana L. Coelho", "" ], [ "de Macedo", "José Antonio F.", "" ] ]
We consider the problem of modeling trajectories of drivers in a road network from the perspective of inverse reinforcement learning. Cars are detected by sensors placed on sparsely distributed points on the street network of a city. As rational agents, drivers are trying to maximize some reward function unknown to an external observer. We apply the concept of random utility from econometrics to model the unknown reward function as a function of observed and unobserved features. In contrast to current inverse reinforcement learning approaches, we do not assume that agents act according to a stochastic policy; rather, we assume that agents act according to a deterministic optimal policy and show that randomness in data arises because the exact rewards are not fully observed by an external observer. We introduce the concept of extended state to cope with unobserved features and develop a Markov decision process formulation of drivers' decisions. We present theoretical results which guarantee the existence of solutions and show that maximum entropy inverse reinforcement learning is a particular case of our approach. Finally, we illustrate Bayesian inference on model parameters through a case study with real trajectory data from a large city in Brazil.
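The link between deterministic optimal behaviour and observed randomness admits a compact numerical illustration. A minimal sketch, assuming the unobserved utility component is i.i.d. Gumbel (a standard random-utility assumption, which recovers the maximum-entropy/logit case the abstract mentions); the feature utilities below are made up:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 4 candidate next-link choices at an intersection,
# each with an observed-feature utility v[a]; the unobserved component
# eps[a] is known only to the driver.
v = np.array([1.0, 0.5, 0.2, -0.3])

# The driver acts deterministically: argmax of v + eps. If eps is i.i.d.
# Gumbel, the induced choice frequencies equal softmax(v) -- the
# maximum-entropy / logit model emerges from deterministic behaviour
# plus unobserved utility.
n = 200_000
eps = rng.gumbel(size=(n, 4))
choices = np.argmax(v + eps, axis=1)
empirical = np.bincount(choices, minlength=4) / n

softmax = np.exp(v) / np.exp(v).sum()
print(np.round(empirical, 3), np.round(softmax, 3))  # nearly identical
```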
2011.12073
Michael Lepori Jr.
Michael A. Lepori, R. Thomas McCoy
Picking BERT's Brain: Probing for Linguistic Dependencies in Contextualized Embeddings Using Representational Similarity Analysis
null
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
As the name implies, contextualized representations of language are typically motivated by their ability to encode context. Which aspects of context are captured by such representations? We introduce an approach to address this question using Representational Similarity Analysis (RSA). As case studies, we investigate the degree to which a verb embedding encodes the verb's subject, a pronoun embedding encodes the pronoun's antecedent, and a full-sentence representation encodes the sentence's head word (as determined by a dependency parse). In all cases, we show that BERT's contextualized embeddings reflect the linguistic dependency being studied, and that BERT encodes these dependencies to a greater degree than it encodes less linguistically-salient controls. These results demonstrate the ability of our approach to adjudicate between hypotheses about which aspects of context are encoded in representations of language.
[ { "created": "Tue, 24 Nov 2020 13:19:06 GMT", "version": "v1" } ]
2020-11-25
[ [ "Lepori", "Michael A.", "" ], [ "McCoy", "R. Thomas", "" ] ]
As the name implies, contextualized representations of language are typically motivated by their ability to encode context. Which aspects of context are captured by such representations? We introduce an approach to address this question using Representational Similarity Analysis (RSA). As case studies, we investigate the degree to which a verb embedding encodes the verb's subject, a pronoun embedding encodes the pronoun's antecedent, and a full-sentence representation encodes the sentence's head word (as determined by a dependency parse). In all cases, we show that BERT's contextualized embeddings reflect the linguistic dependency being studied, and that BERT encodes these dependencies to a greater degree than it encodes less linguistically-salient controls. These results demonstrate the ability of our approach to adjudicate between hypotheses about which aspects of context are encoded in representations of language.
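The RSA computation itself is short. A minimal sketch with toy vectors standing in for BERT embeddings and hypothesis codes; the dimensions and the scaling constant are illustrative, not the paper's setup:

```python
import numpy as np
from scipy.stats import spearmanr

def rsa_score(reps_a, reps_b):
    """Representational Similarity Analysis between two sets of
    representations of the same n items (e.g., contextual verb
    embeddings vs. codes of each verb's subject)."""
    def sim_matrix(x):
        x = x / np.linalg.norm(x, axis=1, keepdims=True)
        return x @ x.T                          # pairwise cosine similarity
    iu = np.triu_indices(len(reps_a), k=1)      # upper triangle, no diagonal
    return spearmanr(sim_matrix(reps_a)[iu], sim_matrix(reps_b)[iu]).correlation

# Toy data: 20 items, a 5-dim "hypothesis" code, and 768-dim "embeddings"
# built so that they strongly encode the hypothesis.
rng = np.random.default_rng(1)
h = rng.normal(size=(20, 5))
a = np.hstack([10 * h, rng.normal(size=(20, 763))])
print(rsa_score(a, h))   # clearly above 0 when the hypothesis is encoded
```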
1910.07972
Max Argus
Lukas Hermann, Max Argus, Andreas Eitel, Artemij Amiranashvili, Wolfram Burgard, Thomas Brox
Adaptive Curriculum Generation from Demonstrations for Sim-to-Real Visuomotor Control
Accepted at the 2020 IEEE International Conference on Robotics and Automation (ICRA). Project page see https://lmb.informatik.uni-freiburg.de/projects/curriculum/
null
null
null
cs.RO cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose Adaptive Curriculum Generation from Demonstrations (ACGD) for reinforcement learning in the presence of sparse rewards. Rather than designing shaped reward functions, ACGD adaptively sets the appropriate task difficulty for the learner by controlling where to sample from the demonstration trajectories and which set of simulation parameters to use. We show that training vision-based control policies in simulation while gradually increasing the difficulty of the task via ACGD improves the policy transfer to the real world. The degree of domain randomization is also gradually increased through the task difficulty. We demonstrate zero-shot transfer for two real-world manipulation tasks: pick-and-stow and block stacking. A video showing the results can be found at https://lmb.informatik.uni-freiburg.de/projects/curriculum/
[ { "created": "Thu, 17 Oct 2019 15:33:03 GMT", "version": "v1" }, { "created": "Thu, 31 Oct 2019 10:49:36 GMT", "version": "v2" }, { "created": "Wed, 8 Jul 2020 15:44:10 GMT", "version": "v3" } ]
2020-07-09
[ [ "Hermann", "Lukas", "" ], [ "Argus", "Max", "" ], [ "Eitel", "Andreas", "" ], [ "Amiranashvili", "Artemij", "" ], [ "Burgard", "Wolfram", "" ], [ "Brox", "Thomas", "" ] ]
We propose Adaptive Curriculum Generation from Demonstrations (ACGD) for reinforcement learning in the presence of sparse rewards. Rather than designing shaped reward functions, ACGD adaptively sets the appropriate task difficulty for the learner by controlling where to sample from the demonstration trajectories and which set of simulation parameters to use. We show that training vision-based control policies in simulation while gradually increasing the difficulty of the task via ACGD improves the policy transfer to the real world. The degree of domain randomization is also gradually increased through the task difficulty. We demonstrate zero-shot transfer for two real-world manipulation tasks: pick-and-stow and block stacking. A video showing the results can be found at https://lmb.informatik.uni-freiburg.de/projects/curriculum/
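The adaptive mechanism can be sketched as a small controller that couples the reset point along a demonstration to the recent success rate. The thresholds and update rule below are illustrative assumptions, not the paper's values:

```python
import random

class AdaptiveCurriculum:
    """Toy controller in the spirit of ACGD: it chooses how far from the
    goal to reset the agent along a demonstration, and how much domain
    randomization to apply, based on the recent success rate."""
    def __init__(self, demo_len, target=0.7, step=0.05):
        self.difficulty = 0.1          # 0 = start near goal, 1 = start of demo
        self.demo_len, self.target, self.step = demo_len, target, step
        self.results = []

    def sample_start_index(self):
        # Sample a reset point within the current difficulty window.
        window = max(1, int(self.difficulty * self.demo_len))
        return self.demo_len - random.randint(1, window)

    def randomization_strength(self):
        return self.difficulty         # couple randomization to task difficulty

    def report(self, success):
        self.results.append(success)
        if len(self.results) >= 20:    # adapt every 20 episodes
            rate = sum(self.results) / len(self.results)
            if rate > self.target:
                self.difficulty = min(1.0, self.difficulty + self.step)
            else:
                self.difficulty = max(0.05, self.difficulty - self.step)
            self.results.clear()

curric = AdaptiveCurriculum(demo_len=100)
print(curric.sample_start_index(), curric.randomization_strength())
```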
2311.05647
Marc Wolf
Marc Wolf and Fran\c{c}ois Wolf
On the density of primes of the form $X^2+c$
25 pages
null
10.14738/tecs.116.15890
null
cs.DS
http://creativecommons.org/licenses/by-nc-sa/4.0/
We present a method for finding large fixed-size primes of the form $X^2+c$. We study the density of primes on the sets $E_c = \{N(X,c)=X^2+c,\ X \in (2\mathbb{Z}+(c-1))\}$, $c \in \mathbb{N}^*$. We describe an algorithm for generating values of $c$ such that a given prime $p$ is the minimum of the union of prime divisors of all elements in $E_c$. We also present quadratic forms generating divisors of $E_c$ and study the prime divisors of their terms. This paper uses the results of Dirichlet's arithmetic progression theorem [1] and the article [6] to rewrite a conjecture of Shanks [2] on the density of primes in $E_c$. Finally, based on these results, we discuss the heuristics of large-prime occurrences in the search set of our algorithm.
[ { "created": "Tue, 7 Nov 2023 10:35:00 GMT", "version": "v1" } ]
2023-12-20
[ [ "Wolf", "Marc", "" ], [ "Wolf", "François", "" ] ]
We present a method for finding large fixed-size primes of the form $X^2+c$. We study the density of primes on the sets $E_c = \{N(X,c)=X^2+c,\ X \in (2\mathbb{Z}+(c-1))\}$, $c \in \mathbb{N}^*$. We describe an algorithm for generating values of $c$ such that a given prime $p$ is the minimum of the union of prime divisors of all elements in $E_c$. We also present quadratic forms generating divisors of $E_c$ and study the prime divisors of their terms. This paper uses the results of Dirichlet's arithmetic progression theorem [1] and the article [6] to rewrite a conjecture of Shanks [2] on the density of primes in $E_c$. Finally, based on these results, we discuss the heuristics of large-prime occurrences in the search set of our algorithm.
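The minimum prime divisor over $E_c$ is computable with quadratic residues: elements of $E_c$ are odd by the parity constraint on $X$, and an odd prime $p$ divides $X^2+c$ for some admissible $X$ iff $p \mid c$ or $-c$ is a quadratic residue mod $p$ (both parities of $X$ occur among the solution classes mod $2p$). A minimal sketch under these facts; the search bounds are arbitrary and this is not the paper's generation algorithm:

```python
from sympy import isprime, legendre_symbol

def min_prime_divisor(c, bound=200):
    """Smallest prime dividing some element of
    E_c = {X^2 + c : X = c - 1 (mod 2)}. Elements are odd by
    construction, so only odd primes need checking."""
    p = 3
    while p < bound:
        if isprime(p) and (c % p == 0 or legendre_symbol(-c % p, p) == 1):
            return p
        p += 2
    return None   # minimum prime divisor exceeds the bound

# Values of c whose minimum prime divisor is large are heuristically
# richer in primes, in line with Shanks-type density heuristics.
best = max(range(1, 2000), key=lambda c: min_prime_divisor(c) or 0)
print(best, min_prime_divisor(best))
```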
2206.04740
Chhavi Yadav
Chhavi Yadav, Michal Moshkovitz, Kamalika Chaudhuri
XAudit : A Theoretical Look at Auditing with Explanations
null
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
Responsible use of machine learning requires models to be audited for undesirable properties. While a body of work has proposed using explanations for auditing, how to do so and why has remained relatively ill-understood. This work formalizes the role of explanations in auditing and investigates if and how model explanations can help audits. Specifically, we propose explanation-based algorithms for auditing linear classifiers and decision trees for feature sensitivity. Our results illustrate that Counterfactual explanations are extremely helpful for auditing. While Anchors and decision paths may not be as beneficial in the worst case, they help considerably in the average case.
[ { "created": "Thu, 9 Jun 2022 19:19:58 GMT", "version": "v1" }, { "created": "Wed, 2 Nov 2022 22:03:00 GMT", "version": "v2" }, { "created": "Mon, 5 Jun 2023 15:38:01 GMT", "version": "v3" } ]
2023-06-06
[ [ "Yadav", "Chhavi", "" ], [ "Moshkovitz", "Michal", "" ], [ "Chaudhuri", "Kamalika", "" ] ]
Responsible use of machine learning requires models to be audited for undesirable properties. While a body of work has proposed using explanations for auditing, how to do so and why has remained relatively ill-understood. This work formalizes the role of explanations in auditing and investigates if and how model explanations can help audits. Specifically, we propose explanation-based algorithms for auditing linear classifiers and decision trees for feature sensitivity. Our results illustrate that Counterfactual explanations are extremely helpful for auditing. While Anchors and decision paths may not be as beneficial in the worst case, they help considerably in the average case.
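For a linear classifier, the usefulness of counterfactuals for a feature-sensitivity audit is easy to demonstrate: a counterfactual that flips the label by moving only coordinate $i$ certifies sensitivity to feature $i$. A minimal sketch (not the paper's algorithms or guarantees); the overshoot factor is an arbitrary choice:

```python
import numpy as np

def audit_feature_via_counterfactual(w, b, x, i):
    """Is f(x) = sign(w.x + b) sensitive to feature i? Try a
    counterfactual that changes only coordinate i."""
    if w[i] == 0:
        return False                    # no single-feature flip possible
    margin = w @ x + b
    delta = -(margin / w[i]) * 1.01     # overshoot the decision boundary
    x_cf = x.copy()
    x_cf[i] += delta                    # candidate counterfactual
    return np.sign(w @ x_cf + b) != np.sign(margin)

w, b = np.array([0.0, 2.0, -1.0]), 0.5
x = np.array([1.0, 1.0, 1.0])
print([audit_feature_via_counterfactual(w, b, x, i) for i in range(3)])
# -> [False, True, True]: the audit flags exactly the nonzero weights
```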
1606.02055
St\'ephane Lens
St\'ephane Lens, Bernard Boigelot
From Constrained Delaunay Triangulations to Roadmap Graphs with Arbitrary Clearance
null
null
null
null
cs.CG cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This work studies path planning in two-dimensional space, in the presence of polygonal obstacles. We specifically address the problem of building a roadmap graph, that is, an abstract representation of all the paths that can potentially be followed around a given set of obstacles. Our solution consists of an original refinement algorithm for constrained Delaunay triangulations, aimed at generating a roadmap graph suited for planning paths with arbitrary clearance. In other words, a minimum distance to the obstacles can be specified, and the graph does not have to be recomputed if this distance is modified. Compared to other solutions, our approach has the advantage of being simpler, as well as significantly more efficient.
[ { "created": "Tue, 7 Jun 2016 08:04:43 GMT", "version": "v1" } ]
2016-06-08
[ [ "Lens", "Stéphane", "" ], [ "Boigelot", "Bernard", "" ] ]
This work studies path planning in two-dimensional space, in the presence of polygonal obstacles. We specifically address the problem of building a roadmap graph, that is, an abstract representation of all the paths that can potentially be followed around a given set of obstacles. Our solution consists of an original refinement algorithm for constrained Delaunay triangulations, aimed at generating a roadmap graph suited for planning paths with arbitrary clearance. In other words, a minimum distance to the obstacles can be specified, and the graph does not have to be recomputed if this distance is modified. Compared to other solutions, our approach has the advantage of being simpler, as well as significantly more efficient.
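The basic triangulation-to-roadmap step can be sketched as follows: connect centroids of adjacent triangles whenever the shared edge is wide enough for a disc of the requested clearance. Two caveats: scipy's Delaunay is unconstrained (a true constrained Delaunay triangulation needs a dedicated library such as the `triangle` package), and the edge-length test is a simplification of a proper clearance criterion:

```python
import numpy as np
from scipy.spatial import Delaunay

def roadmap_from_triangulation(points, clearance):
    """Connect centroids of adjacent triangles when the shared edge
    admits a disc of radius `clearance`. Illustrative only."""
    tri = Delaunay(points)
    centroids = points[tri.simplices].mean(axis=1)
    edges = []
    for t, nbrs in enumerate(tri.neighbors):
        for side, n in enumerate(nbrs):
            if n > t:   # each adjacent pair once; skips boundary (-1)
                # The shared edge is opposite vertex `side` of simplex t.
                shared = [v for k, v in enumerate(tri.simplices[t]) if k != side]
                gap = np.linalg.norm(points[shared[0]] - points[shared[1]])
                if gap >= 2 * clearance:   # disc of radius `clearance` fits
                    edges.append((t, n))
    return centroids, edges

pts = np.random.default_rng(2).uniform(size=(30, 2))
centroids, edges = roadmap_from_triangulation(pts, clearance=0.03)
print(len(edges), "roadmap edges")
```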
2403.04905
Mark de Berg
Boris Aronov, Mark de Berg, Leonidas Theocharous
A Clique-Based Separator for Intersection Graphs of Geodesic Disks in $\mathbb{R}^2$
The paper will appear in SoCG 2024
null
null
null
cs.CG
http://creativecommons.org/licenses/by/4.0/
Let $d$ be a (well-behaved) shortest-path metric defined on a path-connected subset of $\mathbb{R}^2$ and let $\mathcal{D}=\{D_1,\ldots,D_n\}$ be a set of geodesic disks with respect to the metric $d$. We prove that $\mathcal{G}^{\times}(\mathcal{D})$, the intersection graph of the disks in $\mathcal{D}$, has a clique-based separator consisting of $O(n^{3/4+\varepsilon})$ cliques. This significantly extends the class of objects whose intersection graphs have small clique-based separators. Our clique-based separator yields an algorithm for $q$-COLORING that runs in time $2^{O(n^{3/4+\varepsilon})}$, assuming the boundaries of the disks $D_i$ can be computed in polynomial time. We also use our clique-based separator to obtain a simple, efficient, and almost exact distance oracle for intersection graphs of geodesic disks. Our distance oracle uses $O(n^{7/4+\varepsilon})$ storage and can report the hop distance between any two nodes in $\mathcal{G}^{\times}(\mathcal{D})$ in $O(n^{3/4+\varepsilon})$ time, up to an additive error of one. So far, distance oracles with an additive error of one that use subquadratic storage and sublinear query time were not known for such general graph classes.
[ { "created": "Thu, 7 Mar 2024 21:23:52 GMT", "version": "v1" } ]
2024-03-11
[ [ "Aronov", "Boris", "" ], [ "de Berg", "Mark", "" ], [ "Theocharous", "Leonidas", "" ] ]
Let $d$ be a (well-behaved) shortest-path metric defined on a path-connected subset of $\mathbb{R}^2$ and let $\mathcal{D}=\{D_1,\ldots,D_n\}$ be a set of geodesic disks with respect to the metric $d$. We prove that $\mathcal{G}^{\times}(\mathcal{D})$, the intersection graph of the disks in $\mathcal{D}$, has a clique-based separator consisting of $O(n^{3/4+\varepsilon})$ cliques. This significantly extends the class of objects whose intersection graphs have small clique-based separators. Our clique-based separator yields an algorithm for $q$-COLORING that runs in time $2^{O(n^{3/4+\varepsilon})}$, assuming the boundaries of the disks $D_i$ can be computed in polynomial time. We also use our clique-based separator to obtain a simple, efficient, and almost exact distance oracle for intersection graphs of geodesic disks. Our distance oracle uses $O(n^{7/4+\varepsilon})$ storage and can report the hop distance between any two nodes in $\mathcal{G}^{\times}(\mathcal{D})$ in $O(n^{3/4+\varepsilon})$ time, up to an additive error of one. So far, distance oracles with an additive error of one that use subquadratic storage and sublinear query time were not known for such general graph classes.
2307.01346
Tobias Goodwin-Allcock
Tobias Goodwin-Allcock, Ting Gong, Robert Gray, Parashkev Nachev and Hui Zhang
Patch-CNN: Training data-efficient deep learning for high-fidelity diffusion tensor estimation from minimal diffusion protocols
12 pages, 6 figures
null
null
null
cs.CV cs.LG eess.IV
http://creativecommons.org/licenses/by-nc-sa/4.0/
We propose a new method, Patch-CNN, for diffusion tensor (DT) estimation from only six-direction diffusion weighted images (DWI). Deep learning-based methods have been recently proposed for dMRI parameter estimation, using either voxel-wise fully-connected neural networks (FCN) or image-wise convolutional neural networks (CNN). In the acute clinical context -- where pressure of time limits the number of imaged directions to a minimum -- existing approaches either require an infeasible number of training image volumes (image-wise CNNs), or do not estimate the fibre orientations (voxel-wise FCNs) required for tractogram estimation. To overcome these limitations, we propose Patch-CNN, a neural network with a minimal (non-voxel-wise) convolutional kernel (3$\times$3$\times$3). Compared with voxel-wise FCNs, this has the advantage of allowing the network to leverage local anatomical information. Compared with image-wise CNNs, the minimal kernel vastly reduces training data demand. Evaluated against both conventional model fitting and a voxel-wise FCN, Patch-CNN, trained on a single subject, is shown to improve the estimation of both scalar dMRI parameters and fibre orientation from six-direction DWIs. The improved fibre orientation estimation is shown to produce improved tractograms.
[ { "created": "Mon, 3 Jul 2023 20:39:48 GMT", "version": "v1" } ]
2023-07-06
[ [ "Goodwin-Allcock", "Tobias", "" ], [ "Gong", "Ting", "" ], [ "Gray", "Robert", "" ], [ "Nachev", "Parashkev", "" ], [ "Zhang", "Hui", "" ] ]
We propose a new method, Patch-CNN, for diffusion tensor (DT) estimation from only six-direction diffusion weighted images (DWI). Deep learning-based methods have been recently proposed for dMRI parameter estimation, using either voxel-wise fully-connected neural networks (FCN) or image-wise convolutional neural networks (CNN). In the acute clinical context -- where pressure of time limits the number of imaged directions to a minimum -- existing approaches either require an infeasible number of training image volumes (image-wise CNNs), or do not estimate the fibre orientations (voxel-wise FCNs) required for tractogram estimation. To overcome these limitations, we propose Patch-CNN, a neural network with a minimal (non-voxel-wise) convolutional kernel (3$\times$3$\times$3). Compared with voxel-wise FCNs, this has the advantage of allowing the network to leverage local anatomical information. Compared with image-wise CNNs, the minimal kernel vastly reduces training data demand. Evaluated against both conventional model fitting and a voxel-wise FCN, Patch-CNN, trained on a single subject, is shown to improve the estimation of both scalar dMRI parameters and fibre orientation from six-direction DWIs. The improved fibre orientation estimation is shown to produce improved tractograms.
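A minimal sketch of a Patch-CNN-style network: 3$\times$3$\times$3 convolutions mapping six diffusion-weighted volumes to the six unique diffusion-tensor coefficients per voxel. Channel widths and depth are illustrative assumptions, not the paper's exact architecture:

```python
import torch
import torch.nn as nn

class PatchCNN(nn.Module):
    """Stack of minimal 3x3x3 convolutions for per-voxel DT regression."""
    def __init__(self, n_dwi=6, n_out=6, width=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(n_dwi, width, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(width, width, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(width, n_out, kernel_size=1),  # per-voxel regression head
        )

    def forward(self, x):    # x: (batch, 6, D, H, W) normalized DWIs
        return self.net(x)   # (batch, 6, D, H, W) tensor coefficients

model = PatchCNN()
out = model(torch.randn(1, 6, 32, 32, 32))
print(out.shape)             # torch.Size([1, 6, 32, 32, 32])
```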
2101.06213
Hsing-Chung Chen
Hsing-Chung Chen, Karisma Trinanda Putra, Jerry Chun-Wei Lin
A Novel Prediction Approach for Exploring PM2.5 Spatiotemporal Propagation Based on Convolutional Recursive Neural Networks
null
null
null
HCC-2021-01
cs.LG cs.NE
http://creativecommons.org/licenses/by/4.0/
The spread of PM2.5 pollutants that endanger health is difficult to predict because it involves many atmospheric variables. These micron particles can spread rapidly from their source to residential areas, increasing the risk of respiratory disease if exposed for long periods. A prediction system for PM2.5 propagation provides more detailed and accurate information as an early warning system to reduce health impacts on the community. Following the idea of transformative computing, the approach we propose in this paper allows computation on datasets obtained from massive-scale PM2.5 sensor nodes via a wireless sensor network. In this scheme, the deep learning model is implemented on the server nodes to extract spatiotemporal features from these datasets. This research was conducted using a dataset from air quality monitoring systems in Taiwan. This study presents a new model based on a convolutional recursive neural network to generate the prediction map. In general, the model is able to provide accurate predictive results by considering the bonds among measurement nodes both spatially and temporally. Therefore, the propagation of particulate PM2.5 pollutants can be precisely monitored using the model we propose in this paper.
[ { "created": "Fri, 15 Jan 2021 17:00:04 GMT", "version": "v1" } ]
2021-01-18
[ [ "Chen", "Hsing-Chung", "" ], [ "Putra", "Karisma Trinanda", "" ], [ "Chun-WeiLin", "Jerry", "" ] ]
The spread of PM2.5 pollutants that endanger health is difficult to predict because it involves many atmospheric variables. These micron particles can spread rapidly from their source to residential areas, increasing the risk of respiratory disease if exposed for long periods. A prediction system for PM2.5 propagation provides more detailed and accurate information as an early warning system to reduce health impacts on the community. Following the idea of transformative computing, the approach we propose in this paper allows computation on datasets obtained from massive-scale PM2.5 sensor nodes via a wireless sensor network. In this scheme, the deep learning model is implemented on the server nodes to extract spatiotemporal features from these datasets. This research was conducted using a dataset from air quality monitoring systems in Taiwan. This study presents a new model based on a convolutional recursive neural network to generate the prediction map. In general, the model is able to provide accurate predictive results by considering the bonds among measurement nodes both spatially and temporally. Therefore, the propagation of particulate PM2.5 pollutants can be precisely monitored using the model we propose in this paper.
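A convolutional-recursive architecture of the kind described can be sketched as a CNN spatial encoder feeding an LSTM over time. Grid size, widths, and the 24-frame horizon are placeholder assumptions, not the paper's configuration:

```python
import torch
import torch.nn as nn

class ConvRNN(nn.Module):
    """CNN extracts spatial features from each hourly PM2.5 grid, an
    LSTM propagates them in time, a decoder emits the next map."""
    def __init__(self, grid=16, hidden=128):
        super().__init__()
        self.grid = grid
        self.enc = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                                 nn.Flatten())
        self.rnn = nn.LSTM(16 * grid * grid, hidden, batch_first=True)
        self.dec = nn.Linear(hidden, grid * grid)

    def forward(self, x):                 # x: (batch, time, 1, grid, grid)
        b, t = x.shape[:2]
        feats = self.enc(x.flatten(0, 1)).view(b, t, -1)
        out, _ = self.rnn(feats)
        return self.dec(out[:, -1]).view(b, 1, self.grid, self.grid)

pred = ConvRNN()(torch.randn(2, 24, 1, 16, 16))  # 24 hourly frames in
print(pred.shape)                                 # torch.Size([2, 1, 16, 16])
```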
1709.10142
Arash Rahnama
Arash Rahnama and Panos J. Antsaklis
Resilient Learning-Based Control for Synchronization of Passive Multi-Agent Systems under Attack
null
null
null
null
cs.SY math.OC stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we establish synchronization for a group of output-passive agents that communicate with each other according to an underlying communication graph to achieve a common goal. We propose a distributed event-triggered control framework that will guarantee synchronization and considerably decrease the required communication load on the band-limited network. We define a general Byzantine attack on the event-triggered multi-agent network system and characterize its negative effects on synchronization. The Byzantine agents are capable of intelligently falsifying their data and manipulating the underlying communication graph by altering their respective control feedback weights. We introduce a decentralized detection framework and analyze its steady-state and transient performances. We propose a way of identifying individual Byzantine neighbors and a learning-based method of estimating the attack parameters. Lastly, we propose learning-based control approaches to mitigate the negative effects of the adversarial attack.
[ { "created": "Thu, 28 Sep 2017 19:36:53 GMT", "version": "v1" } ]
2017-10-02
[ [ "Rahnama", "Arash", "" ], [ "Antsaklis", "Panos J.", "" ] ]
In this paper, we establish synchronization for a group of output-passive agents that communicate with each other according to an underlying communication graph to achieve a common goal. We propose a distributed event-triggered control framework that will guarantee synchronization and considerably decrease the required communication load on the band-limited network. We define a general Byzantine attack on the event-triggered multi-agent network system and characterize its negative effects on synchronization. The Byzantine agents are capable of intelligently falsifying their data and manipulating the underlying communication graph by altering their respective control feedback weights. We introduce a decentralized detection framework and analyze its steady-state and transient performances. We propose a way of identifying individual Byzantine neighbors and a learning-based method of estimating the attack parameters. Lastly, we propose learning-based control approaches to mitigate the negative effects of the adversarial attack.
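The event-triggered mechanism at the core of such frameworks can be illustrated with a toy consensus loop: an agent broadcasts its state only when it drifts beyond a threshold from its last broadcast. The passivity analysis, attack model, and detection layers are the paper's contribution and are not reproduced; all constants here are made up:

```python
import numpy as np

def event_triggered_consensus(adj, x0, steps=400, gain=0.2, threshold=0.05):
    """Consensus where updates use last-broadcast states only."""
    lap = np.diag(adj.sum(1)) - adj          # graph Laplacian
    x = x0.astype(float).copy()
    x_hat = x.copy()                         # last broadcast states
    broadcasts = 0
    for _ in range(steps):
        due = np.abs(x - x_hat) > threshold  # event-trigger condition
        x_hat[due] = x[due]                  # triggered agents broadcast
        broadcasts += int(due.sum())
        x = x - gain * (lap @ x_hat)         # update from broadcast copies
    return x, broadcasts

adj = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]], float)
x, n = event_triggered_consensus(adj, np.array([1.0, -2.0, 3.0, 0.5]))
print(np.round(x, 2), f"{n} broadcasts vs {4 * 400} for periodic sending")
```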
2407.11766
Joseph Chen
Joseph Chen
Vectoring Languages
12 pages including references
null
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by/4.0/
Recent breakthroughs in large language models (LLMs) have stirred up global attention, and research has been accelerating non-stop since then. Philosophers and psychologists have also been researching the structure of language for decades, but they are having a hard time finding a theory that directly benefits from the breakthroughs of LLMs. In this article, we propose a novel structure of language that aligns well with the mechanisms behind language models, and go on to show that this structure is also better at capturing the diverse nature of language compared to previous methods. An analogy with linear algebra is adopted to strengthen the basis of this perspective. We further discuss the difference between this perspective and the design philosophy of current language models. Lastly, we discuss how this perspective can lead us to research directions that may accelerate scientific progress most rapidly.
[ { "created": "Tue, 16 Jul 2024 14:25:55 GMT", "version": "v1" } ]
2024-07-17
[ [ "Chen", "Joseph", "" ] ]
Recent breakthroughs in large language models (LLMs) have stirred up global attention, and research has been accelerating non-stop since then. Philosophers and psychologists have also been researching the structure of language for decades, but they are having a hard time finding a theory that directly benefits from the breakthroughs of LLMs. In this article, we propose a novel structure of language that aligns well with the mechanisms behind language models, and go on to show that this structure is also better at capturing the diverse nature of language compared to previous methods. An analogy with linear algebra is adopted to strengthen the basis of this perspective. We further discuss the difference between this perspective and the design philosophy of current language models. Lastly, we discuss how this perspective can lead us to research directions that may accelerate scientific progress most rapidly.
2006.03280
Fran\c{c}ois Schwarzentruber
Arthur Queffelec and Ocan Sankur and Fran\c{c}ois Schwarzentruber
Conflict-Based Search for Connected Multi-Agent Path Finding
null
null
null
null
cs.AI cs.MA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study a variant of the multi-agent path finding problem (MAPF) in which agents are required to remain connected to each other and to a designated base. This problem has applications in search and rescue missions where the entire execution must be monitored by a human operator. We revisit the conflict-based search algorithm known for MAPF, and define a variant where conflicts arise from disconnections rather than collisions. We study optimizations and give experimental results in which we compare our algorithms with approaches from the literature.
[ { "created": "Fri, 5 Jun 2020 08:02:36 GMT", "version": "v1" } ]
2020-06-08
[ [ "Queffelec", "Arthur", "" ], [ "Sankur", "Ocan", "" ], [ "Schwarzentruber", "François", "" ] ]
We study a variant of the multi-agent path finding problem (MAPF) in which agents are required to remain connected to each other and to a designated base. This problem has applications in search and rescue missions where the entire execution must be monitored by a human operator. We revisit the conflict-based search algorithm known for MAPF, and define a variant where conflicts arise from disconnections rather than collisions. We study optimizations and give experimental results in which we compare our algorithms with approaches from the literature.
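A skeleton of the disconnection-driven CBS loop, under stated assumptions: `low_level(agent, constraints)` plans a single-agent path honoring (agent, vertex, time) constraints and `connected(config)` checks the connectivity predicate; both are assumed callables, not implemented, and the branching rule is a simplification of the paper's:

```python
import heapq
import itertools

def connected_cbs(n_agents, low_level, connected):
    """CBS where a conflict is a time step at which the team's
    configuration is disconnected, rather than a collision."""
    counter = itertools.count()
    root = ([], [low_level(a, []) for a in range(n_agents)])
    open_list = [(sum(map(len, root[1])), next(counter), root)]
    while open_list:
        _, _, (constraints, paths) = heapq.heappop(open_list)
        horizon = max(map(len, paths))
        config = lambda t: [p[min(t, len(p) - 1)] for p in paths]
        t_bad = next((t for t in range(horizon) if not connected(config(t))), None)
        if t_bad is None:
            return paths                               # connected throughout
        for agent in range(n_agents):                  # branch on every agent
            extra = (agent, config(t_bad)[agent], t_bad)   # forbid vertex at t_bad
            new_path = low_level(agent, constraints + [extra])
            if new_path is not None:
                new_paths = paths[:agent] + [new_path] + paths[agent + 1:]
                node = (constraints + [extra], new_paths)
                heapq.heappush(open_list, (sum(map(len, new_paths)), next(counter), node))
    return None
```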
2404.05961
Parishad BehnamGhader
Parishad BehnamGhader, Vaibhav Adlakha, Marius Mosbach, Dzmitry Bahdanau, Nicolas Chapados, Siva Reddy
LLM2Vec: Large Language Models Are Secretly Powerful Text Encoders
null
null
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by/4.0/
Large decoder-only language models (LLMs) are the state-of-the-art models on most of today's NLP tasks and benchmarks. Yet, the community is only slowly adopting these models for text embedding tasks, which require rich contextualized representations. In this work, we introduce LLM2Vec, a simple unsupervised approach that can transform any decoder-only LLM into a strong text encoder. LLM2Vec consists of three simple steps: 1) enabling bidirectional attention, 2) masked next token prediction, and 3) unsupervised contrastive learning. We demonstrate the effectiveness of LLM2Vec by applying it to 3 popular LLMs ranging from 1.3B to 7B parameters and evaluate the transformed models on English word- and sequence-level tasks. We outperform encoder-only models by a large margin on word-level tasks and reach a new unsupervised state-of-the-art performance on the Massive Text Embeddings Benchmark (MTEB). Moreover, when combining LLM2Vec with supervised contrastive learning, we achieve state-of-the-art performance on MTEB among models that train only on publicly available data. Our strong empirical results and extensive analysis demonstrate that LLMs can be effectively transformed into universal text encoders in a parameter-efficient manner without the need for expensive adaptation or synthetic GPT-4 generated data.
[ { "created": "Tue, 9 Apr 2024 02:51:05 GMT", "version": "v1" } ]
2024-04-10
[ [ "BehnamGhader", "Parishad", "" ], [ "Adlakha", "Vaibhav", "" ], [ "Mosbach", "Marius", "" ], [ "Bahdanau", "Dzmitry", "" ], [ "Chapados", "Nicolas", "" ], [ "Reddy", "Siva", "" ] ]
Large decoder-only language models (LLMs) are the state-of-the-art models on most of today's NLP tasks and benchmarks. Yet, the community is only slowly adopting these models for text embedding tasks, which require rich contextualized representations. In this work, we introduce LLM2Vec, a simple unsupervised approach that can transform any decoder-only LLM into a strong text encoder. LLM2Vec consists of three simple steps: 1) enabling bidirectional attention, 2) masked next token prediction, and 3) unsupervised contrastive learning. We demonstrate the effectiveness of LLM2Vec by applying it to 3 popular LLMs ranging from 1.3B to 7B parameters and evaluate the transformed models on English word- and sequence-level tasks. We outperform encoder-only models by a large margin on word-level tasks and reach a new unsupervised state-of-the-art performance on the Massive Text Embeddings Benchmark (MTEB). Moreover, when combining LLM2Vec with supervised contrastive learning, we achieve state-of-the-art performance on MTEB among models that train only on publicly available data. Our strong empirical results and extensive analysis demonstrate that LLMs can be effectively transformed into universal text encoders in a parameter-efficient manner without the need for expensive adaptation or synthetic GPT-4 generated data.
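Once a decoder-only model has been adapted, its hidden states are pooled into a single text embedding. A minimal sketch of masked mean pooling with Hugging Face `transformers`; the checkpoint name is a tiny stand-in, and enabling bidirectional attention plus the MNTP and contrastive training stages are model-specific and omitted:

```python
import torch
from transformers import AutoModel, AutoTokenizer

name = "sshleifer/tiny-gpt2"   # stand-in model, not an LLM2Vec checkpoint
tok = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name)

texts = ["retrieval needs good embeddings",
         "llm2vec turns decoders into encoders"]
tok.pad_token = tok.eos_token                 # GPT-2 has no pad token
batch = tok(texts, padding=True, return_tensors="pt")
with torch.no_grad():
    hidden = model(**batch).last_hidden_state  # (B, T, D)
mask = batch["attention_mask"].unsqueeze(-1)   # ignore padding positions
emb = (hidden * mask).sum(1) / mask.sum(1)     # masked mean pooling
emb = torch.nn.functional.normalize(emb, dim=-1)
print(emb.shape, (emb[0] @ emb[1]).item())     # embeddings + cosine similarity
```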
2406.06967
Kailas Dayanandan
Kailas Dayanandan, Anand Sinha, Brejesh Lall
Dual Thinking and Perceptual Analysis of Deep Learning Models using Human Adversarial Examples
null
null
null
null
cs.CV cs.AI eess.IV
http://creativecommons.org/licenses/by-nc-nd/4.0/
The dual thinking framework considers fast, intuitive processing and slower, logical processing. The perception of dual thinking in vision requires images where inferences from intuitive and logical processing differ. We introduce an adversarial dataset to provide evidence for the dual thinking framework in human vision, which also aids in studying the qualitative behavior of deep learning models. Our study also addresses a major criticism of using classification models as computational models of human vision by using instance segmentation models that localize objects. The evidence underscores the importance of shape in identifying instances in human vision and shows that deep learning models lack an understanding of sub-structures, as indicated by errors related to the position and number of sub-components. Additionally, the similarity in errors made by models and intuitive human processing indicates that models only address intuitive thinking in human vision.
[ { "created": "Tue, 11 Jun 2024 05:50:34 GMT", "version": "v1" } ]
2024-06-12
[ [ "Dayanandan", "Kailas", "" ], [ "Sinha", "Anand", "" ], [ "Lall", "Brejesh", "" ] ]
The dual thinking framework considers fast, intuitive processing and slower, logical processing. The perception of dual thinking in vision requires images where inferences from intuitive and logical processing differ. We introduce an adversarial dataset to provide evidence for the dual thinking framework in human vision, which also aids in studying the qualitative behavior of deep learning models. Our study also addresses a major criticism of using classification models as computational models of human vision by using instance segmentation models that localize objects. The evidence underscores the importance of shape in identifying instances in human vision and shows that deep learning models lack an understanding of sub-structures, as indicated by errors related to the position and number of sub-components. Additionally, the similarity in errors made by models and intuitive human processing indicates that models only address intuitive thinking in human vision.
2005.00205
Baiji Liu
Baiji Liu and Songjun Cao and Sining Sun and Weibin Zhang and Long Ma
Multi-head Monotonic Chunkwise Attention For Online Speech Recognition
null
null
null
null
cs.CL cs.SD eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The attention mechanism of the Listen, Attend and Spell (LAS) model requires the whole input sequence to calculate the attention context and thus is not suitable for online speech recognition. To deal with this problem, we propose multi-head monotonic chunk-wise attention (MTH-MoChA), an improved version of MoChA. MTH-MoChA splits the input sequence into small chunks and computes multi-head attentions over the chunks. We also explore useful training strategies such as LSTM pooling, minimum word error rate training and SpecAugment to further improve the performance of MTH-MoChA. Experiments on AISHELL-1 data show that the proposed model, along with the training strategies, improves the character error rate (CER) of MoChA from 8.96% to 7.68% on the test set. On another 18,000-hour in-car speech dataset, MTH-MoChA obtains 7.28% CER, which is significantly better than a state-of-the-art hybrid system.
[ { "created": "Fri, 1 May 2020 04:00:51 GMT", "version": "v1" } ]
2020-05-04
[ [ "Liu", "Baiji", "" ], [ "Cao", "Songjun", "" ], [ "Sun", "Sining", "" ], [ "Zhang", "Weibin", "" ], [ "Ma", "Long", "" ] ]
The attention mechanism of the Listen, Attend and Spell (LAS) model requires the whole input sequence to calculate the attention context and thus is not suitable for online speech recognition. To deal with this problem, we propose multi-head monotonic chunk-wise attention (MTH-MoChA), an improved version of MoChA. MTH-MoChA splits the input sequence into small chunks and computes multi-head attentions over the chunks. We also explore useful training strategies such as LSTM pooling, minimum word error rate training and SpecAugment to further improve the performance of MTH-MoChA. Experiments on AISHELL-1 data show that the proposed model, along with the training strategies, improves the character error rate (CER) of MoChA from 8.96% to 7.68% on the test set. On another 18,000-hour in-car speech dataset, MTH-MoChA obtains 7.28% CER, which is significantly better than a state-of-the-art hybrid system.
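The chunkwise step is easy to isolate: for each decoder step, softmax-attend only over a fixed-size window of encoder frames ending at a monotonically chosen endpoint. A single-head sketch with given endpoints (MoChA learns them; MTH-MoChA would run several such heads with separate endpoints and projections):

```python
import torch
import torch.nn.functional as F

def chunkwise_attention(q, k, v, endpoints, chunk=4):
    """Hard monotonic chunkwise attention, single head, illustrative.
    q: (steps, D) decoder queries; k, v: (T, D) encoder states;
    endpoints: chosen attention endpoint t_i per decoder step."""
    outs = []
    for i, t in enumerate(endpoints):
        lo = max(0, t - chunk + 1)                       # chunk window [lo, t]
        scores = q[i] @ k[lo:t + 1].T / k.size(-1) ** 0.5
        outs.append(F.softmax(scores, dim=-1) @ v[lo:t + 1])
    return torch.stack(outs)

T, D = 20, 8
k = v = torch.randn(T, D)
q = torch.randn(3, D)
print(chunkwise_attention(q, k, v, endpoints=[5, 11, 19]).shape)  # (3, 8)
```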
2408.01245
Alexander Chunikhin
Alexander Yu. Chunikhin
CHTW-systems with resource-depended parameters. CHTW(R)-systems
10 pages, 2 figures. arXiv admin note: substantial text overlap with arXiv:2310.01587
null
null
PIBNASU-08/24
cs.LO
http://creativecommons.org/licenses/by/4.0/
In [1] the concept of CHTW-systems as a multidimensional representation of Petri nets was proposed, based on the assumption of a multidimensional distribution of tokens (resources) in positions (branes) and, accordingly, a multidimensional representation of transitions and arcs. This extension of Petri nets was developed under the assumption of stationarity of the CHTW-system, i.e., that its parameters are constant during the system's operation. We consider the case when the main parameters of a CHTW-system (threshold functions and rate functions) change in accordance with the values of the mark-functions (multidimensional resource) of some container branes of the same CHTW-system. This modification of the basic CHTW-system is designated a CHTW(R)-system, in which (R) denotes resource control of the system parameters.
[ { "created": "Fri, 2 Aug 2024 13:07:04 GMT", "version": "v1" } ]
2024-08-05
[ [ "Chunikhin", "Alexander Yu.", "" ] ]
In [1] the concept of CHTW-systems as a multidimensional representation of Petri nets was proposed, based on the assumption of a multidimensional distribution of tokens (resources) in positions (branes) and, accordingly, a multidimensional representation of transitions and arcs. This extension of Petri nets was developed under the assumption of stationarity of the CHTW-system, i.e., that its parameters are constant during the system's operation. We consider the case when the main parameters of a CHTW-system (threshold functions and rate functions) change in accordance with the values of the mark-functions (multidimensional resource) of some container branes of the same CHTW-system. This modification of the basic CHTW-system is designated a CHTW(R)-system, in which (R) denotes resource control of the system parameters.
2108.12108
Xinran Zhang
Xinran Zhang, Maosong Sun, Jiafeng Liu, Xiaobing Li
Lingxi: A Diversity-aware Chinese Modern Poetry Generation System
null
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Poetry generation has been a difficult task in natural language processing. Unlike plain neural text generation tasks, poetry has a high requirement for novelty, since an easily-understood sentence with too many high frequency words might not be considered as poetic, while adequately ambiguous sentences with low frequency words can possibly be novel and creative. Inspired by this, we present Lingxi, a diversity-aware Chinese modern poetry generation system. We propose the nucleus sampling with randomized head (NS-RH) algorithm, which randomizes the high frequency part ("head") of the predicted distribution in order to emphasize the "comparatively low frequency" words. The proposed algorithm can significantly increase the novelty of generated poetry compared with traditional sampling methods. The permutation of the distribution is controllable by tuning the filtering parameter that determines the "head" to permute, achieving diversity-aware sampling. We find that even when a large portion of the filtered vocabulary is randomized, the system can still generate fluent poetry, but with notably higher novelty. We also propose a semantic-similarity-based rejection sampling algorithm, which creates longer and more informative context on the basis of the short input poetry title while maintaining high semantic similarity to the title, alleviating the off-topic issue.
[ { "created": "Fri, 27 Aug 2021 03:33:28 GMT", "version": "v1" } ]
2021-08-30
[ [ "Zhang", "Xinran", "" ], [ "Sun", "Maosong", "" ], [ "Liu", "Jiafeng", "" ], [ "Li", "Xiaobing", "" ] ]
Poetry generation has been a difficult task in natural language processing. Unlike plain neural text generation tasks, poetry has a high requirement for novelty, since an easily-understood sentence with too many high frequency words might not be considered as poetic, while adequately ambiguous sentences with low frequency words can possibly be novel and creative. Inspired by this, we present Lingxi, a diversity-aware Chinese modern poetry generation system. We propose the nucleus sampling with randomized head (NS-RH) algorithm, which randomizes the high frequency part ("head") of the predicted distribution in order to emphasize the "comparatively low frequency" words. The proposed algorithm can significantly increase the novelty of generated poetry compared with traditional sampling methods. The permutation of the distribution is controllable by tuning the filtering parameter that determines the "head" to permute, achieving diversity-aware sampling. We find that even when a large portion of the filtered vocabulary is randomized, the system can still generate fluent poetry, but with notably higher novelty. We also propose a semantic-similarity-based rejection sampling algorithm, which creates longer and more informative context on the basis of the short input poetry title while maintaining high semantic similarity to the title, alleviating the off-topic issue.
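One plausible reading of NS-RH, sketched: find the nucleus "head" (the smallest prefix of the sorted distribution with mass at least p), shuffle the probability mass assigned to those head tokens, renormalize, and sample. Shuffling flattens the advantage of the highest-frequency words; the toy vocabulary and probabilities are made up:

```python
import numpy as np

def ns_rh_sample(probs, p=0.5, rng=np.random.default_rng(0)):
    """Nucleus sampling with a randomized head (illustrative reading)."""
    order = np.argsort(probs)[::-1]
    head_size = int(np.searchsorted(np.cumsum(probs[order]), p)) + 1
    head = order[:head_size]                  # nucleus token ids
    new = np.array(probs, dtype=float)
    new[head] = rng.permutation(new[head])    # randomize the head's mass
    new /= new.sum()
    return rng.choice(len(new), p=new)

vocab = ["the", "moon", "night", "obsidian", "ember", "hush"]
probs = np.array([0.45, 0.25, 0.12, 0.08, 0.06, 0.04])
print([vocab[ns_rh_sample(probs, p=0.7)] for _ in range(8)])
```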
1403.6251
Ludovic Mignot
Ludovic Mignot (LITIS Laboratory Normandie University, University of Rouen France), Nadia Ouali Sebti (LITIS Laboratory Normandie University, University of Rouen France), Djelloul Ziadi (LITIS Laboratory Normandie University, University of Rouen France)
K-Position, Follow, Equation and K-C-Continuation Tree Automata Constructions
null
null
null
null
cs.FL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
There exist several methods of computing an automaton recognizing the language denoted by a given regular expression: In the case of words, the position automaton P due to Glushkov, the c-continuation automaton C due to Champarnaud and Ziadi, the follow automaton F due to Ilie and Yu and the equation automaton E due to Antimirov. It has been shown that P and C are isomorphic and that E (resp. F) is a quotient of C (resp. of P). In this paper, we define from a given regular tree expression the k-position tree automaton P and the follow tree automaton F. Using the definition of the equation tree automaton E of Kuske and Meinecke and our previously defined k-C-continuation tree automaton C, we show that the previous morphic relations are still valid on tree expressions.
[ { "created": "Tue, 25 Mar 2014 07:51:12 GMT", "version": "v1" }, { "created": "Wed, 26 Mar 2014 08:53:25 GMT", "version": "v2" }, { "created": "Thu, 22 May 2014 07:44:18 GMT", "version": "v3" }, { "created": "Fri, 11 Jul 2014 05:59:20 GMT", "version": "v4" } ]
2014-07-14
[ [ "Mignot", "Ludovic", "", "LITIS Laboratory Normandie University, University of\n Rouen France" ], [ "Sebti", "Nadia Ouali", "", "LITIS Laboratory Normandie University,\n University of Rouen France" ], [ "Ziadi", "Djelloul", "", "LITIS Laboratory Normandie\n University, University of Rouen France" ] ]
There exist several methods of computing an automaton recognizing the language denoted by a given regular expression: In the case of words, the position automaton P due to Glushkov, the c-continuation automaton C due to Champarnaud and Ziadi, the follow automaton F due to Ilie and Yu and the equation automaton E due to Antimirov. It has been shown that P and C are isomorphic and that E (resp. F) is a quotient of C (resp. of P). In this paper, we define from a given regular tree expression the k-position tree automaton P and the follow tree automaton F. Using the definition of the equation tree automaton E of Kuske and Meinecke and our previously defined k-C-continuation tree automaton C, we show that the previous morphic relations are still valid on tree expressions.
1405.5507
Tianqing Wu
Tianqing Wu, Hong-Chuan Yang
Improved Performance of RF Energy Powered Wireless Sensor Node with Cooperative Beam Selection
17 pages, 5 figures
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
RF energy harvesting is a promising potential solution to provide convenient and perpetual energy supplies to low-power wireless sensor networks. In this paper, we investigate the energy harvesting performance of a wireless sensor node powered by harvesting RF energy from an existing multiuser MIMO system. Specifically, we propose a random unitary beamforming (RUB) based cooperative beam selection scheme to enhance the energy harvesting performance at the sensor. Under a constant total transmission power constraint, the multiuser MIMO system tries to select a maximal number of active beams for data transmission, while satisfying the energy harvesting requirement at the sensor. We derive the exact closed-form expression for the distribution function of the harvested energy in a coherence time over Rayleigh fading channels. We further investigate the performance tradeoff of the average harvested energy at the sensor versus the sum-rate of the multiuser MIMO system.
[ { "created": "Wed, 21 May 2014 18:18:07 GMT", "version": "v1" } ]
2014-05-22
[ [ "Wu", "Tianqing", "" ], [ "Yang", "Hong-Chuan", "" ] ]
RF energy harvesting is a promising potential solution to provide convenient and perpetual energy supplies to low-power wireless sensor networks. In this paper, we investigate the energy harvesting performance of a wireless sensor node powered by harvesting RF energy from an existing multiuser MIMO system. Specifically, we propose a random unitary beamforming (RUB) based cooperative beam selection scheme to enhance the energy harvesting performance at the sensor. Under a constant total transmission power constraint, the multiuser MIMO system tries to select a maximal number of active beams for data transmission, while satisfying the energy harvesting requirement at the sensor. We derive the exact closed-form expression for the distribution function of the harvested energy in a coherence time over Rayleigh fading channels. We further investigate the performance tradeoff of the average harvested energy at the sensor versus the sum-rate of the multiuser MIMO system.
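A greedy sketch of the selection idea under stated assumptions: total power is split equally across active beams, the harvested energy is taken proportional to the summed beam gains at the sensor, and the largest feasible beam count is chosen. The constants and the exact rule are illustrative, not the paper's scheme or analysis:

```python
import numpy as np

def select_beams(h, beams, p_total, e_req):
    """Activate as many random unitary beams as possible while the RF
    energy harvested at the sensor stays above the requirement."""
    gains = np.abs(beams.conj() @ h) ** 2      # per-beam gain |u^H h|^2 at sensor
    order = np.argsort(gains)[::-1]            # strongest beams first
    for m in range(len(beams), 0, -1):         # try the largest count first
        active = order[:m]
        harvested = (p_total / m) * gains[active].sum()
        if harvested >= e_req:
            return active
    return order[:1]                           # fall back to the best beam

rng = np.random.default_rng(3)
nt = 4
h = (rng.normal(size=nt) + 1j * rng.normal(size=nt)) / np.sqrt(2)  # Rayleigh
q, _ = np.linalg.qr(rng.normal(size=(nt, nt)) + 1j * rng.normal(size=(nt, nt)))
print(select_beams(h, q.T, p_total=1.0, e_req=0.5))
```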
2106.01176
Hossein Monshizadeh Naeen
Maliheh Roknizadeh, Hossein Monshizadeh Naeen
Hybrid Ensemble optimized algorithm based on Genetic Programming for imbalanced data classification
11 pages, 4 tables, 7 figures. Accepted at the Twelfth International Conference on Information Technology, Computer and Telecommunications
null
null
null
cs.LG cs.NE
http://creativecommons.org/licenses/by/4.0/
One of the most significant current discussions in the field of data mining is classifying imbalanced data. In recent years, several approaches have been proposed, including algorithm-level (internal) approaches, data-level (external) techniques, and cost-sensitive methods. Although extensive research has been carried out on imbalanced data classification, several challenges remain unsolved, such as inattention to the importance of the samples used for balancing, determining the appropriate number of classifiers, and the lack of optimization of the classifiers in the ensemble. The purpose of this paper is to improve the efficiency of the ensemble method in sampling the training data sets, especially in the minority class, and to determine better base classifiers for the ensemble than existing methods. We propose a hybrid ensemble algorithm based on Genetic Programming (GP) for two-class imbalanced data classification. This study uses historical data from the UCI Machine Learning Repository to assess minority classes in imbalanced datasets. The performance of our proposed algorithm is evaluated with RapidMiner Studio v7.5. Experimental results show that, with training-set sizes of 40% and 50%, the proposed method achieves better minority-class prediction accuracy on the specified datasets than the alternatives.
[ { "created": "Wed, 2 Jun 2021 14:14:38 GMT", "version": "v1" } ]
2021-06-03
[ [ "Roknizadeh", "Maliheh", "" ], [ "Naeen", "Hossein Monshizadeh", "" ] ]
One of the most significant current discussions in the field of data mining is classifying imbalanced data. In recent years, several approaches have been proposed, including algorithm-level (internal) approaches, data-level (external) techniques, and cost-sensitive methods. Although extensive research has been carried out on imbalanced data classification, several challenges remain unsolved, such as inattention to the importance of the samples used for balancing, determining the appropriate number of classifiers, and the lack of optimization of the classifiers in the ensemble. The purpose of this paper is to improve the efficiency of the ensemble method in sampling the training data sets, especially in the minority class, and to determine better base classifiers for the ensemble than existing methods. We propose a hybrid ensemble algorithm based on Genetic Programming (GP) for two-class imbalanced data classification. This study uses historical data from the UCI Machine Learning Repository to assess minority classes in imbalanced datasets. The performance of our proposed algorithm is evaluated with RapidMiner Studio v7.5. Experimental results show that, with training-set sizes of 40% and 50%, the proposed method achieves better minority-class prediction accuracy on the specified datasets than the alternatives.
cs/0110038
Paul Vitanyi
Joel Seiferas (University of Rochester) and Paul Vitanyi (CWI and University of Amsterdam)
Counting is Easy
null
J. Seiferas and P.M.B. Vitanyi, Counting is easy, J. Assoc. Comp. Mach. 35 (1988), pp. 985-1000
null
null
cs.CC cs.DS
null
For any fixed $k$, a remarkably simple single-tape Turing machine can simulate $k$ independent counters in real time. Informally, a counter is a storage unit that maintains a single integer (initially 0), incrementing it, decrementing it, or reporting its sign (positive, negative, or zero) on command. Any automaton that responds to each successive command as a counter would is said to simulate a counter. (Only for a sign inquiry is the response of interest, of course. And zeroness is the only real issue, since a simulator can readily use zero detection to keep track of positivity and negativity in finite-state control.) In this paper we describe a remarkably simple real-time simulation, based on just five simple rewriting rules, of any fixed number $k$ of independent counters. On a Turing machine with a single, binary work tape, the simulation runs in real time, handling an arbitrary counter command at each step. The space used by the simulation can be held to $(k+\epsilon) \log_2 n$ bits for the first $n$ commands, for any specified $\epsilon > 0$.
[ { "created": "Thu, 18 Oct 2001 13:21:01 GMT", "version": "v1" } ]
2007-05-23
[ [ "Seiferas", "Joel", "", "University of Rochester" ], [ "Vitanyi", "Paul", "", "CWI and\n University of Amsterdam" ] ]
For any fixed $k$, a remarkably simple single-tape Turing machine can simulate $k$ independent counters in real time. Informally, a counter is a storage unit that maintains a single integer (initially 0), incrementing it, decrementing it, or reporting its sign (positive, negative, or zero) on command. Any automaton that responds to each successive command as a counter would is said to simulate a counter. (Only for a sign inquiry is the response of interest, of course. And zeroness is the only real issue, since a simulator can readily use zero detection to keep track of positivity and negativity in finite-state control.) In this paper we describe a remarkably simple real-time simulation, based on just five simple rewriting rules, of any fixed number $k$ of independent counters. On a Turing machine with a single, binary work tape, the simulation runs in real time, handling an arbitrary counter command at each step. The space used by the simulation can be held to $(k+\epsilon) \log_2 n$ bits for the first $n$ commands, for any specified $\epsilon > 0$.
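The flavor of cheap counter maintenance can be illustrated with a redundant signed-digit representation (digits in {-1, 0, 1}), the standard trick for keeping carry chains short; updates are O(1) amortized here, since each carry step zeroes a digit. The paper's five-rule construction is stronger, achieving strict real time for $k$ counters on a single binary tape, and is not reproduced:

```python
class SignedDigitCounter:
    """Counter with digits in {-1, 0, 1}, least significant first."""
    def __init__(self):
        self.d = [0]

    def add(self, delta):            # delta is +1 or -1
        self.d[0] += delta
        i = 0
        while i < len(self.d) and abs(self.d[i]) == 2:
            carry = self.d[i] // 2   # push one carry upward (+1 or -1)
            self.d[i] = 0
            if i + 1 == len(self.d):
                self.d.append(0)
            self.d[i + 1] += carry
            i += 1

    def sign(self):                  # leading nonzero digit decides the sign
        for digit in reversed(self.d):
            if digit:
                return 1 if digit > 0 else -1
        return 0

c = SignedDigitCounter()
for cmd in [+1] * 5 + [-1] * 7:
    c.add(cmd)
print(c.sign())   # -> -1 (net value is -2)
```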