Column schema:

| Column | Type | Lengths / classes |
| --- | --- | --- |
| id | string | 9–10 |
| submitter | string | 1–64 |
| authors | string | 4–20.7k |
| title | string | 4–246 |
| comments | string | 1–523 |
| journal-ref | string | 4–404 |
| doi | string | 11–153 |
| report-no | string | 2–254 |
| categories | string | 5–98 |
| license | string | 9 classes |
| orig_abstract | string | 14–3.35k |
| versions | list | 1–60 |
| update_date | string | 10–10 |
| authors_parsed | list | 1–1.35k |
| abstract | string | 11–3.34k |

id: 2106.16233
submitter: Isel Grau
authors: Gonzalo Nápoles, Isel Grau, Agnieszka Jastrzebska, Yamisleydi Salgueiro
title: Long Short-term Cognitive Networks
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.LG cs.AI
license: http://creativecommons.org/licenses/by-nc-nd/4.0/
versions: [ { "created": "Wed, 30 Jun 2021 17:42:09 GMT", "version": "v1" }, { "created": "Thu, 16 Sep 2021 20:18:10 GMT", "version": "v2" } ]
update_date: 2021-09-20
authors_parsed: [ [ "Nápoles", "Gonzalo", "" ], [ "Grau", "Isel", "" ], [ "Jastrzebska", "Agnieszka", "" ], [ "Salgueiro", "Yamisleydi", "" ] ]
abstract (= orig_abstract): In this paper, we present a recurrent neural system named Long Short-term Cognitive Networks (LSTCNs) as a generalization of the Short-term Cognitive Network (STCN) model. Such a generalization is motivated by the difficulty of forecasting very long time series efficiently. The LSTCN model can be defined as a collection of STCN blocks, each processing a specific time patch of the (multivariate) time series being modeled. In this neural ensemble, each block passes information to the subsequent one in the form of weight matrices representing the prior knowledge. As a second contribution, we propose a deterministic learning algorithm to compute the learnable weights while preserving the prior knowledge resulting from previous learning processes. As a third contribution, we introduce a feature influence score as a proxy to explain the forecasting process in multivariate time series. The simulations using three case studies show that our neural system reports small forecasting errors while being significantly faster than state-of-the-art recurrent models.

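The block-chaining idea in the LSTCN abstract above lends itself to a compact sketch. The following is illustrative only, not the paper's algorithm: a ridge-regression solve stands in for the deterministic learning rule, the sigmoid transfer function and all shapes are assumptions, and `fit_block` is a hypothetical helper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_block(X, Y, W_prior, lam=1e-2):
    """Fit one block's learnable weights with a closed-form ridge solve
    (a stand-in for the paper's deterministic learning algorithm), after
    propagating inputs through the prior-knowledge matrix W_prior."""
    H = sigmoid(X @ W_prior)                       # apply prior knowledge
    Hb = np.hstack([H, np.ones((H.shape[0], 1))])  # add a bias column
    W = np.linalg.solve(Hb.T @ Hb + lam * np.eye(Hb.shape[1]), Hb.T @ Y)
    return W

# Chain blocks over consecutive time patches of a toy multivariate series:
# each block's learned weights become the next block's prior knowledge.
rng = np.random.default_rng(0)
series = rng.normal(size=(300, 4))   # T=300 steps, M=4 variables (invented)
W_prior = rng.normal(size=(4, 4))
for patch in np.split(series, 3):    # three time patches
    X, Y = patch[:-1], patch[1:]     # one-step-ahead input/target pairs
    W = fit_block(X, Y, W_prior)
    W_prior = W[:-1, :]              # drop bias row; pass on as prior knowledge
```
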
id: 2011.09884
submitter: Qing Guo
authors: Bing Yu and Hua Qi and Qing Guo and Felix Juefei-Xu and Xiaofei Xie and Lei Ma and Jianjun Zhao
title: DeepRepair: Style-Guided Repairing for DNNs in the Real-world Operational Environment
comments: 14 pages; 5 figures
journal-ref: null
doi: null
report-no: null
categories: cs.LG cs.AI cs.CV cs.SE
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
versions: [ { "created": "Thu, 19 Nov 2020 15:09:44 GMT", "version": "v1" } ]
update_date: 2020-11-20
authors_parsed: [ [ "Yu", "Bing", "" ], [ "Qi", "Hua", "" ], [ "Guo", "Qing", "" ], [ "Juefei-Xu", "Felix", "" ], [ "Xie", "Xiaofei", "" ], [ "Ma", "Lei", "" ], [ "Zhao", "Jianjun", "" ] ]
abstract (= orig_abstract): Deep neural networks (DNNs) are being widely applied to various real-world applications across domains due to their high performance (e.g., high accuracy on image classification). Nevertheless, a well-trained DNN can often raise errors after deployment, during practical use in the operational environment, due to the mismatch between the distribution of the training dataset and the potential unknown noise factors of the operational environment, e.g., weather, blur, and noise. Hence, an important problem arises for DNNs' real-world applications: how to repair deployed DNNs so that they correct failure samples (i.e., incorrect predictions) in the operational environment without harming their capability of handling normal or clean data. The number of failure samples we can collect in practice, caused by the noise factors in the operational environment, is often limited. Therefore, it is rather challenging to repair more similar failures based on the limited failure samples we can collect. In this paper, we propose a style-guided data augmentation for repairing DNNs in the operational environment. We propose a style transfer method to learn the unknown failure patterns within the failure data and introduce them into the training data via data augmentation. Moreover, we further propose clustering-based failure data generation for much more effective style-guided data augmentation. We conduct a large-scale evaluation with fifteen degradation factors that may occur in the real world, comparing against four state-of-the-art data augmentation methods and two DNN repairing methods, and demonstrate that our method can significantly enhance deployed DNNs on corrupted data in the operational environment, with even better accuracy on clean datasets.

id: 1407.2170
submitter: Giorgos Tolias
authors: Giorgos Tolias (INRIA), Teddy Furon (INRIA), Hervé Jégou (INRIA)
title: Orientation covariant aggregation of local descriptors with embeddings
comments: European Conference on Computer Vision (2014)
journal-ref: null
doi: null
report-no: null
categories: cs.CV
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
versions: [ { "created": "Tue, 8 Jul 2014 16:55:36 GMT", "version": "v1" }, { "created": "Tue, 25 Nov 2014 11:43:20 GMT", "version": "v2" } ]
update_date: 2014-11-26
authors_parsed: [ [ "Tolias", "Giorgos", "", "INRIA" ], [ "Furon", "Teddy", "", "INRIA" ], [ "Jégou", "Hervé", "", "INRIA" ] ]
abstract (= orig_abstract): Image search systems based on local descriptors typically achieve orientation invariance by aligning the patches on their dominant orientations. Albeit successful, this choice introduces too much invariance because it does not guarantee that the patches are rotated consistently. This paper introduces an aggregation strategy of local descriptors that achieves this covariance property by jointly encoding the angle in the aggregation stage in a continuous manner. It is combined with an efficient monomial embedding to provide a codebook-free method to aggregate local descriptors into a single vector representation. Our strategy is also compatible with, and employed alongside, several popular encoding methods, in particular bag-of-words, VLAD, and the Fisher vector. Our geometry-aware aggregation strategy is effective for image search, as shown by experiments performed on standard benchmarks for image and particular object retrieval, namely Holidays and Oxford buildings.

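Schematically, the covariant aggregation described above can be read as follows; this is an interpretation of the abstract, not the paper's exact formulation. Each descriptor $x_i$ with dominant orientation $\theta_i$ is mapped through a monomial embedding $\varphi$ and jointly encoded with a continuous angle embedding $g$, e.g. truncated Fourier features:

$$
V = \sum_{i} \varphi(x_i) \otimes g(\theta_i),
\qquad
g(\theta) = \big(1,\ \sqrt{2}\cos\theta,\ \sqrt{2}\sin\theta,\ \dots,\ \sqrt{2}\cos k\theta,\ \sqrt{2}\sin k\theta\big).
$$

A global rotation of the image shifts every $\theta_i$ by the same $\delta$ and acts on $V$ in a predictable way instead of being discarded, which is the covariance property the abstract contrasts with plain orientation invariance.
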
id: 2403.11162
submitter: Xiaoyu Wu
authors: Xiaoyu Wu, Yang Hua, Chumeng Liang, Jiaru Zhang, Hao Wang, Tao Song, Haibing Guan
title: CGI-DM: Digital Copyright Authentication for Diffusion Models via Contrasting Gradient Inversion
comments: Accepted by CVPR 2024
journal-ref: null
doi: null
report-no: null
categories: cs.CV cs.AI cs.CR cs.CY cs.LG
license: http://creativecommons.org/licenses/by/4.0/
versions: [ { "created": "Sun, 17 Mar 2024 10:06:38 GMT", "version": "v1" } ]
update_date: 2024-03-19
authors_parsed: [ [ "Wu", "Xiaoyu", "" ], [ "Hua", "Yang", "" ], [ "Liang", "Chumeng", "" ], [ "Zhang", "Jiaru", "" ], [ "Wang", "Hao", "" ], [ "Song", "Tao", "" ], [ "Guan", "Haibing", "" ] ]
abstract (= orig_abstract): Diffusion Models (DMs) have evolved into advanced image generation tools, especially for few-shot generation where a pretrained model is fine-tuned on a small set of images to capture a specific style or object. Despite their success, concerns exist about potential copyright violations stemming from the use of unauthorized data in this process. In response, we present Contrasting Gradient Inversion for Diffusion Models (CGI-DM), a novel method featuring vivid visual representations for digital copyright authentication. Our approach involves removing partial information of an image and recovering missing details by exploiting conceptual differences between the pretrained and fine-tuned models. We formulate the differences as KL divergence between latent variables of the two models when given the same input image, which can be maximized through Monte Carlo sampling and Projected Gradient Descent (PGD). The similarity between original and recovered images serves as a strong indicator of potential infringements. Extensive experiments on the WikiArt and Dreambooth datasets demonstrate the high accuracy of CGI-DM in digital copyright authentication, surpassing alternative validation techniques. Code implementation is available at https://github.com/Nicholas0228/Revelio.

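The optimization sketched in the CGI-DM abstract can be written schematically as follows, with $x'$ the image whose removed details are being recovered and $S$ the set of images consistent with the retained partial information; this is a reading of the abstract, not the paper's exact parameterization:

$$
\max_{x' \in S}\; D_{\mathrm{KL}}\!\big(p_{\theta_{\text{ft}}}(z \mid x') \,\big\|\, p_{\theta_{\text{pre}}}(z \mid x')\big),
\qquad
x'_{t+1} = \Pi_{S}\!\big(x'_t + \alpha\, \nabla_{x'} \widehat{D}_{\mathrm{KL}}\big),
$$

where $\widehat{D}_{\mathrm{KL}}$ is the Monte Carlo estimate over latent variables $z$ and $\Pi_S$ is the projection step of PGD. High similarity between the recovered $x'$ and the original image then indicates that the fine-tuned model was trained on it.
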
id: 1807.02693
submitter: Mohammad Khodaei
authors: Hamid Noroozi, Mohammad Khodaei, Panos Papadimitratos
title: VPKIaaS: A Highly-Available and Dynamically-Scalable Vehicular Public-Key Infrastructure
comments: 3 pages, 4 figures, Proceedings of the 11th ACM Conference on Security & Privacy in Wireless and Mobile Networks (WiSec), Stockholm, Sweden, June 2018
journal-ref: ACM WiSec, Stockholm, Sweden, June 2018, pp. 302-304
doi: 10.1145/3212480.3226100
report-no: null
categories: cs.CR
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
versions: [ { "created": "Sat, 7 Jul 2018 16:59:46 GMT", "version": "v1" } ]
update_date: 2018-07-10
authors_parsed: [ [ "Noroozi", "Hamid", "" ], [ "Khodaei", "Mohammad", "" ], [ "Papadimitratos", "Panos", "" ] ]
abstract (= orig_abstract): The central building block of secure and privacy-preserving Vehicular Communication (VC) systems is a Vehicular Public-Key Infrastructure (VPKI), which provides vehicles with multiple anonymized credentials, termed pseudonyms. These pseudonyms are used to ensure message authenticity and integrity while preserving vehicle (and thus passenger) privacy. In the light of emerging large-scale multi-domain VC environments, the efficiency of the VPKI and, more broadly, its scalability are paramount. In this extended abstract, we leverage the state-of-the-art VPKI system and enhance its functionality towards a highly-available and dynamically-scalable design; this ensures that the system remains operational in the presence of benign failures or any resource-depletion attack, and that it dynamically scales out, or possibly scales in, according to the request arrival rate. Our full-blown implementation on the Google Cloud Platform shows that deploying a VPKI for a large-scale scenario can be cost-effective while efficiently issuing pseudonyms to requesters.

id: 1507.08087
submitter: Marko Van Dooren
authors: Benoit Desouter, Tom Schrijvers, Marko van Dooren
title: Tabling as a Library with Delimited Control
comments: 15 pages. To appear in Theory and Practice of Logic Programming (TPLP), Proceedings of ICLP 2015
journal-ref: Theory and Practice of Logic Programming 15 (2015) 419-433
doi: 10.1017/S1471068415000137
report-no: null
categories: cs.PL
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
versions: [ { "created": "Wed, 29 Jul 2015 10:01:22 GMT", "version": "v1" } ]
update_date: 2020-02-19
authors_parsed: [ [ "Desouter", "Benoit", "" ], [ "Schrijvers", "Tom", "" ], [ "van Dooren", "Marko", "" ] ]
abstract (= orig_abstract): Tabling is probably the most widely studied extension of Prolog. But despite its importance and practicality, tabling is not implemented by most Prolog systems. Existing approaches require substantial changes to the Prolog engine, which is an investment out of reach of most systems. To enable more widespread adoption, we present a new implementation of tabling in under 600 lines of Prolog code. Our lightweight approach relies on delimited control and provides reasonable performance.

id: 1811.02886
submitter: Dave Cliff
authors: Ellie Birbeck and Dave Cliff
title: Using Stock Prices as Ground Truth in Sentiment Analysis to Generate Profitable Trading Signals
comments: 8 pages, 6 figures. To be presented at IEEE Symposium on Computational Intelligence in Financial Engineering (CIFEr), Bengaluru, November 18-21, 2018
journal-ref: null
doi: null
report-no: null
categories: cs.CE q-fin.ST
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
versions: [ { "created": "Wed, 7 Nov 2018 13:55:01 GMT", "version": "v1" } ]
update_date: 2018-11-08
authors_parsed: [ [ "Birbeck", "Ellie", "" ], [ "Cliff", "Dave", "" ] ]
abstract (= orig_abstract): The increasing availability of "big" (large volume) social media data has motivated a great deal of research in applying sentiment analysis to predict the movement of prices within financial markets. Previous work in this field investigates how the true sentiment of text (i.e. positive or negative opinions) can be used for financial predictions, based on the assumption that sentiments expressed online are representative of the true market sentiment. Here we consider the converse idea: that using the stock price as the ground truth in the system may be a better indication of sentiment. Tweets are labelled as Buy or Sell depending on whether the stock price discussed rose or fell over the following hour, and from this, stock-specific dictionaries are built for individual companies. A Bayesian classifier is used to generate stock predictions, which are input to an automated trading algorithm. Placing 468 trades over a 1-month period yields a return rate of 5.18%, which annualises to approximately 83% per annum. This approach performs significantly better than random chance and outperforms the two baseline sentiment analysis methods tested.

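The annualized figure quoted above is consistent with compounding the 1-month return over 12 periods:

$$
(1 + 0.0518)^{12} \approx 1.83 \quad\Longrightarrow\quad \text{annualized return} \approx 83\% \text{ per annum}.
$$
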
id: 2407.20695
submitter: Ciaran Eising
authors: Mirza Akhi Khatun, Mangolika Bhattacharya, Ciarán Eising, Lubna Luxmi Dhirani
title: Time Series Anomaly Detection with CNN for Environmental Sensors in Healthcare-IoT
comments: null
journal-ref: Proceedings of the 12th IEEE International Conference on Healthcare Informatics (IEEE ICHI 2024)
doi: null
report-no: null
categories: cs.LG cs.CR cs.CV
license: http://creativecommons.org/licenses/by/4.0/
versions: [ { "created": "Tue, 30 Jul 2024 09:43:42 GMT", "version": "v1" } ]
update_date: 2024-07-31
authors_parsed: [ [ "Khatun", "Mirza Akhi", "" ], [ "Bhattacharya", "Mangolika", "" ], [ "Eising", "Ciarán", "" ], [ "Dhirani", "Lubna Luxmi", "" ] ]
abstract (= orig_abstract): This research develops a new method to detect anomalies in time series data using Convolutional Neural Networks (CNNs) in healthcare-IoT. The proposed method creates a Distributed Denial of Service (DDoS) attack using an IoT network simulator, Cooja, which emulates environmental sensors such as temperature and humidity. CNNs detect anomalies in time series data, resulting in a 92% accuracy in identifying possible attacks.

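As a concrete illustration of the kind of model this abstract names, a minimal 1D-CNN classifier over sliding windows of sensor readings might look as below. Everything here is an assumption (window length, two channels for temperature and humidity, layer sizes); the paper's actual architecture is not specified in the abstract.

```python
import tensorflow as tf

WINDOW, N_SENSORS = 32, 2  # hypothetical: 32 time steps of temperature + humidity

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, N_SENSORS)),
    tf.keras.layers.Conv1D(32, kernel_size=3, activation="relu"),
    tf.keras.layers.MaxPooling1D(pool_size=2),
    tf.keras.layers.Conv1D(64, kernel_size=3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # 1 = anomalous window
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```
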
id: 1705.02882
submitter: Michele Polese
authors: Marco Mezzavilla, Menglei Zhang, Michele Polese, Russell Ford, Sourjya Dutta, Sundeep Rangan, Michele Zorzi
title: End-to-End Simulation of 5G mmWave Networks
comments: 25 pages, 16 figures, submitted to IEEE Communications Surveys and Tutorials (revised Jan. 2018)
journal-ref: null
doi: 10.1109/COMST.2018.2828880
report-no: null
categories: cs.NI
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
versions: [ { "created": "Mon, 8 May 2017 14:01:29 GMT", "version": "v1" }, { "created": "Wed, 4 Oct 2017 07:02:23 GMT", "version": "v2" }, { "created": "Mon, 5 Feb 2018 07:21:20 GMT", "version": "v3" } ]
update_date: 2018-09-06
authors_parsed: [ [ "Mezzavilla", "Marco", "" ], [ "Zhang", "Menglei", "" ], [ "Polese", "Michele", "" ], [ "Ford", "Russell", "" ], [ "Dutta", "Sourjya", "" ], [ "Rangan", "Sundeep", "" ], [ "Zorzi", "Michele", "" ] ]
abstract (= orig_abstract): Due to its potential for multi-gigabit and low latency wireless links, millimeter wave (mmWave) technology is expected to play a central role in 5th generation cellular systems. While there has been considerable progress in understanding the mmWave physical layer, innovations will be required at all layers of the protocol stack, in both the access and the core network. Discrete-event network simulation is essential for end-to-end, cross-layer research and development. This paper provides a tutorial on a recently developed full-stack mmWave module integrated into the widely used open-source ns-3 simulator. The module includes a number of detailed statistical channel models as well as the ability to incorporate real measurements or ray-tracing data. The Physical (PHY) and Medium Access Control (MAC) layers are modular and highly customizable, making it easy to integrate algorithms or compare Orthogonal Frequency Division Multiplexing (OFDM) numerologies, for example. The module is interfaced with the core network of the ns-3 Long Term Evolution (LTE) module for full-stack simulations of end-to-end connectivity, and advanced architectural features, such as dual-connectivity, are also available. To facilitate the understanding of the module, and verify its correct functioning, we provide several examples that show the performance of the custom mmWave stack as well as custom congestion control algorithms designed specifically for efficient utilization of the mmWave channel.

id: 2210.03087
submitter: Wang Zhu
authors: Jacob Krantz, Shurjo Banerjee, Wang Zhu, Jason Corso, Peter Anderson, Stefan Lee and Jesse Thomason
title: Iterative Vision-and-Language Navigation
comments: Accepted by CVPR 2023
journal-ref: null
doi: null
report-no: null
categories: cs.CV cs.CL cs.RO
license: http://creativecommons.org/licenses/by/4.0/
versions: [ { "created": "Thu, 6 Oct 2022 17:46:00 GMT", "version": "v1" }, { "created": "Wed, 20 Dec 2023 17:24:33 GMT", "version": "v2" }, { "created": "Sun, 24 Dec 2023 05:37:26 GMT", "version": "v3" } ]
update_date: 2023-12-27
authors_parsed: [ [ "Krantz", "Jacob", "" ], [ "Banerjee", "Shurjo", "" ], [ "Zhu", "Wang", "" ], [ "Corso", "Jason", "" ], [ "Anderson", "Peter", "" ], [ "Lee", "Stefan", "" ], [ "Thomason", "Jesse", "" ] ]
abstract (= orig_abstract): We present Iterative Vision-and-Language Navigation (IVLN), a paradigm for evaluating language-guided agents navigating in a persistent environment over time. Existing Vision-and-Language Navigation (VLN) benchmarks erase the agent's memory at the beginning of every episode, testing the ability to perform cold-start navigation with no prior information. However, deployed robots occupy the same environment for long periods of time. The IVLN paradigm addresses this disparity by training and evaluating VLN agents that maintain memory across tours of scenes that consist of up to 100 ordered instruction-following Room-to-Room (R2R) episodes, each defined by an individual language instruction and a target path. We present discrete and continuous Iterative Room-to-Room (IR2R) benchmarks comprising about 400 tours each in 80 indoor scenes. We find that extending the implicit memory of high-performing transformer VLN agents is not sufficient for IVLN, but agents that build maps can benefit from environment persistence, motivating a renewed focus on map-building agents in VLN.

id: 2008.08932
submitter: J Terry
authors: J. K. Terry, Benjamin Black, Ananth Hari
title: SuperSuit: Simple Microwrappers for Reinforcement Learning Environments
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.LG cs.AI
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
versions: [ { "created": "Mon, 17 Aug 2020 00:30:06 GMT", "version": "v1" } ]
update_date: 2021-11-17
authors_parsed: [ [ "Terry", "J. K.", "" ], [ "Black", "Benjamin", "" ], [ "Hari", "Ananth", "" ] ]
abstract (= orig_abstract): In reinforcement learning, wrappers are universally used to transform the information that passes between a model and an environment. Despite their ubiquity, no library exists with reasonable implementations of all popular preprocessing methods. This leads to unnecessary bugs, code inefficiencies, and wasted developer time. Accordingly, we introduce SuperSuit, a Python library that includes all popular wrappers, as well as wrappers that can easily apply lambda functions to observations, actions, and rewards. It's compatible with the standard Gym environment specification, as well as the PettingZoo specification for multi-agent environments. The library is available at https://github.com/PettingZoo-Team/SuperSuit, and can be installed via pip.

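To make the one-liner-wrapper claim concrete, usage looks roughly like the sketch below. SuperSuit's wrappers are versioned (e.g. `frame_stack_v1`), and exact names and signatures vary across releases, so treat this as a sketch of the documented convention rather than a pinned example.

```python
import gym
import supersuit as ss

env = gym.make("CartPole-v1")

# Stack the last 4 observations into one, a common preprocessing step.
env = ss.frame_stack_v1(env, 4)

# Apply an arbitrary lambda to every observation, as the abstract describes.
env = ss.observation_lambda_v0(env, lambda obs, obs_space: obs / 10.0)
```
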
id: 2405.13010
submitter: Khanh-Tung Tran
authors: Khanh-Tung Tran, Barry O'Sullivan, Hoang D. Nguyen
title: UCCIX: Irish-eXcellence Large Language Model
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.CL cs.AI
license: http://creativecommons.org/licenses/by/4.0/
versions: [ { "created": "Mon, 13 May 2024 13:19:27 GMT", "version": "v1" } ]
update_date: 2024-05-24
authors_parsed: [ [ "Tran", "Khanh-Tung", "" ], [ "O'Sullivan", "Barry", "" ], [ "Nguyen", "Hoang D.", "" ] ]
abstract (= orig_abstract): The development of Large Language Models (LLMs) has predominantly focused on high-resource languages, leaving extremely low-resource languages like Irish with limited representation. This work presents UCCIX, a pioneering effort in the development of an open-source Irish-based LLM. We propose a novel framework for continued pre-training of LLMs specifically adapted to extremely low-resource languages, requiring only a fraction of the textual data typically needed for training LLMs according to scaling laws. Our model, based on Llama 2-13B, outperforms much larger models on Irish language tasks with up to 12% performance improvement, showcasing the effectiveness and efficiency of our approach. We also contribute comprehensive Irish benchmarking datasets, including IrishQA, a question-answering dataset, and an Irish version of MT-bench. These datasets enable rigorous evaluation and facilitate future research in Irish LLM systems. Our work aims to preserve and promote the Irish language, knowledge, and culture of Ireland in the digital era, while providing a framework for adapting LLMs to other indigenous languages.

id: 1703.03905
submitter: Alireza Poshtkohi
authors: Alireza Poshtkohi, M.B. Ghaznavi-Ghoushchi
title: DotDFS: A Grid-based high-throughput file transfer system
comments: 28 pages, 21 figures
journal-ref: Parallel Computing, 37 (2011) 114-136
doi: 10.1016/j.parco.2010.12.003
report-no: null
categories: cs.DC
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
versions: [ { "created": "Sat, 11 Mar 2017 04:17:07 GMT", "version": "v1" }, { "created": "Sat, 18 Mar 2017 08:47:54 GMT", "version": "v2" } ]
update_date: 2017-03-21
authors_parsed: [ [ "Poshtkohi", "Alireza", "" ], [ "Ghaznavi-Ghoushchi", "M. B.", "" ] ]
abstract (= orig_abstract): The DotGrid platform is a Grid infrastructure integrated with a set of open, standard protocols, recently implemented on top of Microsoft .NET on Windows and Mono .NET on UNIX/Linux. The DotGrid infrastructure, along with its proposed protocols, provides a solid approach to targeting other platforms, e.g., the native C/C++ runtime. In this paper, we propose a new file transfer protocol called DotDFS as a high-throughput distributed file transfer component for DotGrid. DotDFS introduces open binary protocols for efficient file transfers on current Grid infrastructures. The DotDFS protocol also provides mechanisms for multiple file streams to achieve high-throughput file transfer, similar to the GridFTP protocol, but through a new parallel TCP connection-oriented paradigm. In our LAN tests, we achieved better results than the Globus GridFTP implementation, particularly with multiple TCP streams and directory tree transfers. Our memory-to-memory LAN tests show that DotDFS reaches 94% of the bottleneck bandwidth while GridFTP reaches 91%. In LAN disk-to-disk tests, comparing the DotDFS protocol with the GridFTP protocol reveals a set of interesting technical problems in GridFTP, both in the nature of the protocol and in its implementation by Globus. In our WAN experimental studies, we propose a new approach for the analytical modeling of file transfer protocols such as DotDFS, inspired by sampling, experimentation, and mathematical interpolation. The cross-platform and open standards-based features of DotDFS provide a substantial framework for unifying data access and resource sharing in real heterogeneous Grid environments.

id: 2107.12979
submitter: Beren Millidge Mr
authors: Beren Millidge, Anil Seth, Christopher L Buckley
title: Predictive Coding: a Theoretical and Experimental Review
comments: 27/07/21 initial upload; 14/01/22 maths fix; 05/07/22 maths fix; 12/07/22 text fixes
journal-ref: null
doi: null
report-no: null
categories: cs.AI cs.NE q-bio.NC
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
versions: [ { "created": "Tue, 27 Jul 2021 17:44:21 GMT", "version": "v1" }, { "created": "Fri, 14 Jan 2022 22:13:59 GMT", "version": "v2" }, { "created": "Tue, 5 Jul 2022 13:26:48 GMT", "version": "v3" }, { "created": "Tue, 12 Jul 2022 19:51:47 GMT", "version": "v4" } ]
update_date: 2022-07-14
authors_parsed: [ [ "Millidge", "Beren", "" ], [ "Seth", "Anil", "" ], [ "Buckley", "Christopher L", "" ] ]
abstract (= orig_abstract): Predictive coding offers a potentially unifying account of cortical function, postulating that the core function of the brain is to minimize prediction errors with respect to a generative model of the world. The theory is closely related to the Bayesian brain framework and, over the last two decades, has gained substantial influence in the fields of theoretical and cognitive neuroscience. A large body of research has arisen, based both on empirically testing improved and extended theoretical and mathematical models of predictive coding, and on evaluating their potential biological plausibility for implementation in the brain and the concrete neurophysiological and psychological predictions made by the theory. Despite this enduring popularity, however, no comprehensive review of predictive coding theory, and especially of recent developments in this field, exists. Here, we provide a comprehensive review of the core mathematical structure and logic of predictive coding, thus complementing recent tutorials in the literature. We also review a wide range of classic and recent work within the framework, ranging from the neurobiologically realistic microcircuits that could implement predictive coding, to the close relationship between predictive coding and the widely used backpropagation-of-error algorithm, as well as surveying the close relationships between predictive coding and modern machine learning techniques.

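For orientation, the core quantities such a review covers take the following generic hierarchical form (notation varies across the literature; this is the textbook version, not a claim about the paper's exact conventions). With layer activities $\mu_l$, weights $W_l$, nonlinearity $f$, and precisions $\Pi_l$:

$$
\varepsilon_l = \mu_l - f(W_l \mu_{l+1}),
\qquad
\mathcal{F} = \tfrac{1}{2} \sum_l \varepsilon_l^{\top} \Pi_l \varepsilon_l,
$$

$$
\dot{\mu}_l \propto -\frac{\partial \mathcal{F}}{\partial \mu_l}
= -\,\Pi_l \varepsilon_l + W_{l-1}^{\top}\big(f'(W_{l-1}\mu_l) \odot \Pi_{l-1} \varepsilon_{l-1}\big),
$$

so inference descends on precision-weighted prediction errors and learning follows $\partial\mathcal{F}/\partial W_l$; the relationship to backpropagation mentioned in the abstract arises at the fixed points of these dynamics.
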
id: 2009.12005
submitter: Zhaojiang Lin
authors: Zhaojiang Lin, Andrea Madotto, Genta Indra Winata, Pascale Fung
title: MinTL: Minimalist Transfer Learning for Task-Oriented Dialogue Systems
comments: EMNLP 2020 camera ready
journal-ref: null
doi: null
report-no: null
categories: cs.CL cs.AI
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
versions: [ { "created": "Fri, 25 Sep 2020 02:19:13 GMT", "version": "v1" }, { "created": "Mon, 28 Sep 2020 06:43:17 GMT", "version": "v2" } ]
update_date: 2020-09-29
authors_parsed: [ [ "Lin", "Zhaojiang", "" ], [ "Madotto", "Andrea", "" ], [ "Winata", "Genta Indra", "" ], [ "Fung", "Pascale", "" ] ]
abstract (= orig_abstract): In this paper, we propose Minimalist Transfer Learning (MinTL) to simplify the system design process of task-oriented dialogue systems and alleviate the over-dependency on annotated data. MinTL is a simple yet effective transfer learning framework which allows us to plug in pre-trained seq2seq models and jointly learn dialogue state tracking and dialogue response generation. Unlike previous approaches, which use a copy mechanism to "carry over" the old dialogue state to the new one, we introduce Levenshtein belief spans (Lev), which allow efficient dialogue state tracking with a minimal generation length. We instantiate our learning framework with two pre-trained backbones, T5 and BART, and evaluate them on MultiWOZ. Extensive experiments demonstrate that: 1) our systems establish new state-of-the-art results on end-to-end response generation, 2) MinTL-based systems are more robust than baseline methods in the low-resource setting, achieving competitive results with only 20% of the training data, and 3) Lev greatly improves inference efficiency.

id: 1802.05911
submitter: Mehmet Baygin
authors: Mehmet Baygin, Mehmet Karakose, Alisan Sarimaden, Erhan Akin
title: An Image Processing based Object Counting Approach for Machine Vision Application
comments: International Conference on Advances and Innovations in Engineering (ICAIE)
journal-ref: null
doi: null
report-no: null
categories: cs.CV
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
versions: [ { "created": "Fri, 16 Feb 2018 12:31:35 GMT", "version": "v1" } ]
update_date: 2018-02-19
authors_parsed: [ [ "Baygin", "Mehmet", "" ], [ "Karakose", "Mehmet", "" ], [ "Sarimaden", "Alisan", "" ], [ "Akin", "Erhan", "" ] ]
abstract (= orig_abstract): Machine vision applications are low-cost, high-precision measurement systems frequently used on production lines. With these systems, which provide contactless control and measurement, production facilities can reach high production volumes without errors. Machine vision operations such as product counting, error control, and dimension measurement can be performed with a camera. In this paper, a machine vision application is proposed that can perform object-independent product counting. The proposed approach is based on Otsu thresholding and the Hough transform, and performs automatic counting independently of product type and color. A single camera is used in the system. Through this camera, images of the products passing along a conveyor are taken, and various image processing algorithms are applied to them. Using images obtained from a real experimental setup, a real-time machine vision application was built on this approach. The experimental studies performed show that the proposed approach gives fast, accurate, and reliable results.

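The pipeline named in this abstract (Otsu thresholding plus a Hough transform) maps directly onto standard OpenCV calls. The sketch below assumes roughly circular products and invented parameter values and file name; the paper's actual parameters and conveyor-specific preprocessing are not given in the abstract.

```python
import cv2

frame = cv2.imread("conveyor_frame.png")  # hypothetical captured frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
blur = cv2.GaussianBlur(gray, (5, 5), 0)

# Otsu chooses the binarization threshold automatically (the 0 is ignored).
_, binary = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Circular Hough transform to locate, and hence count, round products;
# for non-circular products, connected components on `binary` would do.
circles = cv2.HoughCircles(blur, cv2.HOUGH_GRADIENT, dp=1.2, minDist=30,
                           param1=100, param2=40, minRadius=10, maxRadius=80)
count = 0 if circles is None else circles.shape[1]
print(f"products in frame: {count}")
```
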
id: 1003.5635
submitter: Ashley Smith
authors: Fahad Al-Zahrani
title: Web-Based Learning and Training for Virtual Metrology Lab
comments: null
journal-ref: Journal of Telecommunications, Volume 1, Issue 2, pp. 42-54, March 2010
doi: null
report-no: null
categories: cs.OH
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
versions: [ { "created": "Mon, 29 Mar 2010 18:35:08 GMT", "version": "v1" } ]
update_date: 2010-03-30
authors_parsed: [ [ "Al-Zahrani", "Fahad", "" ] ]
abstract (= orig_abstract): The use of the World Wide Web for distance education has received increasing attention over the past decades. The real challenge of adapting this technology to engineering education and training is facilitating laboratory experiments over the Internet. In the sciences, measurement plays an important role. The accuracy of the measurement, as well as the units, helps scientists better understand phenomena occurring in nature. This paper introduces Metrology educators to the use and adoption of Java applets to create virtual, online Metrology laboratories for students. These techniques have been used to successfully build a laboratory course which augments the more conventional lectures in a concepts-of-Metrology course at the Faculty of Engineering, Albaha University, KSA. Improvements to the package are still under way to incorporate Web-based technologies (Internet home page, HTML, Java programming, etc.). This Web-based education and training has been successfully class-tested within an undergraduate preliminary-year engineering course, and students reported a positive experience with its use. The use of these labs should be self-explanatory, and their reliable operation has been thoroughly tested.

id: 2207.08051
submitter: Samar Khanna
authors: Yezhen Cong, Samar Khanna, Chenlin Meng, Patrick Liu, Erik Rozi, Yutong He, Marshall Burke, David B. Lobell, Stefano Ermon
title: SatMAE: Pre-training Transformers for Temporal and Multi-Spectral Satellite Imagery
comments: Published at NeurIPS 2022. The first two listed names contributed equally to this project
journal-ref: null
doi: null
report-no: null
categories: cs.CV cs.AI
license: http://creativecommons.org/licenses/by/4.0/
versions: [ { "created": "Sun, 17 Jul 2022 01:35:29 GMT", "version": "v1" }, { "created": "Thu, 20 Oct 2022 01:04:57 GMT", "version": "v2" }, { "created": "Sun, 15 Jan 2023 19:27:57 GMT", "version": "v3" } ]
update_date: 2023-01-18
authors_parsed: [ [ "Cong", "Yezhen", "" ], [ "Khanna", "Samar", "" ], [ "Meng", "Chenlin", "" ], [ "Liu", "Patrick", "" ], [ "Rozi", "Erik", "" ], [ "He", "Yutong", "" ], [ "Burke", "Marshall", "" ], [ "Lobell", "David B.", "" ], [ "Ermon", "Stefano", "" ] ]
abstract (= orig_abstract): Unsupervised pre-training methods for large vision models have been shown to enhance performance on downstream supervised tasks. Developing similar techniques for satellite imagery presents significant opportunities, as unlabelled data is plentiful and the inherent temporal and multi-spectral structure provides avenues to further improve existing pre-training strategies. In this paper, we present SatMAE, a pre-training framework for temporal or multi-spectral satellite imagery based on Masked Autoencoder (MAE). To leverage temporal information, we include a temporal embedding along with independently masking image patches across time. In addition, we demonstrate that encoding multi-spectral data as groups of bands with distinct spectral positional encodings is beneficial. Our approach yields strong improvements over previous state-of-the-art techniques, both in terms of supervised learning performance on benchmark datasets (up to a 7% improvement), and transfer learning performance on downstream remote sensing tasks, including land cover classification (up to a 14% improvement) and semantic segmentation. Code and data are available on the project website: https://sustainlab-group.github.io/SatMAE/

id: 2406.07499
submitter: Lue Fan
authors: Lue Fan, Yuxue Yang, Minxing Li, Hongsheng Li and Zhaoxiang Zhang
title: Trim 3D Gaussian Splatting for Accurate Geometry Representation
comments: Project page: https://trimgs.github.io/
journal-ref: null
doi: null
report-no: null
categories: cs.CV cs.GR
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
versions: [ { "created": "Tue, 11 Jun 2024 17:34:46 GMT", "version": "v1" } ]
update_date: 2024-06-12
authors_parsed: [ [ "Fan", "Lue", "" ], [ "Yang", "Yuxue", "" ], [ "Li", "Minxing", "" ], [ "Li", "Hongsheng", "" ], [ "Zhang", "Zhaoxiang", "" ] ]
abstract (= orig_abstract): In this paper, we introduce Trim 3D Gaussian Splatting (TrimGS) to reconstruct accurate 3D geometry from images. Previous methods for geometry reconstruction from 3D Gaussians mainly focus on exploring strong geometry regularization. Instead, from a fresh perspective, we propose to obtain accurate 3D geometry of a scene by Gaussian trimming, which selectively removes the inaccurate geometry while preserving accurate structures. To achieve this, we analyze the contributions of individual 3D Gaussians and propose a contribution-based trimming strategy to remove the redundant or inaccurate Gaussians. Furthermore, our experimental and theoretical analyses reveal that a relatively small Gaussian scale is a non-negligible factor in representing and optimizing intricate details. Therefore, the proposed TrimGS maintains relatively small Gaussian scales. In addition, TrimGS is compatible with the effective geometry regularization strategies of previous methods. When combined with the original 3DGS and the state-of-the-art 2DGS, TrimGS consistently yields more accurate geometry and higher perceptual quality. Our project page is https://trimgs.github.io

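The contribution-based trimming idea reads naturally as a score-and-prune step. The scoring below (accumulated alpha-blending weight over training views) is a hypothetical stand-in for the paper's contribution measure, and all names and the keep ratio are invented.

```python
import numpy as np

def trim_by_contribution(gaussians, blend_weights, keep_ratio=0.9):
    """Drop the Gaussians that contribute least to the rendered training views.

    blend_weights: (n_views, n_gaussians) array holding each Gaussian's
    accumulated alpha-blending weight in each rendered training view.
    """
    contribution = blend_weights.sum(axis=0)              # total contribution
    cutoff = np.quantile(contribution, 1.0 - keep_ratio)  # e.g. trim bottom 10%
    return [g for g, c in zip(gaussians, contribution) if c > cutoff]
```
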
id: 2103.02828
submitter: David D. Fan
authors: David D. Fan, Kyohei Otsu, Yuki Kubo, Anushri Dixit, Joel Burdick, and Ali-Akbar Agha-Mohammadi
title: STEP: Stochastic Traversability Evaluation and Planning for Risk-Aware Off-road Navigation
comments: Accepted to Robotics: Science and Systems (RSS) 2021. Video link: https://youtu.be/N97cv4eH5c8
journal-ref: null
doi: null
report-no: null
categories: cs.RO cs.AI cs.SY eess.SY
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
versions: [ { "created": "Thu, 4 Mar 2021 04:24:19 GMT", "version": "v1" }, { "created": "Fri, 25 Jun 2021 19:45:43 GMT", "version": "v2" } ]
update_date: 2021-06-29
authors_parsed: [ [ "Fan", "David D.", "" ], [ "Otsu", "Kyohei", "" ], [ "Kubo", "Yuki", "" ], [ "Dixit", "Anushri", "" ], [ "Burdick", "Joel", "" ], [ "Agha-Mohammadi", "Ali-Akbar", "" ] ]
abstract (= orig_abstract): Although ground robotic autonomy has gained widespread usage in structured and controlled environments, autonomy in unknown and off-road terrain remains a difficult problem. Extreme, off-road, and unstructured environments such as undeveloped wilderness, caves, and rubble pose unique and challenging problems for autonomous navigation. To tackle these problems we propose an approach for assessing traversability and planning a safe, feasible, and fast trajectory in real-time. Our approach, which we name STEP (Stochastic Traversability Evaluation and Planning), relies on: 1) rapid uncertainty-aware mapping and traversability evaluation, 2) tail risk assessment using the Conditional Value-at-Risk (CVaR), and 3) efficient risk and constraint-aware kinodynamic motion planning using sequential quadratic programming-based (SQP) model predictive control (MPC). We analyze our method in simulation and validate its efficacy on wheeled and legged robotic platforms exploring extreme terrains including an abandoned subway and an underground lava tube.

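The tail-risk measure named in step 2, Conditional Value-at-Risk, has the standard definition below (textbook material rather than a detail specific to this paper). For a cost variable $X$ at level $\alpha$, with the conditional form holding for continuous $X$:

$$
\mathrm{CVaR}_{\alpha}(X)
= \mathbb{E}\big[\,X \mid X \ge \mathrm{VaR}_{\alpha}(X)\,\big]
= \min_{z \in \mathbb{R}} \Big\{ z + \tfrac{1}{1-\alpha}\, \mathbb{E}\big[(X - z)_{+}\big] \Big\},
$$

i.e., the expected cost over the worst $(1-\alpha)$ fraction of outcomes, which is what makes it suitable for planning against rare but severe traversability failures.
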
1906.09732
Amihood Amir
Amihood Amir and Itai Boneh
Dynamic Palindrome Detection
arXiv admin note: text overlap with arXiv:1806.02718 by other authors
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Lately, there has been growing interest in dynamic string matching problems. Specifically, the dynamic Longest Common Factor problem has been researched and some interesting results have been reached. In this paper we examine another classic string problem in a dynamic setting - finding the longest palindrome substring of a given string. We show that the longest palindrome can be maintained in poly-logarithmic time per symbol edit.
[ { "created": "Mon, 24 Jun 2019 05:33:41 GMT", "version": "v1" } ]
2019-06-25
[ [ "Amir", "Amihood", "" ], [ "Boneh", "Itai", "" ] ]
Lately, there has been growing interest in dynamic string matching problems. Specifically, the dynamic Longest Common Factor problem has been researched and some interesting results have been reached. In this paper we examine another classic string problem in a dynamic setting - finding the longest palindrome substring of a given string. We show that the longest palindrome can be maintained in poly-logarithmic time per symbol edit.
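For contrast with the dynamic result, a hedged static baseline: the classic expand-around-center computation of the longest palindromic substring runs in $O(n^2)$ and must be rerun from scratch after every symbol edit, which is exactly the cost the poly-logarithmic dynamic maintenance avoids:

```python
def longest_palindrome(s: str) -> str:
    """O(n^2) expand-around-center baseline; rerun from scratch per edit."""
    best = ""
    for i in range(len(s)):
        for lo, hi in ((i, i), (i, i + 1)):  # odd- and even-length centers
            while lo >= 0 and hi < len(s) and s[lo] == s[hi]:
                lo, hi = lo - 1, hi + 1
            cand = s[lo + 1:hi]
            if len(cand) > len(best):
                best = cand
    return best

print(longest_palindrome("abacabad"))  # "abacaba"
```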
2405.12205
Aniket Didolkar
Aniket Didolkar, Anirudh Goyal, Nan Rosemary Ke, Siyuan Guo, Michal Valko, Timothy Lillicrap, Danilo Rezende, Yoshua Bengio, Michael Mozer, Sanjeev Arora
Metacognitive Capabilities of LLMs: An Exploration in Mathematical Problem Solving
Preprint. Under review
null
null
null
cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
Metacognitive knowledge refers to humans' intuitive knowledge of their own thinking and reasoning processes. Today's best LLMs clearly possess some reasoning processes. The paper gives evidence that they also have metacognitive knowledge, including the ability to name skills and procedures to apply given a task. We explore this primarily in the context of math reasoning, developing a prompt-guided interaction procedure to get a powerful LLM to assign sensible skill labels to math questions, followed by having it perform semantic clustering to obtain coarser families of skill labels. These coarse skill labels look interpretable to humans. To validate that these skill labels are meaningful and relevant to the LLM's reasoning processes, we perform the following experiments. (a) We ask GPT-4 to assign skill labels to training questions in the math datasets GSM8K and MATH. (b) When using an LLM to solve the test questions, we present it with the full list of skill labels and ask it to identify the skill needed. Then it is presented with randomly selected exemplar solved questions associated with that skill label. This improves accuracy on GSM8K and MATH for several strong LLMs, including code-assisted models. The methodology presented is domain-agnostic, even though this article applies it to math problems.
[ { "created": "Mon, 20 May 2024 17:45:26 GMT", "version": "v1" } ]
2024-05-21
[ [ "Didolkar", "Aniket", "" ], [ "Goyal", "Anirudh", "" ], [ "Ke", "Nan Rosemary", "" ], [ "Guo", "Siyuan", "" ], [ "Valko", "Michal", "" ], [ "Lillicrap", "Timothy", "" ], [ "Rezende", "Danilo", "" ], [ "Bengio", "Yoshua", "" ], [ "Mozer", "Michael", "" ], [ "Arora", "Sanjeev", "" ] ]
Metacognitive knowledge refers to humans' intuitive knowledge of their own thinking and reasoning processes. Today's best LLMs clearly possess some reasoning processes. The paper gives evidence that they also have metacognitive knowledge, including the ability to name skills and procedures to apply given a task. We explore this primarily in the context of math reasoning, developing a prompt-guided interaction procedure to get a powerful LLM to assign sensible skill labels to math questions, followed by having it perform semantic clustering to obtain coarser families of skill labels. These coarse skill labels look interpretable to humans. To validate that these skill labels are meaningful and relevant to the LLM's reasoning processes, we perform the following experiments. (a) We ask GPT-4 to assign skill labels to training questions in the math datasets GSM8K and MATH. (b) When using an LLM to solve the test questions, we present it with the full list of skill labels and ask it to identify the skill needed. Then it is presented with randomly selected exemplar solved questions associated with that skill label. This improves accuracy on GSM8K and MATH for several strong LLMs, including code-assisted models. The methodology presented is domain-agnostic, even though this article applies it to math problems.
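A loose sketch of how such a label-then-retrieve loop could be wired up; `call_llm` is a hypothetical stand-in for whatever chat-completion client is used, and the prompt wording is an assumption rather than the paper's:

```python
def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a chat-completion API call."""
    raise NotImplementedError

def label_skill(question: str) -> str:
    prompt = (
        "Name the single math skill needed to solve the following problem. "
        f"Answer with a short skill label only.\n\nProblem: {question}"
    )
    return call_llm(prompt).strip().lower()

def solve_with_skill_exemplars(question: str, skill_to_examples: dict) -> str:
    # Retrieve solved exemplars that share the predicted skill label and
    # prepend them as in-context examples.
    skill = label_skill(question)
    exemplars = skill_to_examples.get(skill, [])[:4]
    prompt = "\n\n".join(exemplars + [f"Problem: {question}\nSolution:"])
    return call_llm(prompt)
```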
1707.04304
Charilaos Mylonas Mr.
Charilaos Mylonas, Valentin Bemetz and Eleni Chatzi
Multiscale Surrogate Modeling and Uncertainty Quantification for Periodic Composite Structures
Appeared in UNCECOMP 2017
null
null
null
cs.CE
http://creativecommons.org/licenses/by/4.0/
Computational modeling of the structural behavior of continuous fiber composite materials often takes into account the periodicity of the underlying micro-structure. A well-established method dealing with the structural behavior of periodic micro-structures is the so-called Asymptotic Expansion Homogenization (AEH). By considering a periodic perturbation of the material displacement, scale bridging functions, also referred to as elastic correctors, can be derived in order to connect the strains at the level of the macro-structure with micro-structural strains. For complicated inhomogeneous micro-structures, the derivation of such functions is usually performed by the numerical solution of a PDE problem - typically with the Finite Element Method. Moreover, when dealing with uncertain micro-structural geometry and material parameters, there is considerable uncertainty introduced in the actual stresses experienced by the materials. Due to the high computational cost of computing the elastic correctors, a pure Monte-Carlo approach for dealing with the inevitable material and geometric uncertainties is clearly computationally intractable. This problem is even more pronounced when the effect of damage in the micro-scale is considered, where re-evaluation of the micro-structural representative volume element is necessary for every occurring damage. The novelty in this paper is that a non-intrusive surrogate modeling approach is employed with the purpose of directly bridging the macro-scale behavior of the structure with the material behavior in the micro-scale, therefore reducing the number of costly evaluations of corrector functions and allowing for future developments on the incorporation of fatigue or static damage in the analysis of composite structural components.
[ { "created": "Tue, 11 Jul 2017 17:45:22 GMT", "version": "v1" } ]
2017-07-17
[ [ "Mylonas", "Charilaos", "" ], [ "Bemetz", "Valentin", "" ], [ "Chatzi", "Eleni", "" ] ]
Computational modeling of the structural behavior of continuous fiber composite materials often takes into account the periodicity of the underlying micro-structure. A well-established method dealing with the structural behavior of periodic micro-structures is the so-called Asymptotic Expansion Homogenization (AEH). By considering a periodic perturbation of the material displacement, scale bridging functions, also referred to as elastic correctors, can be derived in order to connect the strains at the level of the macro-structure with micro-structural strains. For complicated inhomogeneous micro-structures, the derivation of such functions is usually performed by the numerical solution of a PDE problem - typically with the Finite Element Method. Moreover, when dealing with uncertain micro-structural geometry and material parameters, there is considerable uncertainty introduced in the actual stresses experienced by the materials. Due to the high computational cost of computing the elastic correctors, a pure Monte-Carlo approach for dealing with the inevitable material and geometric uncertainties is clearly computationally intractable. This problem is even more pronounced when the effect of damage in the micro-scale is considered, where re-evaluation of the micro-structural representative volume element is necessary for every occurring damage. The novelty in this paper is that a non-intrusive surrogate modeling approach is employed with the purpose of directly bridging the macro-scale behavior of the structure with the material behavior in the micro-scale, therefore reducing the number of costly evaluations of corrector functions and allowing for future developments on the incorporation of fatigue or static damage in the analysis of composite structural components.
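For orientation, the two-scale ansatz behind AEH can be written compactly; this is the textbook form in standard notation (not necessarily the authors' notation), with the fast variable $y = x/\varepsilon$:

$$ u^{\varepsilon}(x) = u_0(x) + \varepsilon\, u_1\!\left(x, \tfrac{x}{\varepsilon}\right) + \varepsilon^{2}\, u_2\!\left(x, \tfrac{x}{\varepsilon}\right) + \cdots $$

Here $u_0$ is the macroscopic field, and the first-order term is assembled from $y$-periodic corrector functions $\chi^{kl}(y)$, e.g. $u_1(x,y) = -\chi^{kl}(y)\, \varepsilon_{kl}(u_0)(x)$; these correctors are exactly the quantities obtained by solving a cell problem with FEM, whose cost the surrogate model is meant to amortize.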
1805.03151
Davide Giacomo Cavezza
Davide G. Cavezza, Dalal Alrajeh, Andr\'as Gy\"orgy
A Weakness Measure for GR(1) Formulae
To appear in FM2018 proceedings
null
null
null
cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In spite of the theoretical and algorithmic developments for system synthesis in recent years, little effort has been dedicated to quantifying the quality of the specifications used for synthesis. When dealing with unrealizable specifications, finding the weakest environment assumptions that would ensure realizability is typically a desirable property; in such a context, the weakness of the assumptions is a major quality parameter. The question of whether one assumption is weaker than another is commonly interpreted using implication or, equivalently, language inclusion. However, this interpretation does not provide any further insight into the weakness of assumptions when implication does not hold. To our knowledge, the only measure that is capable of comparing two formulae in this case is entropy, but even it fails to provide a sufficiently refined notion of weakness in the case of GR(1) formulae, a subset of linear temporal logic formulae which is of particular interest in controller synthesis. In this paper we propose a more refined measure of weakness based on the Hausdorff dimension, a concept that captures the notion of size of the omega-language satisfying a linear temporal logic formula. We identify the conditions under which this measure is guaranteed to distinguish between weaker and stronger GR(1) formulae. We evaluate our proposed weakness measure in the context of computing GR(1) assumption refinements.
[ { "created": "Tue, 8 May 2018 16:39:31 GMT", "version": "v1" } ]
2018-05-09
[ [ "Cavezza", "Davide G.", "" ], [ "Alrajeh", "Dalal", "" ], [ "György", "András", "" ] ]
In spite of the theoretical and algorithmic developments for system synthesis in recent years, little effort has been dedicated to quantifying the quality of the specifications used for synthesis. When dealing with unrealizable specifications, finding the weakest environment assumptions that would ensure realizability is typically a desirable property; in such a context, the weakness of the assumptions is a major quality parameter. The question of whether one assumption is weaker than another is commonly interpreted using implication or, equivalently, language inclusion. However, this interpretation does not provide any further insight into the weakness of assumptions when implication does not hold. To our knowledge, the only measure that is capable of comparing two formulae in this case is entropy, but even it fails to provide a sufficiently refined notion of weakness in the case of GR(1) formulae, a subset of linear temporal logic formulae which is of particular interest in controller synthesis. In this paper we propose a more refined measure of weakness based on the Hausdorff dimension, a concept that captures the notion of size of the omega-language satisfying a linear temporal logic formula. We identify the conditions under which this measure is guaranteed to distinguish between weaker and stronger GR(1) formulae. We evaluate our proposed weakness measure in the context of computing GR(1) assumption refinements.
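As a rough numeric illustration of the "size" notion involved: for suitable omega-regular languages, entropy (and, after normalization, Hausdorff dimension) can be read off the spectral radius of the accepting automaton's adjacency matrix (Staiger's classical connection). A hedged numpy sketch with a toy automaton; the matrix and the choice of log base are illustrative assumptions, and this is not the paper's algorithm:

```python
import numpy as np

# Toy 2-state automaton over a 2-letter alphabet; entry (i, j) counts the
# transitions from state i to state j (the golden-mean shift: no two
# consecutive occurrences of the second letter).
A = np.array([[1, 1],
              [1, 0]])

spectral_radius = max(abs(np.linalg.eigvals(A)))
entropy = np.log2(spectral_radius)  # log base = alphabet size (here 2)
print(entropy)                      # ~0.694
```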
2308.01471
Ben Agro
Ben Agro, Quinlan Sykora, Sergio Casas, Raquel Urtasun
Implicit Occupancy Flow Fields for Perception and Prediction in Self-Driving
19 pages, 13 figures
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023, pp. 1379-1388
null
null
cs.CV cs.AI cs.LG cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A self-driving vehicle (SDV) must be able to perceive its surroundings and predict the future behavior of other traffic participants. Existing works either perform object detection followed by trajectory forecasting of the detected objects, or predict dense occupancy and flow grids for the whole scene. The former poses a safety concern as the number of detections needs to be kept low for efficiency reasons, sacrificing object recall. The latter is computationally expensive due to the high-dimensionality of the output grid, and suffers from the limited receptive field inherent to fully convolutional networks. Furthermore, both approaches employ many computational resources predicting areas or objects that might never be queried by the motion planner. This motivates our unified approach to perception and future prediction that implicitly represents occupancy and flow over time with a single neural network. Our method avoids unnecessary computation, as it can be directly queried by the motion planner at continuous spatio-temporal locations. Moreover, we design an architecture that overcomes the limited receptive field of previous explicit occupancy prediction methods by adding an efficient yet effective global attention mechanism. Through extensive experiments in both urban and highway settings, we demonstrate that our implicit model outperforms the current state-of-the-art. For more information, visit the project website: https://waabi.ai/research/implicito.
[ { "created": "Wed, 2 Aug 2023 23:39:24 GMT", "version": "v1" } ]
2023-08-04
[ [ "Agro", "Ben", "" ], [ "Sykora", "Quinlan", "" ], [ "Casas", "Sergio", "" ], [ "Urtasun", "Raquel", "" ] ]
A self-driving vehicle (SDV) must be able to perceive its surroundings and predict the future behavior of other traffic participants. Existing works either perform object detection followed by trajectory forecasting of the detected objects, or predict dense occupancy and flow grids for the whole scene. The former poses a safety concern as the number of detections needs to be kept low for efficiency reasons, sacrificing object recall. The latter is computationally expensive due to the high-dimensionality of the output grid, and suffers from the limited receptive field inherent to fully convolutional networks. Furthermore, both approaches employ many computational resources predicting areas or objects that might never be queried by the motion planner. This motivates our unified approach to perception and future prediction that implicitly represents occupancy and flow over time with a single neural network. Our method avoids unnecessary computation, as it can be directly queried by the motion planner at continuous spatio-temporal locations. Moreover, we design an architecture that overcomes the limited receptive field of previous explicit occupancy prediction methods by adding an efficient yet effective global attention mechanism. Through extensive experiments in both urban and highway settings, we demonstrate that our implicit model outperforms the current state-of-the-art. For more information, visit the project website: https://waabi.ai/research/implicito.
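The abstract's central mechanism, a single network queried at continuous spatio-temporal locations, can be sketched minimally as below. The layer sizes, the flat scene feature, and all names are illustrative assumptions, not the paper's architecture (which additionally uses an efficient global attention mechanism):

```python
import torch
import torch.nn as nn

class OccupancyFlowField(nn.Module):
    """Minimal implicit decoder: (x, y, t) query -> (occupancy, flow_x, flow_y)."""
    def __init__(self, scene_dim=128, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(scene_dim + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),  # [occ_logit, flow_x, flow_y]
        )

    def forward(self, scene_feat, queries):
        # scene_feat: (B, scene_dim) context; queries: (B, N, 3) points (x, y, t)
        ctx = scene_feat.unsqueeze(1).expand(-1, queries.shape[1], -1)
        out = self.mlp(torch.cat([ctx, queries], dim=-1))
        occ = torch.sigmoid(out[..., :1])  # occupancy probability
        flow = out[..., 1:]                # 2-D flow vector
        return occ, flow

model = OccupancyFlowField()
occ, flow = model(torch.randn(2, 128), torch.rand(2, 100, 3))
print(occ.shape, flow.shape)  # torch.Size([2, 100, 1]) torch.Size([2, 100, 2])
```

The point of the implicit formulation is visible in the interface: the motion planner supplies only the query points it cares about, instead of the network rasterizing a full dense grid.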
2302.13941
Noah Klarmann
Deepak Vivekanandan, Samuel Wirth, Patrick Karlbauer, Noah Klarmann
A Reinforcement Learning Approach for Scheduling Problems With Improved Generalization Through Order Swapping
null
null
null
null
cs.AI cs.LG
http://creativecommons.org/licenses/by-nc-sa/4.0/
The scheduling of production resources (such as associating jobs to machines) plays a vital role for the manufacturing industry, not only for saving energy but also for increasing overall efficiency. Among the different job scheduling problems, the JSSP is addressed in this work. JSSP falls into the category of NP-hard COP, in which solving the problem through exhaustive search becomes infeasible. Simple heuristics such as FIFO and LPT, and metaheuristics such as Tabu search, are often adopted to solve the problem by truncating the search space. These methods become inefficient for large problem sizes, as they are either far from the optimum or time-consuming. In recent years, research on using DRL to solve COP has gained interest and has shown promising results in terms of solution quality and computational efficiency. In this work, we provide a novel approach to solve the JSSP, examining the objectives of generalization and solution effectiveness using DRL. In particular, we employ the PPO algorithm, which adopts the policy-gradient paradigm that is found to perform well in the constrained dispatching of jobs. We incorporate an OSM in the environment to achieve better generalized learning of the problem. The performance of the presented approach is analyzed in depth by using a set of available benchmark instances and comparing our results with the work of other groups.
[ { "created": "Mon, 27 Feb 2023 16:45:04 GMT", "version": "v1" }, { "created": "Mon, 6 Mar 2023 14:44:40 GMT", "version": "v2" } ]
2023-03-07
[ [ "Vivekanandan", "Deepak", "" ], [ "Wirth", "Samuel", "" ], [ "Karlbauer", "Patrick", "" ], [ "Klarmann", "Noah", "" ] ]
The scheduling of production resources (such as associating jobs to machines) plays a vital role for the manufacturing industry, not only for saving energy but also for increasing overall efficiency. Among the different job scheduling problems, the JSSP is addressed in this work. JSSP falls into the category of NP-hard COP, in which solving the problem through exhaustive search becomes infeasible. Simple heuristics such as FIFO and LPT, and metaheuristics such as Tabu search, are often adopted to solve the problem by truncating the search space. These methods become inefficient for large problem sizes, as they are either far from the optimum or time-consuming. In recent years, research on using DRL to solve COP has gained interest and has shown promising results in terms of solution quality and computational efficiency. In this work, we provide a novel approach to solve the JSSP, examining the objectives of generalization and solution effectiveness using DRL. In particular, we employ the PPO algorithm, which adopts the policy-gradient paradigm that is found to perform well in the constrained dispatching of jobs. We incorporate an OSM in the environment to achieve better generalized learning of the problem. The performance of the presented approach is analyzed in depth by using a set of available benchmark instances and comparing our results with the work of other groups.
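The abstract does not define the OSM beyond its role; reading it, per the title, as an order-swapping mechanism, a hedged minimal sketch is a perturbation that exchanges two jobs in the presented sequence so the agent does not overfit to one ordering (all names here are illustrative assumptions):

```python
import random

def order_swap(job_sequence, rng=None):
    """Hedged sketch of an order-swapping perturbation: exchange two jobs so
    the agent sees reshuffled instances and learns order-robust policies."""
    rng = rng or random.Random()
    seq = list(job_sequence)
    i, j = rng.sample(range(len(seq)), 2)
    seq[i], seq[j] = seq[j], seq[i]
    return seq

print(order_swap([0, 1, 2, 3, 4]))  # a copy with two positions exchanged
```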
2408.00106
Xudong Xie
Xudong Xie, Yuzhe Li, Yang Liu, Zhifei Zhang, Zhaowen Wang, Wei Xiong, Xiang Bai
WAS: Dataset and Methods for Artistic Text Segmentation
Accepted by ECCV 2024
null
null
null
cs.CV cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Accurate text segmentation results are crucial for text-related generative tasks, such as text image generation, text editing, text removal, and text style transfer. Recently, some scene text segmentation methods have made significant progress in segmenting regular text. However, these methods perform poorly in scenarios containing artistic text. Therefore, this paper focuses on the more challenging task of artistic text segmentation and constructs a real artistic text segmentation dataset. One challenge of the task is that the local stroke shapes of artistic text are highly variable, exhibiting great diversity and complexity. We propose a decoder with the layer-wise momentum query to prevent the model from ignoring stroke regions of special shapes. Another challenge is the complexity of the global topological structure. We further design a skeleton-assisted head to guide the model to focus on the global structure. Additionally, to enhance the generalization performance of the text segmentation model, we propose a strategy for training data synthesis, based on the large multi-modal model and the diffusion model. Experimental results show that our proposed method and synthetic dataset can significantly enhance the performance of artistic text segmentation and achieve state-of-the-art results on other public datasets.
[ { "created": "Wed, 31 Jul 2024 18:29:36 GMT", "version": "v1" } ]
2024-08-02
[ [ "Xie", "Xudong", "" ], [ "Li", "Yuzhe", "" ], [ "Liu", "Yang", "" ], [ "Zhang", "Zhifei", "" ], [ "Wang", "Zhaowen", "" ], [ "Xiong", "Wei", "" ], [ "Bai", "Xiang", "" ] ]
Accurate text segmentation results are crucial for text-related generative tasks, such as text image generation, text editing, text removal, and text style transfer. Recently, some scene text segmentation methods have made significant progress in segmenting regular text. However, these methods perform poorly in scenarios containing artistic text. Therefore, this paper focuses on the more challenging task of artistic text segmentation and constructs a real artistic text segmentation dataset. One challenge of the task is that the local stroke shapes of artistic text are highly variable, exhibiting great diversity and complexity. We propose a decoder with the layer-wise momentum query to prevent the model from ignoring stroke regions of special shapes. Another challenge is the complexity of the global topological structure. We further design a skeleton-assisted head to guide the model to focus on the global structure. Additionally, to enhance the generalization performance of the text segmentation model, we propose a strategy for training data synthesis, based on the large multi-modal model and the diffusion model. Experimental results show that our proposed method and synthetic dataset can significantly enhance the performance of artistic text segmentation and achieve state-of-the-art results on other public datasets.
1810.04118
Mehdi Mohammadi
Mehdi Mohammadi, Ala Al-Fuqaha, Mohsen Guizani, Jun-Seok Oh
Semi-supervised Deep Reinforcement Learning in Support of IoT and Smart City Services
11 pages, 7 figures. Accepted for publication in IEEE Internet of Things Journal
IEEE Internet of Things Journal, Volume 5, Issue 2, 2018
10.1109/JIOT.2017.2712560
null
cs.NI cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Smart services are an important element of the smart cities and the Internet of Things (IoT) ecosystems where the intelligence behind the services is obtained and improved through the sensory data. Providing a large amount of training data is not always feasible; therefore, we need to consider alternative ways that incorporate unlabeled data as well. In recent years, deep reinforcement learning (DRL) has achieved great success in several application domains. It is an applicable method for IoT and smart city scenarios where auto-generated data can be partially labeled by users' feedback for training purposes. In this paper, we propose a semi-supervised deep reinforcement learning model that fits smart city applications as it consumes both labeled and unlabeled data to improve the performance and accuracy of the learning agent. The model utilizes Variational Autoencoders (VAE) as the inference engine for generalizing optimal policies. To the best of our knowledge, the proposed model is the first investigation that extends deep reinforcement learning to the semi-supervised paradigm. As a case study of smart city applications, we focus on smart buildings and apply the proposed model to the problem of indoor localization based on BLE signal strength. Indoor localization is a key component of smart city services, since people spend significant time in indoor environments. Our model learns the best action policies that lead to a close estimation of the target locations with an improvement of 23% in terms of distance to the target and at least 67% more received rewards compared to the supervised DRL model.
[ { "created": "Tue, 9 Oct 2018 16:47:25 GMT", "version": "v1" } ]
2018-10-10
[ [ "Mohammadi", "Mehdi", "" ], [ "Al-Fuqaha", "Ala", "" ], [ "Guizani", "Mohsen", "" ], [ "Oh", "Jun-Seok", "" ] ]
Smart services are an important element of the smart cities and the Internet of Things (IoT) ecosystems where the intelligence behind the services is obtained and improved through the sensory data. Providing a large amount of training data is not always feasible; therefore, we need to consider alternative ways that incorporate unlabeled data as well. In recent years, deep reinforcement learning (DRL) has achieved great success in several application domains. It is an applicable method for IoT and smart city scenarios where auto-generated data can be partially labeled by users' feedback for training purposes. In this paper, we propose a semi-supervised deep reinforcement learning model that fits smart city applications as it consumes both labeled and unlabeled data to improve the performance and accuracy of the learning agent. The model utilizes Variational Autoencoders (VAE) as the inference engine for generalizing optimal policies. To the best of our knowledge, the proposed model is the first investigation that extends deep reinforcement learning to the semi-supervised paradigm. As a case study of smart city applications, we focus on smart buildings and apply the proposed model to the problem of indoor localization based on BLE signal strength. Indoor localization is a key component of smart city services, since people spend significant time in indoor environments. Our model learns the best action policies that lead to a close estimation of the target locations with an improvement of 23% in terms of distance to the target and at least 67% more received rewards compared to the supervised DRL model.
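For reference, the VAE inference engine mentioned above is trained by optimizing the evidence lower bound; a minimal PyTorch sketch of the standard negative-ELBO loss (how it is coupled to the DRL agent is not specified by the abstract and is omitted here; shapes are toy assumptions):

```python
import torch
import torch.nn.functional as F

def vae_loss(recon_x, x, mu, logvar):
    """Negative ELBO: reconstruction error plus KL(q(z|x) || N(0, I))."""
    recon = F.mse_loss(recon_x, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl

# Toy shapes only; in the paper's setting x would be a (partially labeled)
# BLE signal-strength observation.
x = torch.randn(8, 16)
loss = vae_loss(x + 0.1 * torch.randn_like(x), x,
                torch.zeros(8, 4), torch.zeros(8, 4))
print(loss.item())
```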
1407.7143
Tanmay Sinha
Tanmay Sinha
"Your click decides your fate": Leveraging clickstream patterns from MOOC videos to infer students' information processing & attrition behavior
Undergraduate (B.Tech, Computer Science) Thesis Report, 2014, Vellore Institute of Technology, India
null
null
null
cs.HC cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With an expansive and ubiquitously available gold mine of educational data, Massive Open Online Courses (MOOCs) have become an important focus of learning analytics research. The hope is that this new surge of development will bring the vision of equitable access to lifelong learning opportunities within practical reach. MOOCs offer many valuable learning experiences to students, from video lectures, readings, assignments and exams, to opportunities to connect and collaborate with others through threaded discussion forums and other Web 2.0 technologies. Nevertheless, despite all this potential, MOOCs have so far failed to produce evidence that this potential is being realized in the current instantiation of MOOCs. In this work, we primarily explore video lecture interaction in MOOCs, which is central to the student learning experience on these educational platforms. As a research contribution, we operationalize video lecture clickstreams of students into behavioral actions, and construct a quantitative information processing index that can aid instructors to better understand MOOC hurdles and reason about unsatisfactory learning outcomes. Our results illuminate the effectiveness of developing such a metric inspired by cognitive psychology, towards answering critical questions regarding students' engagement, their future click interactions and participation trajectories that lead to in-video dropouts. We leverage recurring click behaviors to differentiate distinct video watching profiles for students in MOOCs. Additionally, we discuss the prediction of complete course dropout, incorporating diverse perspectives from statistics and machine learning, to offer a more nuanced view into how the second generation of MOOCs might benefit if course instructors better comprehended the factors that lead to student attrition.
[ { "created": "Sat, 26 Jul 2014 17:53:58 GMT", "version": "v1" } ]
2014-07-29
[ [ "Sinha", "Tanmay", "" ] ]
With an expansive and ubiquitously available gold mine of educational data, Massive Open Online Courses (MOOCs) have become an important focus of learning analytics research. The hope is that this new surge of development will bring the vision of equitable access to lifelong learning opportunities within practical reach. MOOCs offer many valuable learning experiences to students, from video lectures, readings, assignments and exams, to opportunities to connect and collaborate with others through threaded discussion forums and other Web 2.0 technologies. Nevertheless, despite all this potential, MOOCs have so far failed to produce evidence that this potential is being realized in the current instantiation of MOOCs. In this work, we primarily explore video lecture interaction in MOOCs, which is central to the student learning experience on these educational platforms. As a research contribution, we operationalize video lecture clickstreams of students into behavioral actions, and construct a quantitative information processing index that can aid instructors to better understand MOOC hurdles and reason about unsatisfactory learning outcomes. Our results illuminate the effectiveness of developing such a metric inspired by cognitive psychology, towards answering critical questions regarding students' engagement, their future click interactions and participation trajectories that lead to in-video dropouts. We leverage recurring click behaviors to differentiate distinct video watching profiles for students in MOOCs. Additionally, we discuss the prediction of complete course dropout, incorporating diverse perspectives from statistics and machine learning, to offer a more nuanced view into how the second generation of MOOCs might benefit if course instructors better comprehended the factors that lead to student attrition.
2406.15163
Muhammad Imran
Muhammad Imran, Olga Kellert, Carlos G\'omez-Rodr\'iguez
A Syntax-Injected Approach for Faster and More Accurate Sentiment Analysis
null
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Sentiment Analysis (SA) is a crucial aspect of Natural Language Processing (NLP), addressing subjective assessments in textual content. Syntactic parsing is useful in SA because explicit syntactic information can improve accuracy while providing explainability, but it tends to be a computational bottleneck in practice due to the slowness of parsing algorithms. This paper addresses said bottleneck by using a SEquence Labeling Syntactic Parser (SELSP) to inject syntax into SA. By treating dependency parsing as a sequence labeling problem, we greatly enhance the speed of syntax-based SA. SELSP is trained and evaluated on a ternary polarity classification task, demonstrating its faster performance and better accuracy in polarity prediction tasks compared to conventional parsers like Stanza and to heuristic approaches that use shallow syntactic rules for SA like VADER. This increased speed and improved accuracy make SELSP particularly appealing to SA practitioners in both research and industry. In addition, we test several sentiment dictionaries on our SELSP to see which one improves the performance in polarity prediction tasks. Moreover, we compare the SELSP with Transformer-based models trained on a 5-label classification task. The results show that dictionaries that capture polarity judgment variation provide better results than dictionaries that ignore polarity judgment variation. Moreover, we show that SELSP is considerably faster than Transformer-based models in polarity prediction tasks.
[ { "created": "Fri, 21 Jun 2024 14:08:25 GMT", "version": "v1" } ]
2024-06-24
[ [ "Imran", "Muhammad", "" ], [ "Kellert", "Olga", "" ], [ "Gómez-Rodríguez", "Carlos", "" ] ]
Sentiment Analysis (SA) is a crucial aspect of Natural Language Processing (NLP), addressing subjective assessments in textual content. Syntactic parsing is useful in SA because explicit syntactic information can improve accuracy while providing explainability, but it tends to be a computational bottleneck in practice due to the slowness of parsing algorithms. This paper addresses said bottleneck by using a SEquence Labeling Syntactic Parser (SELSP) to inject syntax into SA. By treating dependency parsing as a sequence labeling problem, we greatly enhance the speed of syntax-based SA. SELSP is trained and evaluated on a ternary polarity classification task, demonstrating its faster performance and better accuracy in polarity prediction tasks compared to conventional parsers like Stanza and to heuristic approaches that use shallow syntactic rules for SA like VADER. This increased speed and improved accuracy make SELSP particularly appealing to SA practitioners in both research and industry. In addition, we test several sentiment dictionaries on our SELSP to see which one improves the performance in polarity prediction tasks. Moreover, we compare the SELSP with Transformer-based models trained on a 5-label classification task. The results show that dictionaries that capture polarity judgment variation provide better results than dictionaries that ignore polarity judgment variation. Moreover, we show that SELSP is considerably faster than Transformer-based models in polarity prediction tasks.
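SELSP's exact label scheme is not given in the abstract; as a hedged illustration of the general recipe (dependency parsing cast as sequence labeling), one common encoding tags each token with its head's relative offset plus the dependency relation, turning parsing into per-token classification:

```python
def encode_as_labels(heads, relations):
    """One token-level label per word: '<relative head offset>:<relation>'.

    heads[i] is the 0-based index of word i's head (-1 for root).
    """
    labels = []
    for i, (h, rel) in enumerate(zip(heads, relations)):
        offset = "root" if h == -1 else str(h - i)
        labels.append(f"{offset}:{rel}")
    return labels

# "She loves cats": 'loves' is the root; 'She' and 'cats' attach to it.
print(encode_as_labels([1, -1, 1], ["nsubj", "root", "obj"]))
# ['1:nsubj', 'root:root', '-1:obj']
```

Because a standard sequence tagger emits these labels in a single pass, the parse comes at tagging speed, which is the source of the claimed speedup over conventional parsers.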
2112.06300
Zachary Ferguson
David Belgrod, Bolun Wang, Zachary Ferguson, Xin Zhao, Marco Attene, Daniele Panozzo, Teseo Schneider
Time of Impact Dataset for Continuous Collision Detection and a Scalable Conservative Algorithm
null
null
null
null
cs.GR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce a large-scale benchmark for broad- and narrow-phase continuous collision detection (CCD) over linearized trajectories with exact time of impacts and use it to evaluate the accuracy, correctness, and efficiency of 13 state-of-the-art CCD algorithms. Our analysis shows that several methods exhibit problems either in efficiency or accuracy. To overcome these limitations, we introduce an algorithm for CCD designed to be scalable on modern parallel architectures and provably correct when implemented using floating point arithmetic. We integrate our algorithm within the Incremental Potential Contact solver [Li et al. 2021] and evaluate its impact on various simulation scenarios. Our approach includes a broad-phase CCD to quickly filter out primitives having disjoint bounding boxes and a narrow-phase CCD that establishes whether the remaining primitive pairs indeed collide. Our broad-phase algorithm is efficient and scalable thanks to the experimental observation that sweeping along a coordinate axis performs surprisingly well on modern parallel architectures. For narrow-phase CCD, we re-design the recently proposed interval-based algorithm of Wang et al. [2021] to work on massively parallel hardware. To foster the adoption and development of future linear CCD algorithms, and to evaluate their correctness, scalability, and overall performance, we release the dataset with analytic ground truth, the implementation of all the algorithms tested, and our testing framework.
[ { "created": "Sun, 12 Dec 2021 18:47:55 GMT", "version": "v1" }, { "created": "Tue, 1 Feb 2022 00:45:48 GMT", "version": "v2" }, { "created": "Mon, 22 Aug 2022 21:56:18 GMT", "version": "v3" }, { "created": "Sun, 13 Aug 2023 08:02:00 GMT", "version": "v4" } ]
2023-08-15
[ [ "Belgrod", "David", "" ], [ "Wang", "Bolun", "" ], [ "Ferguson", "Zachary", "" ], [ "Zhao", "Xin", "" ], [ "Attene", "Marco", "" ], [ "Panozzo", "Daniele", "" ], [ "Schneider", "Teseo", "" ] ]
We introduce a large-scale benchmark for broad- and narrow-phase continuous collision detection (CCD) over linearized trajectories with exact time of impacts and use it to evaluate the accuracy, correctness, and efficiency of 13 state-of-the-art CCD algorithms. Our analysis shows that several methods exhibit problems either in efficiency or accuracy. To overcome these limitations, we introduce an algorithm for CCD designed to be scalable on modern parallel architectures and provably correct when implemented using floating point arithmetic. We integrate our algorithm within the Incremental Potential Contact solver [Li et al. 2021] and evaluate its impact on various simulation scenarios. Our approach includes a broad-phase CCD to quickly filter out primitives having disjoint bounding boxes and a narrow-phase CCD that establishes whether the remaining primitive pairs indeed collide. Our broad-phase algorithm is efficient and scalable thanks to the experimental observation that sweeping along a coordinate axis performs surprisingly well on modern parallel architectures. For narrow-phase CCD, we re-design the recently proposed interval-based algorithm of Wang et al. [2021] to work on massively parallel hardware. To foster the adoption and development of future linear CCD algorithms, and to evaluate their correctness, scalability, and overall performance, we release the dataset with analytic ground truth, the implementation of all the algorithms tested, and our testing framework.
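The broad-phase observation, sweeping along a single coordinate axis, corresponds to the classic sweep-and-prune pattern; a minimal single-threaded Python sketch (the paper's parallel, provably conservative implementation is far more involved):

```python
def broad_phase_sweep(boxes):
    """Sweep along the x-axis to find pairs whose AABBs overlap on all axes.

    boxes: list of (min_corner, max_corner) tuples of 3-vectors.
    """
    order = sorted(range(len(boxes)), key=lambda i: boxes[i][0][0])
    active, pairs = [], []
    for i in order:
        lo_i, hi_i = boxes[i]
        # Prune boxes that ended before this one starts along x.
        active = [j for j in active if boxes[j][1][0] >= lo_i[0]]
        for j in active:
            lo_j, hi_j = boxes[j]
            if all(lo_i[k] <= hi_j[k] and lo_j[k] <= hi_i[k] for k in range(3)):
                pairs.append((j, i))
        active.append(i)
    return pairs

print(broad_phase_sweep([((0, 0, 0), (1, 1, 1)),
                         ((0.5, 0.5, 0.5), (2, 2, 2)),
                         ((3, 0, 0), (4, 1, 1))]))
# [(0, 1)]
```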
2304.03893
Naoki Wake
Naoki Wake, Atsushi Kanehira, Kazuhiro Sasabuchi, Jun Takamatsu, Katsushi Ikeuchi
ChatGPT Empowered Long-Step Robot Control in Various Environments: A Case Application
21 figures, 7 tables. Published in IEEE Access (in press). Last updated August 29th, 2023
null
10.1109/ACCESS.2023.3310935
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper demonstrates how OpenAI's ChatGPT can be used in a few-shot setting to convert natural language instructions into a sequence of executable robot actions. The paper proposes easy-to-customize input prompts for ChatGPT that meet common requirements in practical applications, such as easy integration with robot execution systems and applicability to various environments while minimizing the impact of ChatGPT's token limit. The prompts encourage ChatGPT to output a sequence of predefined robot actions, represent the operating environment in a formalized style, and infer the updated state of the operating environment. Experiments confirmed that the proposed prompts enable ChatGPT to act according to requirements in various environments, and users can adjust ChatGPT's output with natural language feedback for safe and robust operation. The proposed prompts and source code are open-source and publicly available at https://github.com/microsoft/ChatGPT-Robot-Manipulation-Prompts
[ { "created": "Sat, 8 Apr 2023 02:41:40 GMT", "version": "v1" }, { "created": "Tue, 11 Apr 2023 14:29:41 GMT", "version": "v2" }, { "created": "Tue, 18 Apr 2023 03:18:42 GMT", "version": "v3" }, { "created": "Mon, 3 Jul 2023 12:04:21 GMT", "version": "v4" }, { "created": "Wed, 5 Jul 2023 21:20:20 GMT", "version": "v5" }, { "created": "Wed, 30 Aug 2023 03:38:22 GMT", "version": "v6" } ]
2023-08-31
[ [ "Wake", "Naoki", "" ], [ "Kanehira", "Atsushi", "" ], [ "Sasabuchi", "Kazuhiro", "" ], [ "Takamatsu", "Jun", "" ], [ "Ikeuchi", "Katsushi", "" ] ]
This paper demonstrates how OpenAI's ChatGPT can be used in a few-shot setting to convert natural language instructions into a sequence of executable robot actions. The paper proposes easy-to-customize input prompts for ChatGPT that meet common requirements in practical applications, such as easy integration with robot execution systems and applicability to various environments while minimizing the impact of ChatGPT's token limit. The prompts encourage ChatGPT to output a sequence of predefined robot actions, represent the operating environment in a formalized style, and infer the updated state of the operating environment. Experiments confirmed that the proposed prompts enable ChatGPT to act according to requirements in various environments, and users can adjust ChatGPT's output with natural language feedback for safe and robust operation. The proposed prompts and source code are open-source and publicly available at https://github.com/microsoft/ChatGPT-Robot-Manipulation-Prompts
2403.10327
Rouzbeh Behnia
Varol Kayhan, Shivendu Shivendu, Rouzbeh Behnia, Clinton Daniel, Manish Agrawal
Unsupervised Threat Hunting using Continuous Bag-of-Terms-and-Time (CBoTT)
null
null
null
null
cs.CR cs.AI
http://creativecommons.org/licenses/by/4.0/
Threat hunting is the process of sifting through system logs to detect malicious activities that might have bypassed existing security measures. It can be performed in several ways, one of which is based on detecting anomalies. We propose an unsupervised framework, called continuous bag-of-terms-and-time (CBoTT), and publish its application programming interface (API) to help researchers and cybersecurity analysts perform anomaly-based threat hunting among SIEM logs geared toward process auditing on endpoint devices. Analyses show that our framework consistently outperforms benchmark approaches. When logs are sorted by likelihood of being an anomaly (from most likely to least), our approach identifies anomalies at higher percentiles (between 1.82-6.46) while benchmark approaches identify the same anomalies at lower percentiles (between 3.25-80.92). This framework can be used by other researchers to conduct benchmark analyses and by cybersecurity analysts to find anomalies in SIEM logs.
[ { "created": "Fri, 15 Mar 2024 14:16:10 GMT", "version": "v1" } ]
2024-03-18
[ [ "Kayhan", "Varol", "" ], [ "Shivendu", "Shivendu", "" ], [ "Behnia", "Rouzbeh", "" ], [ "Daniel", "Clinton", "" ], [ "Agrawal", "Manish", "" ] ]
Threat hunting is the process of sifting through system logs to detect malicious activities that might have bypassed existing security measures. It can be performed in several ways, one of which is based on detecting anomalies. We propose an unsupervised framework, called continuous bag-of-terms-and-time (CBoTT), and publish its application programming interface (API) to help researchers and cybersecurity analysts perform anomaly-based threat hunting among SIEM logs geared toward process auditing on endpoint devices. Analyses show that our framework consistently outperforms benchmark approaches. When logs are sorted by likelihood of being an anomaly (from most likely to least), our approach identifies anomalies at higher percentiles (between 1.82-6.46) while benchmark approaches identify the same anomalies at lower percentiles (between 3.25-80.92). This framework can be used by other researchers to conduct benchmark analyses and by cybersecurity analysts to find anomalies in SIEM logs.
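The abstract does not spell out CBoTT's internals, so the following is only a loose, clearly hypothetical sketch of the general shape such a "terms plus time" pipeline could take: hashed log terms concatenated with a time feature, with anomalies scored by distance from typical behavior. None of this should be read as the published API:

```python
import numpy as np
from sklearn.feature_extraction.text import HashingVectorizer

# Hypothetical featurization: hashed log terms plus a time-of-day column.
vec = HashingVectorizer(n_features=2**12, alternate_sign=False)

def featurize(log_lines, hours):
    terms = vec.transform(log_lines).toarray()
    time_col = (np.asarray(hours, dtype=float) / 24.0).reshape(-1, 1)
    return np.hstack([terms, time_col])

def anomaly_scores(X):
    centroid = X.mean(axis=0)                    # "typical" behavior
    return np.linalg.norm(X - centroid, axis=1)  # larger = more anomalous

X = featurize(["svchost.exe spawned cmd.exe", "chrome.exe opened tab"], [3, 14])
print(anomaly_scores(X))
```

Sorting logs by this score, highest first, mirrors the percentile-based evaluation described in the abstract.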
1803.07640
Matt Gardner
Matt Gardner, Joel Grus, Mark Neumann, Oyvind Tafjord, Pradeep Dasigi, Nelson Liu, Matthew Peters, Michael Schmitz, Luke Zettlemoyer
AllenNLP: A Deep Semantic Natural Language Processing Platform
Describes the initial version of AllenNLP. Many features and models have been added since the first release. This is the paper to cite if you use AllenNLP in your research. Updated 5/31/2018 with the version accepted to the NLP OSS workshop held at ACL 2018
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper describes AllenNLP, a platform for research on deep learning methods in natural language understanding. AllenNLP is designed to support researchers who want to build novel language understanding models quickly and easily. It is built on top of PyTorch, allowing for dynamic computation graphs, and provides (1) a flexible data API that handles intelligent batching and padding, (2) high-level abstractions for common operations in working with text, and (3) a modular and extensible experiment framework that makes doing good science easy. It also includes reference implementations of high quality approaches for both core semantic problems (e.g. semantic role labeling (Palmer et al., 2005)) and language understanding applications (e.g. machine comprehension (Rajpurkar et al., 2016)). AllenNLP is an ongoing open-source effort maintained by engineers and researchers at the Allen Institute for Artificial Intelligence.
[ { "created": "Tue, 20 Mar 2018 20:32:07 GMT", "version": "v1" }, { "created": "Thu, 31 May 2018 17:56:14 GMT", "version": "v2" } ]
2018-06-01
[ [ "Gardner", "Matt", "" ], [ "Grus", "Joel", "" ], [ "Neumann", "Mark", "" ], [ "Tafjord", "Oyvind", "" ], [ "Dasigi", "Pradeep", "" ], [ "Liu", "Nelson", "" ], [ "Peters", "Matthew", "" ], [ "Schmitz", "Michael", "" ], [ "Zettlemoyer", "Luke", "" ] ]
This paper describes AllenNLP, a platform for research on deep learning methods in natural language understanding. AllenNLP is designed to support researchers who want to build novel language understanding models quickly and easily. It is built on top of PyTorch, allowing for dynamic computation graphs, and provides (1) a flexible data API that handles intelligent batching and padding, (2) high-level abstractions for common operations in working with text, and (3) a modular and extensible experiment framework that makes doing good science easy. It also includes reference implementations of high quality approaches for both core semantic problems (e.g. semantic role labeling (Palmer et al., 2005)) and language understanding applications (e.g. machine comprehension (Rajpurkar et al., 2016)). AllenNLP is an ongoing open-source effort maintained by engineers and researchers at the Allen Institute for Artificial Intelligence.
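A hedged usage sketch against AllenNLP's high-level Predictor API; the archive URL below is a placeholder, and exact predictor names and call signatures varied across releases:

```python
from allennlp.predictors.predictor import Predictor

# Placeholder URL: substitute a real pretrained-model archive.
predictor = Predictor.from_path("https://example.org/srl-model.tar.gz")
result = predictor.predict(sentence="The keys were left on the table.")
print(result)
```

The same two-call pattern (load an archive, then query it with plain text) covers most of the reference models shipped with the platform.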
2305.04867
Mithun Bairagi Ph.D.
Mithun Bairagi
A New Algorithm to determine Adomian Polynomials for nonlinear polynomial functions
null
Asian Research Journal of Mathematics, Volume 19, Issue 7, Page 1-12, 2023
10.9734/arjom/2023/v19i7670
null
cs.CE
http://creativecommons.org/licenses/by/4.0/
We present a new algorithm by which the Adomian polynomials can be determined for a scalar-valued nonlinear polynomial functional in a Hilbert space. This algorithm calculates the Adomian polynomials without complicated operations such as parametrization, expansion, regrouping, differentiation, etc. The algorithm involves only some matrix operations. Because of the simplicity of the mathematical operations, the new algorithm is faster and more efficient than the other algorithms previously reported in the literature. We also implement the algorithm in MATHEMATICA. The computing speed and efficiency of the new algorithm are compared with some other algorithms in the one-dimensional case.
[ { "created": "Tue, 11 Apr 2023 15:05:20 GMT", "version": "v1" } ]
2023-05-09
[ [ "Bairagi", "Mithun", "" ] ]
We present a new algorithm by which the Adomian polynomials can be determined for a scalar-valued nonlinear polynomial functional in a Hilbert space. This algorithm calculates the Adomian polynomials without complicated operations such as parametrization, expansion, regrouping, differentiation, etc. The algorithm involves only some matrix operations. Because of the simplicity of the mathematical operations, the new algorithm is faster and more efficient than the other algorithms previously reported in the literature. We also implement the algorithm in MATHEMATICA. The computing speed and efficiency of the new algorithm are compared with some other algorithms in the one-dimensional case.
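For context, the quantities being computed are the classical Adomian polynomials $A_n$ of a nonlinearity $F(u)$ for the decomposition $u = \sum_i u_i$; the standard definition (which the paper's matrix algorithm evaluates without symbolic differentiation) is

$$ A_n = \frac{1}{n!} \left[ \frac{d^{n}}{d\lambda^{n}} F\!\left( \sum_{i=0}^{\infty} \lambda^{i} u_i \right) \right]_{\lambda = 0}, \qquad n = 0, 1, 2, \ldots $$

so that, for example, $A_0 = F(u_0)$, $A_1 = u_1 F'(u_0)$, and $A_2 = u_2 F'(u_0) + \tfrac{1}{2} u_1^{2} F''(u_0)$.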
2310.13625
Lennart Heim
Janet Egan and Lennart Heim
Oversight for Frontier AI through a Know-Your-Customer Scheme for Compute Providers
null
null
null
null
cs.CY
http://creativecommons.org/licenses/by/4.0/
To address security and safety risks stemming from highly capable artificial intelligence (AI) models, we propose that the US government should ensure compute providers implement Know-Your-Customer (KYC) schemes. Compute - the computational power and infrastructure required to train and run these AI models - is emerging as a node for oversight. KYC, a standard developed by the banking sector to identify and verify client identity, could provide a mechanism for greater public oversight of frontier AI development and close loopholes in existing export controls. Such a scheme has the potential to identify and warn stakeholders of potentially problematic and/or sudden advancements in AI capabilities, build government capacity for AI regulation, and allow for the development and implementation of more nuanced and targeted export controls. Unlike the strategy of limiting access to AI chip purchases, regulating the digital access to compute offers more precise controls, allowing regulatory control over compute quantities, as well as the flexibility to suspend access at any time. To enact a KYC scheme, the US government will need to work closely with industry to (1) establish a dynamic threshold of compute that effectively captures high-risk frontier model development, while minimizing imposition on developers not engaged in frontier AI; (2) set requirements and guidance for compute providers to keep records and report high-risk entities; (3) establish government capacity that allows for co-design, implementation, administration and enforcement of the scheme; and (4) engage internationally to promote international alignment with the scheme and support its long-term efficacy. While the scheme will not address all AI risks, it complements proposed solutions by allowing for a more precise and flexible approach to controlling the development of frontier AI models and unwanted AI proliferation.
[ { "created": "Fri, 20 Oct 2023 16:17:29 GMT", "version": "v1" } ]
2023-10-23
[ [ "Egan", "Janet", "" ], [ "Heim", "Lennart", "" ] ]
To address security and safety risks stemming from highly capable artificial intelligence (AI) models, we propose that the US government should ensure compute providers implement Know-Your-Customer (KYC) schemes. Compute - the computational power and infrastructure required to train and run these AI models - is emerging as a node for oversight. KYC, a standard developed by the banking sector to identify and verify client identity, could provide a mechanism for greater public oversight of frontier AI development and close loopholes in existing export controls. Such a scheme has the potential to identify and warn stakeholders of potentially problematic and/or sudden advancements in AI capabilities, build government capacity for AI regulation, and allow for the development and implementation of more nuanced and targeted export controls. Unlike the strategy of limiting access to AI chip purchases, regulating the digital access to compute offers more precise controls, allowing regulatory control over compute quantities, as well as the flexibility to suspend access at any time. To enact a KYC scheme, the US government will need to work closely with industry to (1) establish a dynamic threshold of compute that effectively captures high-risk frontier model development, while minimizing imposition on developers not engaged in frontier AI; (2) set requirements and guidance for compute providers to keep records and report high-risk entities; (3) establish government capacity that allows for co-design, implementation, administration and enforcement of the scheme; and (4) engage internationally to promote international alignment with the scheme and support its long-term efficacy. While the scheme will not address all AI risks, it complements proposed solutions by allowing for a more precise and flexible approach to controlling the development of frontier AI models and unwanted AI proliferation.
1904.06217
Ines Rehbein
Federico Nanni, Goran Glavas, Ines Rehbein, Simone Paolo Ponzetto, Heiner Stuckenschmidt
Political Text Scaling Meets Computational Semantics
Updated version - accepted for Transactions on Data Science (TDS)
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
During the last fifteen years, automatic text scaling has become one of the key tools of the Text as Data community in political science. Prominent text scaling algorithms, however, rely on the assumption that latent positions can be captured just by leveraging the information about word frequencies in documents under study. We challenge this traditional view and present a new, semantically aware text scaling algorithm, SemScale, which combines recent developments in the area of computational linguistics with unsupervised graph-based clustering. We conduct an extensive quantitative analysis over a collection of speeches from the European Parliament in five different languages and from two different legislative terms, and show that a scaling approach relying on semantic document representations is often better at capturing known underlying political dimensions than the established frequency-based (i.e., symbolic) scaling method. We further validate our findings through a series of experiments focused on text preprocessing and feature selection, document representation, scaling of party manifestos, and a supervised extension of our algorithm. To catalyze further research on this new branch of text scaling methods, we release a Python implementation of SemScale with all included data sets and evaluation procedures.
[ { "created": "Fri, 12 Apr 2019 13:05:06 GMT", "version": "v1" }, { "created": "Wed, 8 May 2019 12:23:22 GMT", "version": "v2" }, { "created": "Thu, 14 Oct 2021 12:20:04 GMT", "version": "v3" } ]
2021-10-15
[ [ "Nanni", "Federico", "" ], [ "Glavas", "Goran", "" ], [ "Rehbein", "Ines", "" ], [ "Ponzetto", "Simone Paolo", "" ], [ "Stuckenschmidt", "Heiner", "" ] ]
During the last fifteen years, automatic text scaling has become one of the key tools of the Text as Data community in political science. Prominent text scaling algorithms, however, rely on the assumption that latent positions can be captured just by leveraging the information about word frequencies in documents under study. We challenge this traditional view and present a new, semantically aware text scaling algorithm, SemScale, which combines recent developments in the area of computational linguistics with unsupervised graph-based clustering. We conduct an extensive quantitative analysis over a collection of speeches from the European Parliament in five different languages and from two different legislative terms, and show that a scaling approach relying on semantic document representations is often better at capturing known underlying political dimensions than the established frequency-based (i.e., symbolic) scaling method. We further validate our findings through a series of experiments focused on text preprocessing and feature selection, document representation, scaling of party manifestos, and a supervised extension of our algorithm. To catalyze further research on this new branch of text scaling methods, we release a Python implementation of SemScale with all included data sets and evaluation procedures.
1906.07077
Felix Assion
Felix Assion, Peter Schlicht, Florens Gre{\ss}ner, Wiebke G\"unther, Fabian H\"uger, Nico Schmidt, Umair Rasheed
The Attack Generator: A Systematic Approach Towards Constructing Adversarial Attacks
CVPR SAIAD - Workshop 2019
null
null
null
cs.LG cs.CR stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Most state-of-the-art machine learning (ML) classification systems are vulnerable to adversarial perturbations. As a consequence, adversarial robustness poses a significant challenge for the deployment of ML-based systems in safety- and security-critical environments like autonomous driving, disease detection or unmanned aerial vehicles. In recent years we have seen an impressive number of publications presenting more and more new adversarial attacks. However, attack research seems to be rather unstructured, and new attacks often appear to be random selections from the unlimited set of possible adversarial attacks. With this publication, we present a structured analysis of the adversarial attack creation process. By detecting different building blocks of adversarial attacks, we outline the road to new sets of adversarial attacks. We call this the "attack generator". In pursuit of this objective, we summarize and extend existing adversarial perturbation taxonomies. The resulting taxonomy is then linked to the application context of computer vision systems for autonomous vehicles, i.e. semantic segmentation and object detection. Finally, in order to prove the usefulness of the attack generator, we investigate existing semantic segmentation attacks with respect to the detected defining components of adversarial attacks.
[ { "created": "Mon, 17 Jun 2019 15:06:47 GMT", "version": "v1" } ]
2019-06-18
[ [ "Assion", "Felix", "" ], [ "Schlicht", "Peter", "" ], [ "Greßner", "Florens", "" ], [ "Günther", "Wiebke", "" ], [ "Hüger", "Fabian", "" ], [ "Schmidt", "Nico", "" ], [ "Rasheed", "Umair", "" ] ]
Most state-of-the-art machine learning (ML) classification systems are vulnerable to adversarial perturbations. As a consequence, adversarial robustness poses a significant challenge for the deployment of ML-based systems in safety- and security-critical environments like autonomous driving, disease detection or unmanned aerial vehicles. In recent years we have seen an impressive number of publications presenting more and more new adversarial attacks. However, attack research seems to be rather unstructured, and new attacks often appear to be random selections from the unlimited set of possible adversarial attacks. With this publication, we present a structured analysis of the adversarial attack creation process. By detecting different building blocks of adversarial attacks, we outline the road to new sets of adversarial attacks. We call this the "attack generator". In pursuit of this objective, we summarize and extend existing adversarial perturbation taxonomies. The resulting taxonomy is then linked to the application context of computer vision systems for autonomous vehicles, i.e. semantic segmentation and object detection. Finally, in order to prove the usefulness of the attack generator, we investigate existing semantic segmentation attacks with respect to the detected defining components of adversarial attacks.
1610.04161
Shiyu Liang
Shiyu Liang and R. Srikant
Why Deep Neural Networks for Function Approximation?
The paper is published at the 5th International Conference on Learning Representations (ICLR)
null
null
null
cs.LG cs.NE
http://creativecommons.org/publicdomain/zero/1.0/
Recently there has been much interest in understanding why deep neural networks are preferred to shallow networks. We show that, for a large class of piecewise smooth functions, the number of neurons needed by a shallow network to approximate a function is exponentially larger than the corresponding number of neurons needed by a deep network for a given degree of function approximation. First, we consider univariate functions on a bounded interval and require a neural network to achieve an approximation error of $\varepsilon$ uniformly over the interval. We show that shallow networks (i.e., networks whose depth does not depend on $\varepsilon$) require $\Omega(\text{poly}(1/\varepsilon))$ neurons while deep networks (i.e., networks whose depth grows with $1/\varepsilon$) require $\mathcal{O}(\text{polylog}(1/\varepsilon))$ neurons. We then extend these results to certain classes of important multivariate functions. Our results are derived for neural networks which use a combination of rectifier linear units (ReLUs) and binary step units, two of the most popular type of activation functions. Our analysis builds on a simple observation: the multiplication of two bits can be represented by a ReLU.
[ { "created": "Thu, 13 Oct 2016 16:34:30 GMT", "version": "v1" }, { "created": "Fri, 3 Mar 2017 20:43:04 GMT", "version": "v2" } ]
2017-03-07
[ [ "Liang", "Shiyu", "" ], [ "Srikant", "R.", "" ] ]
Recently there has been much interest in understanding why deep neural networks are preferred to shallow networks. We show that, for a large class of piecewise smooth functions, the number of neurons needed by a shallow network to approximate a function is exponentially larger than the corresponding number of neurons needed by a deep network for a given degree of function approximation. First, we consider univariate functions on a bounded interval and require a neural network to achieve an approximation error of $\varepsilon$ uniformly over the interval. We show that shallow networks (i.e., networks whose depth does not depend on $\varepsilon$) require $\Omega(\text{poly}(1/\varepsilon))$ neurons while deep networks (i.e., networks whose depth grows with $1/\varepsilon$) require $\mathcal{O}(\text{polylog}(1/\varepsilon))$ neurons. We then extend these results to certain classes of important multivariate functions. Our results are derived for neural networks which use a combination of rectifier linear units (ReLUs) and binary step units, two of the most popular type of activation functions. Our analysis builds on a simple observation: the multiplication of two bits can be represented by a ReLU.
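The abstract's closing observation is concrete enough to check directly: for bits x, y in {0, 1}, the product x*y equals ReLU(x + y - 1). A two-line sanity check (my code, not the paper's):

```python
# The abstract's key building block: for bits x, y in {0, 1},
# x * y = ReLU(x + y - 1), so one rectifier unit computes a bit product.
def relu(t):
    return max(0.0, t)

for x in (0, 1):
    for y in (0, 1):
        assert relu(x + y - 1) == x * y
print("ReLU(x + y - 1) reproduces bit multiplication on all four inputs.")
```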
1004.0383
Ali Tajer
Ali Tajer and Xiaodong Wang
Multiuser Diversity Gain in Cognitive Networks
32 pages, 3 figures, to appear in the IEEE/ACM Transactions on Networking
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Dynamic allocation of resources to the \emph{best} link in large multiuser networks offers considerable improvement in spectral efficiency. This gain, often referred to as \emph{multiuser diversity gain}, can be cast as double-logarithmic growth of the network throughput with the number of users. In this paper we consider large cognitive networks granted concurrent spectrum access with license-holding users. The primary network affords to share its under-utilized spectrum bands with the secondary users. We assess the optimal multiuser diversity gain in the cognitive networks by quantifying how the sum-rate throughput of the network scales with the number of secondary users. For this purpose we look at the optimal pairing of spectrum bands and secondary users, which is supervised by a central entity fully aware of the instantaneous channel conditions, and show that the throughput of the cognitive network scales double-logarithmically with the number of secondary users ($N$) and linearly with the number of available spectrum bands ($M$), i.e., $M\log\log N$. We then propose a \emph{distributed} spectrum allocation scheme, which does not necessitate a central controller or any information exchange between different secondary users and still obeys the optimal throughput scaling law. This scheme requires that \emph{some} secondary transmitter-receiver pairs exchange $\log M$ information bits among themselves. We also show that the aggregate amount of information exchange between secondary transmitter-receiver pairs is {\em asymptotically} equal to $M\log M$. Finally, we show that our distributed scheme guarantees fairness among the secondary users, meaning that they are equally likely to get access to an available spectrum band.
[ { "created": "Fri, 2 Apr 2010 19:58:11 GMT", "version": "v1" }, { "created": "Fri, 16 Apr 2010 14:25:46 GMT", "version": "v2" } ]
2010-04-19
[ [ "Tajer", "Ali", "" ], [ "Wang", "Xiaodong", "" ] ]
Dynamic allocation of resources to the \emph{best} link in large multiuser networks offers considerable improvement in spectral efficiency. This gain, often referred to as \emph{multiuser diversity gain}, can be cast as double-logarithmic growth of the network throughput with the number of users. In this paper we consider large cognitive networks granted concurrent spectrum access with license-holding users. The primary network affords to share its under-utilized spectrum bands with the secondary users. We assess the optimal multiuser diversity gain in the cognitive networks by quantifying how the sum-rate throughput of the network scales with the number of secondary users. For this purpose we look at the optimal pairing of spectrum bands and secondary users, which is supervised by a central entity fully aware of the instantaneous channel conditions, and show that the throughput of the cognitive network scales double-logarithmically with the number of secondary users ($N$) and linearly with the number of available spectrum bands ($M$), i.e., $M\log\log N$. We then propose a \emph{distributed} spectrum allocation scheme, which does not necessitate a central controller or any information exchange between different secondary users and still obeys the optimal throughput scaling law. This scheme requires that \emph{some} secondary transmitter-receiver pairs exchange $\log M$ information bits among themselves. We also show that the aggregate amount of information exchange between secondary transmitter-receiver pairs is {\em asymptotically} equal to $M\log M$. Finally, we show that our distributed scheme guarantees fairness among the secondary users, meaning that they are equally likely to get access to an available spectrum band.
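A small Monte Carlo sketch (my construction for intuition, not the paper's analysis) makes the claimed scaling visible: with Rayleigh fading, assigning each of the M bands to its best of N secondary users yields a mean sum rate that tracks M log log N as N grows. The fading model and trial counts are illustrative assumptions.

```python
# Monte Carlo sketch: best-user-per-band scheduling under Rayleigh fading
# produces a sum rate growing roughly like M * log(log(N)).
import numpy as np

rng = np.random.default_rng(1)

def sum_rate(num_users, num_bands, trials=200):
    rates = []
    for _ in range(trials):
        snr = rng.exponential(1.0, size=(num_bands, num_users))  # Rayleigh power gains
        best = snr.max(axis=1)                    # best user per band
        rates.append(np.log2(1.0 + best).sum())   # sum over the M bands
    return np.mean(rates)

M = 4
for N in (10, 100, 1000, 10000):
    print(f"N={N:>6}: mean sum rate = {sum_rate(N, M):.2f}, "
          f"M*log2(log(N)) = {M * np.log2(np.log(N)):.2f}")
```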
1605.09486
Chenglei Wu
Chenglei Wu, Zhi Wang, Shiqiang Yang
Drone Streaming with Wi-Fi Grid Aggregation for Virtual Tour
null
null
null
null
cs.MM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
To provide a live, active and high-quality virtual touring streaming experience, we propose an unmanned drone stereoscopic streaming paradigm using a control and streaming infrastructure of a 2.4GHz Wi-Fi grid. Our system allows users to actively control the streaming captured by a drone and to receive and watch the stream using a head-mounted display (HMD); a Wi-Fi grid is deployed across the remote scene with multi-channel support to enable high-bitrate streaming broadcast from the drones. The system adopts a joint view adaptation and drone control scheme to enable fast viewer movement, including both head rotation and touring. We implement the prototype on a DJI M100 quadcopter and an HTC Vive in a demo scene.
[ { "created": "Tue, 31 May 2016 03:59:03 GMT", "version": "v1" } ]
2016-06-01
[ [ "Wu", "Chenglei", "" ], [ "Wang", "Zhi", "" ], [ "Yang", "Shiqiang", "" ] ]
To provide a live, active and high-quality virtual touring streaming experience, we propose an unmanned drone stereoscopic streaming paradigm using a control and streaming infrastructure of a 2.4GHz Wi-Fi grid. Our system allows users to actively control the streaming captured by a drone and to receive and watch the stream using a head-mounted display (HMD); a Wi-Fi grid is deployed across the remote scene with multi-channel support to enable high-bitrate streaming broadcast from the drones. The system adopts a joint view adaptation and drone control scheme to enable fast viewer movement, including both head rotation and touring. We implement the prototype on a DJI M100 quadcopter and an HTC Vive in a demo scene.
1811.07493
Gurjeet Singh
Gurjeet Singh, Sun Miao, Shi Shi and Patrick Chiang
FotonNet: A HW-Efficient Object Detection System Using 3D-Depth Segmentation and 2D-DNN Classifier
7 pages, 10 figures, 2 tables
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
Object detection and classification is one of the most important computer vision problems. Ever since the introduction of deep learning \cite{krizhevsky2012imagenet}, we have witnessed a dramatic increase in the accuracy of object detection. However, most of these improvements have occurred using conventional 2D image processing. Recently, low-cost 3D-image sensors, such as the Microsoft Kinect (Time-of-Flight) or the Apple FaceID (Structured-Light), can provide 3D-depth or point cloud data that can be added to a convolutional neural network, acting as an extra set of dimensions. In our proposed approach, we introduce a new 2D + 3D system that uses the 3D-depth data to determine the object region, followed by any conventional 2D-DNN, such as AlexNet. In this method, our approach can easily decouple the information collection from the point cloud and 2D-image data and combine both operations later. Hence, our system can use any existing 2D network trained on a large image dataset, and does not require a large 3D-depth dataset for new training. Experimental object detection results across 30 images show an accuracy of 0.67, versus 0.54 and 0.51 for RCNN and YOLO, respectively.
[ { "created": "Mon, 19 Nov 2018 04:31:29 GMT", "version": "v1" } ]
2018-11-20
[ [ "Singh", "Gurjeet", "" ], [ "Miao", "Sun", "" ], [ "Shi", "Shi", "" ], [ "Chiang", "Patrick", "" ] ]
Object detection and classification is one of the most important computer vision problems. Ever since the introduction of deep learning \cite{krizhevsky2012imagenet}, we have witnessed a dramatic increase in the accuracy of object detection. However, most of these improvements have occurred using conventional 2D image processing. Recently, low-cost 3D-image sensors, such as the Microsoft Kinect (Time-of-Flight) or the Apple FaceID (Structured-Light), can provide 3D-depth or point cloud data that can be added to a convolutional neural network, acting as an extra set of dimensions. In our proposed approach, we introduce a new 2D + 3D system that uses the 3D-depth data to determine the object region, followed by any conventional 2D-DNN, such as AlexNet. In this method, our approach can easily decouple the information collection from the point cloud and 2D-image data and combine both operations later. Hence, our system can use any existing 2D network trained on a large image dataset, and does not require a large 3D-depth dataset for new training. Experimental object detection results across 30 images show an accuracy of 0.67, versus 0.54 and 0.51 for RCNN and YOLO, respectively.
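A minimal sketch of the 2D + 3D decoupling described above (the depth thresholds, toy frame, and stand-in classifier are my assumptions, not FotonNet's actual pipeline): segment the object region from the depth map, then hand the cropped RGB patch to any 2D classifier.

```python
# Sketch: depth-based region proposal followed by an arbitrary 2D classifier.
import numpy as np

def depth_to_bbox(depth, near=0.3, far=1.5):
    """Bounding box of pixels whose depth falls in the object range (meters)."""
    ys, xs = np.nonzero((depth > near) & (depth < far))
    return ys.min(), ys.max(), xs.min(), xs.max()

def classify_with_depth(rgb, depth, classifier_2d):
    y0, y1, x0, x1 = depth_to_bbox(depth)
    return classifier_2d(rgb[y0:y1 + 1, x0:x1 + 1])

# Toy data: a 64x64 frame with an "object" at depth 1.0 m in a 3 m background.
depth = np.full((64, 64), 3.0)
depth[20:40, 24:44] = 1.0
rgb = np.zeros((64, 64, 3))
print(classify_with_depth(rgb, depth, lambda patch: f"patch {patch.shape}"))
```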
2405.15767
Atsushi Nitanda
Atsushi Nitanda
Improved Particle Approximation Error for Mean Field Neural Networks
16 pages
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Mean-field Langevin dynamics (MFLD) minimizes an entropy-regularized nonlinear convex functional defined over the space of probability distributions. MFLD has gained attention due to its connection with noisy gradient descent for mean-field two-layer neural networks. Unlike standard Langevin dynamics, the nonlinearity of the objective functional induces particle interactions, necessitating multiple particles to approximate the dynamics in a finite-particle setting. Recent works (Chen et al., 2022; Suzuki et al., 2023b) have demonstrated the uniform-in-time propagation of chaos for MFLD, showing that the gap between the particle system and its mean-field limit uniformly shrinks over time as the number of particles increases. In this work, we improve the dependence on logarithmic Sobolev inequality (LSI) constants in their particle approximation errors, which can exponentially deteriorate with the regularization coefficient. Specifically, we establish an LSI-constant-free particle approximation error concerning the objective gap by leveraging the problem structure in risk minimization. As the application, we demonstrate improved convergence of MFLD, sampling guarantee for the mean-field stationary distribution, and uniform-in-time Wasserstein propagation of chaos in terms of particle complexity.
[ { "created": "Fri, 24 May 2024 17:59:06 GMT", "version": "v1" }, { "created": "Fri, 14 Jun 2024 13:20:06 GMT", "version": "v2" } ]
2024-06-17
[ [ "Nitanda", "Atsushi", "" ] ]
Mean-field Langevin dynamics (MFLD) minimizes an entropy-regularized nonlinear convex functional defined over the space of probability distributions. MFLD has gained attention due to its connection with noisy gradient descent for mean-field two-layer neural networks. Unlike standard Langevin dynamics, the nonlinearity of the objective functional induces particle interactions, necessitating multiple particles to approximate the dynamics in a finite-particle setting. Recent works (Chen et al., 2022; Suzuki et al., 2023b) have demonstrated the uniform-in-time propagation of chaos for MFLD, showing that the gap between the particle system and its mean-field limit uniformly shrinks over time as the number of particles increases. In this work, we improve the dependence on logarithmic Sobolev inequality (LSI) constants in their particle approximation errors, which can exponentially deteriorate with the regularization coefficient. Specifically, we establish an LSI-constant-free particle approximation error concerning the objective gap by leveraging the problem structure in risk minimization. As the application, we demonstrate improved convergence of MFLD, sampling guarantee for the mean-field stationary distribution, and uniform-in-time Wasserstein propagation of chaos in terms of particle complexity.
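For intuition, the finite-particle system the paper analyzes can be simulated directly: noisy gradient descent on a mean-field two-layer network, where each neuron is one particle. The sketch below is my own discretization with illustrative hyperparameters, not the paper's experimental setup.

```python
# Finite-particle MFLD sketch: each neuron's parameters follow a Langevin step
# (gradient of the entropy-regularized risk plus Gaussian noise of scale
# sqrt(2 * lr * lam), where lam plays the role of the temperature).
import numpy as np

rng = np.random.default_rng(0)
n, steps, lr, lam = 200, 2000, 0.05, 1e-3   # particles, iterations, step, entropy reg
x = np.linspace(-2, 2, 64)
y = np.sin(2 * x)                           # target function

a = rng.normal(0, 1, n)                     # per-particle (per-neuron) parameters
w = rng.normal(0, 1, n)
b = rng.normal(0, 1, n)

for _ in range(steps):
    h = np.tanh(np.outer(x, w) + b)         # (64, n) hidden activations
    resid = h @ a / n - y                   # mean-field output minus target
    ga = h.T @ resid / len(x) + lam * a     # drift: gradient of regularized risk
    gw = (((1 - h**2) * resid[:, None]).T @ x) * a / len(x) + lam * w
    gb = ((1 - h**2).T @ resid) * a / len(x) + lam * b
    for p, g in ((a, ga), (w, gw), (b, gb)):  # Langevin step: gradient + noise
        p -= lr * g
        p += np.sqrt(2 * lr * lam) * rng.normal(0, 1, n)

print("final mse:", np.mean((np.tanh(np.outer(x, w) + b) @ a / n - y) ** 2))
```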
2102.07127
Samrat Kumar Dey
Md. Mahbubur Rahman, Akash Poddar, Md. Golam Rabiul Alam, and Samrat Kumar Dey
Affective State Recognition through EEG Signals Feature Level Fusion and Ensemble Classifier
18 pages, 7 figures
null
null
null
cs.HC
http://creativecommons.org/licenses/by-sa/4.0/
Human affects are complex and constitute an active research domain in affective computing. Affects are traditionally determined through a self-report-based psychometric questionnaire or through facial expression recognition. However, a few state-of-the-art studies have shown the possibility of recognizing human affects from psychophysiological and neurological signals. In this article, electroencephalogram (EEG) signals are used to recognize human affects. EEG recordings are collected from 100 participants while they watch one-minute video stimuli intended to induce different affective states. The emotionally tagged videos cover a range of affects, including happy, sad, disgust, and peaceful. The experimental stimuli are collected and analyzed intensively. The interrelationship between the EEG signal frequencies and the ratings given by the participants is taken into consideration for classifying affective states. Advanced feature extraction techniques are applied along with statistical features to prepare a fused feature vector for affective state recognition. Factor analysis methods are also applied to select discriminative features. Finally, several popular supervised machine learning classifiers are applied to recognize different affective states from the discriminative feature vector. Based on the experiments, the designed random forest classifier produces 89.06% accuracy in classifying four basic affective states.
[ { "created": "Sun, 14 Feb 2021 10:56:08 GMT", "version": "v1" } ]
2021-02-16
[ [ "Rahman", "Md. Mahbubur", "" ], [ "Poddar", "Akash", "" ], [ "Alam", "Md. Golam Rabiul", "" ], [ "Dey", "Samrat Kumar", "" ] ]
Human affects are complex and constitute an active research domain in affective computing. Affects are traditionally determined through a self-report-based psychometric questionnaire or through facial expression recognition. However, a few state-of-the-art studies have shown the possibility of recognizing human affects from psychophysiological and neurological signals. In this article, electroencephalogram (EEG) signals are used to recognize human affects. EEG recordings are collected from 100 participants while they watch one-minute video stimuli intended to induce different affective states. The emotionally tagged videos cover a range of affects, including happy, sad, disgust, and peaceful. The experimental stimuli are collected and analyzed intensively. The interrelationship between the EEG signal frequencies and the ratings given by the participants is taken into consideration for classifying affective states. Advanced feature extraction techniques are applied along with statistical features to prepare a fused feature vector for affective state recognition. Factor analysis methods are also applied to select discriminative features. Finally, several popular supervised machine learning classifiers are applied to recognize different affective states from the discriminative feature vector. Based on the experiments, the designed random forest classifier produces 89.06% accuracy in classifying four basic affective states.
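A hedged sketch of the feature-level fusion pipeline (band definitions, synthetic signals, and hyperparameters are my illustrative choices, not the paper's protocol): compute per-channel band powers, fuse them with simple statistics, and train a random forest. On the random data below the cross-validated accuracy sits near the 25% chance level, as expected.

```python
# Feature-level fusion sketch: spectral band powers + channel statistics,
# concatenated into one vector, classified by a random forest.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

FS = 128                       # sampling rate (Hz), an illustrative choice
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def fused_features(trial):     # trial: (channels, samples)
    feats = []
    freqs = np.fft.rfftfreq(trial.shape[1], d=1.0 / FS)
    power = np.abs(np.fft.rfft(trial, axis=1)) ** 2
    for lo, hi in BANDS.values():
        feats.append(power[:, (freqs >= lo) & (freqs < hi)].mean(axis=1))
    feats.append(trial.mean(axis=1))        # simple statistical features
    feats.append(trial.std(axis=1))
    return np.concatenate(feats)            # feature-level fusion

rng = np.random.default_rng(0)
X = np.array([fused_features(rng.normal(0, 1, (8, 4 * FS))) for _ in range(120)])
y = rng.integers(0, 4, size=120)            # four affective states (random labels)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("cv accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```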
2208.12561
Komal Pathade
Komal Pathade and Uday Khedker
Computing Maximum Fixed Point Solutions over Feasible Paths in Data Flow Analyses
68 pages, 22 figures
null
null
null
cs.SE
http://creativecommons.org/licenses/by/4.0/
The control flow graph (CFG) representation of a procedure, used by virtually all flow-sensitive program analyses, admits a large number of infeasible control flow paths, i.e., paths that do not occur in any execution of the program. Hence, the information reaching along infeasible paths in an analysis is spurious. This affects the precision of the conventional maximum fixed point (MFP) solution of a data flow analysis, because it includes the information reaching along all control flow paths. The existing approaches for removing this imprecision are either specific to a data flow problem with no straightforward generalization or involve control flow graph restructuring, which may exponentially blow up the size of the CFG. We lift the notion of the MFP solution to define the notion of feasible-path MFP (FPMFP) solutions that exclude the data flowing along known infeasible paths. The notion of FPMFP is generic and does not involve CFG restructuring. Instead, it takes externally supplied information about infeasible paths and lifts any data flow analysis to an analysis that maintains the distinctions between different paths where these distinctions are beneficial, and ignores them where they are not. Thus it gets the benefit of a path-sensitive analysis where it is useful without performing a conventional path-sensitive analysis. We evaluated the proposed feasible-path MFP solutions for reaching definitions analysis and potentially uninitialized variable analysis on 30 benchmarks. The evaluation results indicate that the precision improvements in these two analyses respectively reduce the number of def-use pairs by up to 13.6% (average 2.87%, geometric mean 1.75%) and reduce the potentially uninitialized variable alarms by up to 100% (average 18.5%, geometric mean 3%). We found that the FPMFP computation time was 2.9x the MFP computation time on average.
[ { "created": "Fri, 26 Aug 2022 10:12:27 GMT", "version": "v1" } ]
2022-08-29
[ [ "Pathade", "Komal", "" ], [ "Khedker", "Uday", "" ] ]
The control flow graph (CFG) representation of a procedure, used by virtually all flow-sensitive program analyses, admits a large number of infeasible control flow paths, i.e., paths that do not occur in any execution of the program. Hence, the information reaching along infeasible paths in an analysis is spurious. This affects the precision of the conventional maximum fixed point (MFP) solution of a data flow analysis, because it includes the information reaching along all control flow paths. The existing approaches for removing this imprecision are either specific to a data flow problem with no straightforward generalization or involve control flow graph restructuring, which may exponentially blow up the size of the CFG. We lift the notion of the MFP solution to define the notion of feasible-path MFP (FPMFP) solutions that exclude the data flowing along known infeasible paths. The notion of FPMFP is generic and does not involve CFG restructuring. Instead, it takes externally supplied information about infeasible paths and lifts any data flow analysis to an analysis that maintains the distinctions between different paths where these distinctions are beneficial, and ignores them where they are not. Thus it gets the benefit of a path-sensitive analysis where it is useful without performing a conventional path-sensitive analysis. We evaluated the proposed feasible-path MFP solutions for reaching definitions analysis and potentially uninitialized variable analysis on 30 benchmarks. The evaluation results indicate that the precision improvements in these two analyses respectively reduce the number of def-use pairs by up to 13.6% (average 2.87%, geometric mean 1.75%) and reduce the potentially uninitialized variable alarms by up to 100% (average 18.5%, geometric mean 3%). We found that the FPMFP computation time was 2.9x the MFP computation time on average.
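For reference, the baseline that FPMFP refines is the classic worklist computation of the MFP solution, which joins data flow facts from every CFG path regardless of feasibility. A toy reaching-definitions instance (my own example, not from the paper) shows the join:

```python
# Classic worklist MFP for reaching definitions on a hard-coded 4-node CFG.
# The MFP join merges facts from all CFG paths, which is exactly the source
# of imprecision that FPMFP addresses when some of those paths are infeasible.
def mfp_reaching_definitions(succ, gen, kill, entry):
    nodes = list(succ)
    in_, out = {n: set() for n in nodes}, {n: set() for n in nodes}
    worklist = [entry]
    while worklist:
        n = worklist.pop()
        preds = [p for p in nodes if n in succ[p]]
        in_[n] = set().union(*(out[p] for p in preds)) if preds else set()
        new_out = gen[n] | (in_[n] - kill[n])
        if new_out != out[n]:                  # re-process successors on change
            out[n] = new_out
            worklist.extend(succ[n])
    return in_, out

succ = {"B1": ["B2", "B3"], "B2": ["B4"], "B3": ["B4"], "B4": []}
gen = {"B1": {"d1:x"}, "B2": {"d2:x"}, "B3": set(), "B4": set()}
kill = {"B1": {"d2:x"}, "B2": {"d1:x"}, "B3": set(), "B4": set()}
in_, out = mfp_reaching_definitions(succ, gen, kill, "B1")
print(in_["B4"])   # both d1:x and d2:x reach B4 under the all-paths join
```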
1710.09490
Chuhang Zou
Chuhang Zou, Ruiqi Guo, Zhizhong Li, Derek Hoiem
Complete 3D Scene Parsing from an RGBD Image
Accepted to International Journal of Computer Vision (IJCV), 2018 arXiv admin note: text overlap with arXiv:1504.02437
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
One major goal of vision is to infer physical models of objects, surfaces, and their layout from sensors. In this paper, we aim to interpret indoor scenes from one RGBD image. Our representation encodes the layout of orthogonal walls and the extent of objects, modeled with CAD-like 3D shapes. We parse both the visible and occluded portions of the scene and all observable objects, producing a complete 3D parse. Such a scene interpretation is useful for robotics and visual reasoning, but difficult to produce due to the well-known challenge of segmentation, the high degree of occlusion, and the diversity of objects in indoor scenes. We take a data-driven approach, generating sets of potential object regions, matching to regions in training images, and transferring and aligning associated 3D models while encouraging fit to observations and spatial consistency. We use support inference to aid interpretation and propose a retrieval scheme that uses convolutional neural networks (CNNs) to classify regions and retrieve objects with similar shapes. We demonstrate the performance of our method on our newly annotated NYUd v2 dataset with detailed 3D shapes.
[ { "created": "Wed, 25 Oct 2017 23:04:14 GMT", "version": "v1" }, { "created": "Tue, 13 Nov 2018 18:05:14 GMT", "version": "v2" } ]
2018-11-15
[ [ "Zou", "Chuhang", "" ], [ "Guo", "Ruiqi", "" ], [ "Li", "Zhizhong", "" ], [ "Hoiem", "Derek", "" ] ]
One major goal of vision is to infer physical models of objects, surfaces, and their layout from sensors. In this paper, we aim to interpret indoor scenes from one RGBD image. Our representation encodes the layout of orthogonal walls and the extent of objects, modeled with CAD-like 3D shapes. We parse both the visible and occluded portions of the scene and all observable objects, producing a complete 3D parse. Such a scene interpretation is useful for robotics and visual reasoning, but difficult to produce due to the well-known challenge of segmentation, the high degree of occlusion, and the diversity of objects in indoor scenes. We take a data-driven approach, generating sets of potential object regions, matching to regions in training images, and transferring and aligning associated 3D models while encouraging fit to observations and spatial consistency. We use support inference to aid interpretation and propose a retrieval scheme that uses convolutional neural networks (CNNs) to classify regions and retrieve objects with similar shapes. We demonstrate the performance of our method on our newly annotated NYUd v2 dataset with detailed 3D shapes.
1907.12223
Haisheng Su
Haisheng Su and Xu Zhao and Shuming Liu
Multi-Granularity Fusion Network for Proposal and Activity Localization: Submission to ActivityNet Challenge 2019 Task 1 and Task 2
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This technical report presents an overview of our solution used in the submission to ActivityNet Challenge 2019 Task 1 (\textbf{temporal action proposal generation}) and Task 2 (\textbf{temporal action localization/detection}). Temporal action proposal indicates the temporal intervals containing the actions and plays an important role in temporal action localization. Top-down and bottom-up methods are the two main categories used for proposal generation in the existing literature. In this paper, we devise a novel Multi-Granularity Fusion Network (MGFN) to combine the proposals generated from different frameworks for complementary filtering and confidence re-ranking. Specifically, we consider the diversity comprehensively from multiple perspectives, e.g. the characteristic aspect, the data aspect, the model aspect and the result aspect. Our MGFN achieves the state-of-the-art performance on the temporal action proposal task with 69.85 AUC score and the temporal action localization task with 38.90 mAP on the challenge testing set.
[ { "created": "Mon, 29 Jul 2019 06:10:51 GMT", "version": "v1" } ]
2019-07-30
[ [ "Su", "Haisheng", "" ], [ "Zhao", "Xu", "" ], [ "Liu", "Shuming", "" ] ]
This technical report presents an overview of our solution used in the submission to ActivityNet Challenge 2019 Task 1 (\textbf{temporal action proposal generation}) and Task 2 (\textbf{temporal action localization/detection}). Temporal action proposal indicates the temporal intervals containing the actions and plays an important role in temporal action localization. Top-down and bottom-up methods are the two main categories used for proposal generation in the existing literature. In this paper, we devise a novel Multi-Granularity Fusion Network (MGFN) to combine the proposals generated from different frameworks for complementary filtering and confidence re-ranking. Specifically, we consider the diversity comprehensively from multiple perspectives, e.g. the characteristic aspect, the data aspect, the model aspect and the result aspect. Our MGFN achieves the state-of-the-art performance on the temporal action proposal task with 69.85 AUC score and the temporal action localization task with 38.90 mAP on the challenge testing set.
2302.05675
Chung-Ju Huang
Chung-ju Huang and Leye Wang and Xiao Han
Vertical Federated Knowledge Transfer via Representation Distillation for Healthcare Collaboration Networks
null
null
10.1145/3543507.3583874
null
cs.LG
http://creativecommons.org/licenses/by-sa/4.0/
Collaboration between healthcare institutions can significantly lessen the imbalance in medical resources across various geographic areas. However, directly sharing diagnostic information between institutions is typically not permitted due to the protection of patients' highly sensitive privacy. As a novel privacy-preserving machine learning paradigm, federated learning (FL) makes it possible to maximize the data utility among multiple medical institutions. These feature-enrichment FL techniques are referred to as vertical FL (VFL). Traditional VFL can only benefit the samples shared across parties, which strongly restricts its application scope. In order to improve the information-sharing capability and innovation of various healthcare-related institutions, and thereby establish a next-generation open medical collaboration network, we propose a unified framework for a vertical federated knowledge transfer mechanism (VFedTrans) based on a novel cross-hospital representation distillation component. Specifically, our framework includes three steps. First, shared samples' federated representations are extracted by collaboratively modeling multi-parties' joint features with current efficient vertical federated representation learning methods. Second, for each hospital, we learn a local representation-distillation module, which can transfer the knowledge from shared samples' federated representations to enrich local samples' representations. Finally, each hospital can leverage local samples' representations enriched by the distillation module to boost arbitrary downstream machine learning tasks. The experiments on real-life medical datasets verify the knowledge transfer effectiveness of our framework.
[ { "created": "Sat, 11 Feb 2023 12:15:37 GMT", "version": "v1" } ]
2023-02-14
[ [ "Huang", "Chung-ju", "" ], [ "Wang", "Leye", "" ], [ "Han", "Xiao", "" ] ]
Collaboration between healthcare institutions can significantly lessen the imbalance in medical resources across various geographic areas. However, directly sharing diagnostic information between institutions is typically not permitted due to the protection of patients' highly sensitive privacy. As a novel privacy-preserving machine learning paradigm, federated learning (FL) makes it possible to maximize the data utility among multiple medical institutions. These feature-enrichment FL techniques are referred to as vertical FL (VFL). Traditional VFL can only benefit the samples shared across parties, which strongly restricts its application scope. In order to improve the information-sharing capability and innovation of various healthcare-related institutions, and thereby establish a next-generation open medical collaboration network, we propose a unified framework for a vertical federated knowledge transfer mechanism (VFedTrans) based on a novel cross-hospital representation distillation component. Specifically, our framework includes three steps. First, shared samples' federated representations are extracted by collaboratively modeling multi-parties' joint features with current efficient vertical federated representation learning methods. Second, for each hospital, we learn a local representation-distillation module, which can transfer the knowledge from shared samples' federated representations to enrich local samples' representations. Finally, each hospital can leverage local samples' representations enriched by the distillation module to boost arbitrary downstream machine learning tasks. The experiments on real-life medical datasets verify the knowledge transfer effectiveness of our framework.
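A hedged sketch of the second step, representation distillation, with synthetic stand-ins for the federated representations (the actual VFL representation learning and hospital data are outside this snippet): fit a regressor from local features to federated representations on the shared samples, then use it to enrich local-only samples.

```python
# Representation-distillation sketch: learn local features -> federated reps
# on shared samples, then transfer that mapping to local-only samples.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_shared, n_local, d_local, d_fed = 500, 200, 16, 8

X_shared = rng.normal(size=(n_shared, d_local))   # hospital's own features
proj = rng.normal(size=(d_local, d_fed))
Z_shared = np.tanh(X_shared @ proj)               # stand-in federated representations

# Distill: predict federated representations from local features.
distiller = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
distiller.fit(X_shared, Z_shared)

# Enrich local-only samples (never seen by other parties) with transferred reps.
X_local = rng.normal(size=(n_local, d_local))
X_enriched = np.hstack([X_local, distiller.predict(X_local)])
print(X_enriched.shape)   # (200, 24): original 16 features + 8 distilled ones
```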
2012.02590
Ryan Krueger
Ryan Krueger, Jesse Michael Han and Daniel Selsam
Automatically Building Diagrams for Olympiad Geometry Problems
null
null
null
null
cs.CG cs.MS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a method for automatically building diagrams for olympiad-level geometry problems and implement our approach in a new open-source software tool, the Geometry Model Builder (GMB). Central to our method is a new domain-specific language, the Geometry Model-Building Language (GMBL), for specifying geometry problems along with additional metadata useful for building diagrams. A GMBL program specifies (1) how to parameterize geometric objects (or sets of geometric objects) and initialize these parameterized quantities, (2) which quantities to compute directly from other quantities, and (3) additional constraints to accumulate into a (differentiable) loss function. A GMBL program induces a (usually) tractable numerical optimization problem whose solutions correspond to diagrams of the original problem statement, and that we can solve reliably using gradient descent. Of the 39 geometry problems since 2000 appearing in the International Mathematical Olympiad, 36 can be expressed in our logic and our system can produce diagrams for 94% of them on average. To the best of our knowledge, our method is the first in automated geometry diagram construction to generate models for such complex problems.
[ { "created": "Tue, 1 Dec 2020 05:56:25 GMT", "version": "v1" }, { "created": "Sat, 1 May 2021 00:41:22 GMT", "version": "v2" } ]
2021-05-04
[ [ "Krueger", "Ryan", "" ], [ "Han", "Jesse Michael", "" ], [ "Selsam", "Daniel", "" ] ]
We present a method for automatically building diagrams for olympiad-level geometry problems and implement our approach in a new open-source software tool, the Geometry Model Builder (GMB). Central to our method is a new domain-specific language, the Geometry Model-Building Language (GMBL), for specifying geometry problems along with additional metadata useful for building diagrams. A GMBL program specifies (1) how to parameterize geometric objects (or sets of geometric objects) and initialize these parameterized quantities, (2) which quantities to compute directly from other quantities, and (3) additional constraints to accumulate into a (differentiable) loss function. A GMBL program induces a (usually) tractable numerical optimization problem whose solutions correspond to diagrams of the original problem statement, and that we can solve reliably using gradient descent. Of the 39 geometry problems since 2000 appearing in the International Mathematical Olympiad, 36 can be expressed in our logic and our system can produce diagrams for 94% of them on average. To the best of our knowledge, our method is the first in automated geometry diagram construction to generate models for such complex problems.
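A toy version of the numerical core (my construction, not the GMBL language or the GMB tool): encode a diagram as point coordinates, accumulate squared constraint violations into a loss, and let an optimizer drive it to zero; here BFGS stands in for the plain gradient descent the abstract mentions, and the target is an equilateral triangle with unit sides.

```python
# Diagram building as loss minimization: three 2-D points, three side-length
# constraints, one smooth loss, one optimizer run.
import numpy as np
from scipy.optimize import minimize

def loss(params):
    a, b, c = params.reshape(3, 2)
    side = lambda p, q: np.linalg.norm(p - q)
    return ((side(a, b) - 1.0) ** 2 +
            (side(b, c) - 1.0) ** 2 +
            (side(c, a) - 1.0) ** 2)

rng = np.random.default_rng(0)
res = minimize(loss, rng.normal(size=6), method="BFGS")
pts = res.x.reshape(3, 2)
print("residual loss:", res.fun)
print("side lengths:", [round(float(np.linalg.norm(pts[i] - pts[(i + 1) % 3])), 3)
                        for i in range(3)])
```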
2312.17106
Olivier Moliner
Olivier Moliner, Sangxia Huang and Kalle {\AA}str\"om
Geometry-Biased Transformer for Robust Multi-View 3D Human Pose Reconstruction
Accepted: 18th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2024)
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We address the challenges in estimating 3D human poses from multiple views under occlusion and with limited overlapping views. We approach multi-view, single-person 3D human pose reconstruction as a regression problem and propose a novel encoder-decoder Transformer architecture to estimate 3D poses from multi-view 2D pose sequences. The encoder refines 2D skeleton joints detected across different views and times, fusing multi-view and temporal information through global self-attention. We enhance the encoder by incorporating a geometry-biased attention mechanism, effectively leveraging geometric relationships between views. Additionally, we use detection scores provided by the 2D pose detector to further guide the encoder's attention based on the reliability of the 2D detections. The decoder subsequently regresses the 3D pose sequence from these refined tokens, using pre-defined queries for each joint. To enhance the generalization of our method to unseen scenes and improve resilience to missing joints, we implement strategies including scene centering, synthetic views, and token dropout. We conduct extensive experiments on three benchmark public datasets, Human3.6M, CMU Panoptic and Occlusion-Persons. Our results demonstrate the efficacy of our approach, particularly in occluded scenes and when few views are available, which are traditionally challenging scenarios for triangulation-based methods.
[ { "created": "Thu, 28 Dec 2023 16:30:05 GMT", "version": "v1" } ]
2023-12-29
[ [ "Moliner", "Olivier", "" ], [ "Huang", "Sangxia", "" ], [ "Åström", "Kalle", "" ] ]
We address the challenges in estimating 3D human poses from multiple views under occlusion and with limited overlapping views. We approach multi-view, single-person 3D human pose reconstruction as a regression problem and propose a novel encoder-decoder Transformer architecture to estimate 3D poses from multi-view 2D pose sequences. The encoder refines 2D skeleton joints detected across different views and times, fusing multi-view and temporal information through global self-attention. We enhance the encoder by incorporating a geometry-biased attention mechanism, effectively leveraging geometric relationships between views. Additionally, we use detection scores provided by the 2D pose detector to further guide the encoder's attention based on the reliability of the 2D detections. The decoder subsequently regresses the 3D pose sequence from these refined tokens, using pre-defined queries for each joint. To enhance the generalization of our method to unseen scenes and improve resilience to missing joints, we implement strategies including scene centering, synthetic views, and token dropout. We conduct extensive experiments on three benchmark public datasets, Human3.6M, CMU Panoptic and Occlusion-Persons. Our results demonstrate the efficacy of our approach, particularly in occluded scenes and when few views are available, which are traditionally challenging scenarios for triangulation-based methods.
1110.4278
Marina Sokol
Konstantin Avrachenkov (INRIA Sophia Antipolis), Paulo Gon\c{c}alves (LIP), Alexey Mishenin, Marina Sokol (INRIA Sophia Antipolis)
Generalized Optimization Framework for Graph-based Semi-supervised Learning
null
null
null
RR-7774
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We develop a generalized optimization framework for graph-based semi-supervised learning. The framework gives as particular cases the Standard Laplacian, Normalized Laplacian and PageRank based methods. We have also provided a new probabilistic interpretation based on random walks and characterized the limiting behaviour of the methods. The random walk based interpretation allows us to explain differences between the performances of methods with different smoothing kernels. It appears that the PageRank based method is robust with respect to the choice of the regularization parameter and the labelled data. We illustrate our theoretical results with two realistic datasets, characterizing different challenges: the Les Miserables characters social network and the Wikipedia hyper-link graph. The graph-based semi-supervised learning classifies the Wikipedia articles with very good precision and perfect recall, employing only the information about the hyper-text links.
[ { "created": "Wed, 19 Oct 2011 13:29:32 GMT", "version": "v1" } ]
2011-10-20
[ [ "Avrachenkov", "Konstantin", "", "INRIA Sophia Antipolis" ], [ "Gonçalves", "Paulo", "", "LIP" ], [ "Mishenin", "Alexey", "", "INRIA Sophia Antipolis" ], [ "Sokol", "Marina", "", "INRIA Sophia Antipolis" ] ]
We develop a generalized optimization framework for graph-based semi-supervised learning. The framework gives as particular cases the Standard Laplacian, Normalized Laplacian and PageRank based methods. We have also provided a new probabilistic interpretation based on random walks and characterized the limiting behaviour of the methods. The random walk based interpretation allows us to explain differences between the performances of methods with different smoothing kernels. It appears that the PageRank based method is robust with respect to the choice of the regularization parameter and the labelled data. We illustrate our theoretical results with two realistic datasets, characterizing different challenges: the Les Miserables characters social network and the Wikipedia hyper-link graph. The graph-based semi-supervised learning classifies the Wikipedia articles with very good precision and perfect recall, employing only the information about the hyper-text links.
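A minimal sketch of the PageRank-flavored method in this family (the toy graph and damping factor are my illustrative choices): iterate F <- alpha * P^T F + (1 - alpha) * Y, where P is the random-walk transition matrix and Y holds the labeled seeds.

```python
# PageRank-style label propagation on a toy graph: two triangles joined by
# one edge, with one labeled node per class.
import numpy as np

A = np.array([[0, 1, 1, 0, 0, 0],
              [1, 0, 1, 0, 0, 0],
              [1, 1, 0, 1, 0, 0],
              [0, 0, 1, 0, 1, 1],
              [0, 0, 0, 1, 0, 1],
              [0, 0, 0, 1, 1, 0]], dtype=float)
P = A / A.sum(axis=1, keepdims=True)          # random-walk transition matrix

Y = np.zeros((6, 2))
Y[0, 0] = 1.0                                 # node 0 labeled class 0
Y[5, 1] = 1.0                                 # node 5 labeled class 1

alpha, F = 0.85, np.zeros_like(Y)
for _ in range(100):                          # contraction: converges for alpha < 1
    F = alpha * P.T @ F + (1 - alpha) * Y
print(F.argmax(axis=1))                       # expected: [0 0 0 1 1 1]
```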
2205.00668
Sayed Kamaledin Ghiasi-Shirazi
Sayed Kamaledin Ghiasi-Shirazi
Revisiting Classical Multiclass Linear Discriminant Analysis with a Novel Prototype-based Interpretable Solution
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Linear discriminant analysis (LDA) is a fundamental method for feature extraction and dimensionality reduction. Despite having many variants, classical LDA has its own importance, as it is a keystone in human knowledge about statistical pattern recognition. For a dataset containing C clusters, the classical solution to LDA extracts at most C-1 features. Here, we introduce a novel solution to classical LDA, called LDA++, which yields C features, each interpretable as measuring similarity to one cluster. This novel solution bridges dimensionality reduction and multiclass classification. Specifically, we prove that, for homoscedastic Gaussian data and under some mild conditions, the optimal weights of a linear multiclass classifier also make an optimal solution to LDA. In addition, we show that LDA++ reveals some important new facts about LDA that remarkably change our understanding of classical multiclass LDA after 75 years of its introduction. We provide a complete numerical solution for LDA++ for the cases 1) when the scatter matrices can be constructed explicitly, 2) when constructing the scatter matrices is infeasible, and 3) the kernel extension.
[ { "created": "Mon, 2 May 2022 06:12:42 GMT", "version": "v1" }, { "created": "Fri, 30 Sep 2022 07:45:25 GMT", "version": "v2" } ]
2022-10-03
[ [ "Ghiasi-Shirazi", "Sayed Kamaledin", "" ] ]
Linear discriminant analysis (LDA) is a fundamental method for feature extraction and dimensionality reduction. Despite having many variants, classical LDA has its own importance, as it is a keystone in human knowledge about statistical pattern recognition. For a dataset containing C clusters, the classical solution to LDA extracts at most C-1 features. Here, we introduce a novel solution to classical LDA, called LDA++, which yields C features, each interpretable as measuring similarity to one cluster. This novel solution bridges dimensionality reduction and multiclass classification. Specifically, we prove that, for homoscedastic Gaussian data and under some mild conditions, the optimal weights of a linear multiclass classifier also make an optimal solution to LDA. In addition, we show that LDA++ reveals some important new facts about LDA that remarkably change our understanding of classical multiclass LDA after 75 years of its introduction. We provide a complete numerical solution for LDA++ for the cases 1) when the scatter matrices can be constructed explicitly, 2) when constructing the scatter matrices is infeasible, and 3) the kernel extension.
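For context, the textbook C-1 feature solution that the abstract contrasts with (this is classical LDA, not the proposed LDA++): build within- and between-class scatter matrices and solve the generalized eigenproblem.

```python
# Classical multiclass LDA: top C-1 eigenvectors of Sw^{-1} Sb.
import numpy as np

def lda_directions(X, y):
    classes = np.unique(y)
    mean = X.mean(axis=0)
    d = X.shape[1]
    Sw, Sb = np.zeros((d, d)), np.zeros((d, d))
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)                  # within-class scatter
        diff = (mc - mean)[:, None]
        Sb += len(Xc) * diff @ diff.T                  # between-class scatter
    evals, evecs = np.linalg.eig(np.linalg.solve(Sw, Sb))
    order = np.argsort(evals.real)[::-1]
    return evecs.real[:, order[:len(classes) - 1]]     # at most C-1 directions

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(m, 1.0, (50, 5)) for m in (0.0, 3.0, 6.0)])
y = np.repeat([0, 1, 2], 50)
W = lda_directions(X, y)
print(W.shape)   # (5, 2): C-1 = 2 features for C = 3 clusters
```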
1206.5162
James Hensman
James Hensman, Magnus Rattray and Neil D. Lawrence
Fast Variational Inference in the Conjugate Exponential Family
Accepted at NIPS 2012
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a general method for deriving collapsed variational inference algorithms for probabilistic models in the conjugate exponential family. Our method unifies many existing approaches to collapsed variational inference. Our collapsed variational inference leads to a new lower bound on the marginal likelihood. We exploit the information geometry of the bound to derive much faster optimization methods based on conjugate gradients for these models. Our approach is very general and is easily applied to any model where the mean field update equations have been derived. Empirically we show significant speed-ups for probabilistic models optimized using our bound.
[ { "created": "Fri, 22 Jun 2012 14:36:15 GMT", "version": "v1" }, { "created": "Tue, 4 Dec 2012 19:35:34 GMT", "version": "v2" } ]
2012-12-05
[ [ "Hensman", "James", "" ], [ "Rattray", "Magnus", "" ], [ "Lawrence", "Neil D.", "" ] ]
We present a general method for deriving collapsed variational inference algorithms for probabilistic models in the conjugate exponential family. Our method unifies many existing approaches to collapsed variational inference. Our collapsed variational inference leads to a new lower bound on the marginal likelihood. We exploit the information geometry of the bound to derive much faster optimization methods based on conjugate gradients for these models. Our approach is very general and is easily applied to any model where the mean field update equations have been derived. Empirically we show significant speed-ups for probabilistic models optimized using our bound.
2001.08096
Shaoshan Liu
Bai Li, Shaoshan Liu, Jie Tang, Jean-Luc Gaudiot, Liangliang Zhang, Qi Kong
Autonomous Last-mile Delivery Vehicles in Complex Traffic Environments
6 pages 6 figures, submitted to IEEE Computer
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
E-commerce has evolved with the digital technology revolution over the years. Last-mile logistics service contributes a significant part of the e-commerce experience. In contrast to traditional last-mile logistics services, smart logistics service with autonomous driving technologies provides a promising solution to reduce the delivery cost and to improve efficiency. However, the traffic conditions in complex traffic environments, such as those in China, are more challenging compared to those in well-developed countries. Many types of moving objects (such as pedestrians, bicycles, electric bicycles, and motorcycles) share the road with autonomous vehicles, and their behaviors are not easy to track and predict. This paper introduces a technical solution from JD.com, a leading E-commerce company in China, to autonomous last-mile delivery in complex traffic environments. Concretely, the methodologies in each module of our autonomous vehicles are presented, together with safety guarantee strategies. To date, JD.com has deployed more than 300 self-driving vehicles for trial operations in tens of provinces of China, accumulating 715,819 miles and up to millions of on-road testing hours.
[ { "created": "Wed, 22 Jan 2020 16:00:31 GMT", "version": "v1" }, { "created": "Wed, 25 Mar 2020 13:16:56 GMT", "version": "v2" } ]
2020-03-26
[ [ "Li", "Bai", "" ], [ "Liu", "Shaoshan", "" ], [ "Tang", "Jie", "" ], [ "Gaudiot", "Jean-Luc", "" ], [ "Zhang", "Liangliang", "" ], [ "Kong", "Qi", "" ] ]
E-commerce has evolved with the digital technology revolution over the years. Last-mile logistics service contributes a significant part of the e-commerce experience. In contrast to traditional last-mile logistics services, smart logistics service with autonomous driving technologies provides a promising solution to reduce the delivery cost and to improve efficiency. However, the traffic conditions in complex traffic environments, such as those in China, are more challenging compared to those in well-developed countries. Many types of moving objects (such as pedestrians, bicycles, electric bicycles, and motorcycles) share the road with autonomous vehicles, and their behaviors are not easy to track and predict. This paper introduces a technical solution from JD.com, a leading E-commerce company in China, to autonomous last-mile delivery in complex traffic environments. Concretely, the methodologies in each module of our autonomous vehicles are presented, together with safety guarantee strategies. To date, JD.com has deployed more than 300 self-driving vehicles for trial operations in tens of provinces of China, accumulating 715,819 miles and up to millions of on-road testing hours.
1907.11710
Seemanta Saha
Seemanta Saha, Ismet Burak Kadron, William Eiers, Lucas Bang, and Tevfik Bultan
Attack Synthesis for Strings using Meta-Heuristics
arXiv admin note: substantial text overlap with arXiv:1905.05322
null
null
null
cs.SE cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Information leaks are a significant problem in modern computer systems and string manipulation is prevalent in modern software. We present techniques for automated synthesis of side-channel attacks that recover secret string values based on timing observations on string manipulating code. Our attack synthesis techniques iteratively generate inputs which, when fed to code that accesses the secret, reveal partial information about the secret based on the timing observations, leading to recovery of the secret at the end of the attack sequence. We use symbolic execution to extract path constraints, automata-based model counting to estimate the probability of execution paths, and meta-heuristic methods to maximize information gain based on entropy for synthesizing adaptive attack steps.
[ { "created": "Fri, 26 Jul 2019 07:48:40 GMT", "version": "v1" } ]
2019-07-30
[ [ "Saha", "Seemanta", "" ], [ "Kadron", "Ismet Burak", "" ], [ "Eiers", "William", "" ], [ "Bang", "Lucas", "" ], [ "Bultan", "Tevfik", "" ] ]
Information leaks are a significant problem in modern computer systems and string manipulation is prevalent in modern software. We present techniques for automated synthesis of side-channel attacks that recover secret string values based on timing observations on string manipulating code. Our attack synthesis techniques iteratively generate inputs which, when fed to code that accesses the secret, reveal partial information about the secret based on the timing observations, leading to recovery of the secret at the end of the attack sequence. We use symbolic execution to extract path constraints, automata-based model counting to estimate the probability of execution paths, and meta-heuristic methods to maximize information gain based on entropy for synthesizing adaptive attack steps.
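A tiny model of the entropy-driven attack loop (the string domain and the timing observable are invented for illustration; the paper's symbolic execution and model counting are not simulated): each query's observable reveals the common-prefix length with the secret, the attacker picks the query with maximal expected information gain, prunes inconsistent secrets, and repeats.

```python
# Adaptive attack-step selection by maximizing entropy of the induced
# partition over remaining candidate secrets.
import math, itertools

def prefix_len(guess, s):
    n = 0
    for g, c in zip(guess, s):
        if g != c:
            break
        n += 1
    return n

def entropy_gain(guess, candidates):
    # Entropy of the partition that the observable (prefix length) induces.
    buckets = {}
    for s in candidates:
        buckets.setdefault(prefix_len(guess, s), []).append(s)
    total = len(candidates)
    return -sum(len(b) / total * math.log2(len(b) / total)
                for b in buckets.values())

candidates = ["".join(p) for p in itertools.product("ab", repeat=4)]
secret = "abba"
while len(candidates) > 1:
    guess = max(candidates, key=lambda g: entropy_gain(g, candidates))
    obs = prefix_len(guess, secret)          # stand-in for a timing observation
    candidates = [s for s in candidates if prefix_len(guess, s) == obs]
    print(f"queried {guess!r}, observed {obs}, {len(candidates)} secrets left")
print("recovered:", candidates[0])
```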
2311.14337
Zimian Wei
Zimian Wei, Hengyue Pan, Lujun Li, Peijie Dong, Zhiliang Tian, Xin Niu, Dongsheng Li
TVT: Training-Free Vision Transformer Search on Tiny Datasets
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Training-free Vision Transformer (ViT) architecture search is presented to search for a better ViT with zero-cost proxies. While ViTs achieve significant distillation gains from CNN teacher models on small datasets, the current zero-cost proxies in ViTs do not generalize well to the distillation training paradigm according to our experimental observations. In this paper, for the first time, we investigate how to search in a training-free manner with the help of teacher models and devise an effective Training-free ViT (TVT) search framework. Firstly, we observe that the similarity of attention maps between ViT and ConvNet teachers notably affects distillation accuracy. Thus, we present a teacher-aware metric conditioned on the feature attention relations between teacher and student. Additionally, TVT employs the L2-norm of the student's weights as the student-capability metric to improve ranking consistency. Finally, TVT searches for the best ViT for distilling with ConvNet teachers via our teacher-aware metric and student-capability metric, resulting in impressive gains in efficiency and effectiveness. Extensive experiments on various tiny datasets and search spaces show that our TVT outperforms state-of-the-art training-free search methods. The code will be released.
[ { "created": "Fri, 24 Nov 2023 08:24:31 GMT", "version": "v1" } ]
2023-11-27
[ [ "Wei", "Zimian", "" ], [ "Pan", "Hengyue", "" ], [ "Li", "Lujun", "" ], [ "Dong", "Peijie", "" ], [ "Tian", "Zhiliang", "" ], [ "Niu", "Xin", "" ], [ "Li", "Dongsheng", "" ] ]
Training-free Vision Transformer (ViT) architecture search is presented to search for a better ViT with zero-cost proxies. While ViTs achieve significant distillation gains from CNN teacher models on small datasets, the current zero-cost proxies in ViTs do not generalize well to the distillation training paradigm according to our experimental observations. In this paper, for the first time, we investigate how to search in a training-free manner with the help of teacher models and devise an effective Training-free ViT (TVT) search framework. Firstly, we observe that the similarity of attention maps between ViT and ConvNet teachers notably affects distillation accuracy. Thus, we present a teacher-aware metric conditioned on the feature attention relations between teacher and student. Additionally, TVT employs the L2-norm of the student's weights as the student-capability metric to improve ranking consistency. Finally, TVT searches for the best ViT for distilling with ConvNet teachers via our teacher-aware metric and student-capability metric, resulting in impressive gains in efficiency and effectiveness. Extensive experiments on various tiny datasets and search spaces show that our TVT outperforms state-of-the-art training-free search methods. The code will be released.
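A back-of-the-envelope rendering of the two proxies (the exact formulas and the combination weight are assumptions, not the paper's definitions): score a candidate by the cosine similarity between its attention map and the ConvNet teacher's spatial attention, plus the L2 norm of its weights as a capability term.

```python
# Training-free candidate ranking sketch: teacher-aware attention similarity
# plus a weight-norm capability proxy, no gradient steps required.
import numpy as np

def teacher_aware_score(student_attn, teacher_attn):
    s = student_attn.ravel() / np.linalg.norm(student_attn)
    t = teacher_attn.ravel() / np.linalg.norm(teacher_attn)
    return float(s @ t)

def capability_score(weights):
    return float(np.sqrt(sum(np.sum(w ** 2) for w in weights)))

rng = np.random.default_rng(0)
teacher_attn = rng.random((14, 14))
candidates = [{"attn": rng.random((14, 14)),
               "weights": [rng.normal(0, 1, (64, 64)) for _ in range(4)]}
              for _ in range(5)]
ranked = sorted(
    candidates,
    key=lambda c: teacher_aware_score(c["attn"], teacher_attn)
                  + 0.01 * capability_score(c["weights"]),  # ad-hoc weighting
    reverse=True)
print("best candidate score:",
      teacher_aware_score(ranked[0]["attn"], teacher_attn))
```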
2001.03243
Alon Kipnis
Alon Kipnis, Galen Reeves
Gaussian Approximation of Quantization Error for Estimation from Compressed Data
null
IEEE Transactions on Information Theory (Volume: 67, Issue: 8, Aug. 2021)
10.1109/TIT.2021.3083271
null
cs.IT math.IT stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider the distributional connection between the lossy compressed representation of a high-dimensional signal $X$ using a random spherical code and the observation of $X$ under an additive white Gaussian noise (AWGN). We show that the Wasserstein distance between a bitrate-$R$ compressed version of $X$ and its observation under an AWGN-channel of signal-to-noise ratio $2^{2R}-1$ is sub-linear in the problem dimension. We utilize this fact to connect the risk of an estimator based on an AWGN-corrupted version of $X$ to the risk attained by the same estimator when fed with its bitrate-$R$ quantized version. We demonstrate the usefulness of this connection by deriving various novel results for inference problems under compression constraints, including minimax estimation, sparse regression, compressed sensing, and the universality of linear estimation in remote source coding.
[ { "created": "Thu, 9 Jan 2020 22:10:10 GMT", "version": "v1" }, { "created": "Thu, 12 Mar 2020 00:15:45 GMT", "version": "v2" }, { "created": "Sun, 12 Dec 2021 06:42:10 GMT", "version": "v3" } ]
2021-12-14
[ [ "Kipnis", "Alon", "" ], [ "Reeves", "Galen", "" ] ]
We consider the distributional connection between the lossy compressed representation of a high-dimensional signal $X$ using a random spherical code and the observation of $X$ under an additive white Gaussian noise (AWGN). We show that the Wasserstein distance between a bitrate-$R$ compressed version of $X$ and its observation under an AWGN-channel of signal-to-noise ratio $2^{2R}-1$ is sub-linear in the problem dimension. We utilize this fact to connect the risk of an estimator based on an AWGN-corrupted version of $X$ to the risk attained by the same estimator when fed with its bitrate-$R$ quantized version. We demonstrate the usefulness of this connection by deriving various novel results for inference problems under compression constraints, including minimax estimation, sparse regression, compressed sensing, and the universality of linear estimation in remote source coding.
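The correspondence in the abstract lends itself to a quick numerical check (the compression side is not simulated; only the AWGN surrogate is): with a unit-variance Gaussian source at rate R = 1, the Wiener estimator fed through an AWGN channel of SNR 2^{2R} - 1 attains risk 2^{-2R}, matching the Gaussian distortion-rate function.

```python
# AWGN surrogate for bitrate-R compressed data: channel SNR = 2^{2R} - 1.
import numpy as np

rng = np.random.default_rng(0)
d, R = 10_000, 1.0                         # dimension and bits per sample
snr = 2 ** (2 * R) - 1                     # SNR of the surrogate AWGN channel

x = rng.normal(0, 1, d)                    # unit-variance Gaussian source
sigma = np.sqrt(1.0 / snr)                 # noise level so that SNR = 1/sigma^2
y = x + sigma * rng.normal(0, 1, d)        # AWGN surrogate for quantized data

x_hat = y / (1.0 + sigma ** 2)             # Wiener (MMSE) linear estimator
print("empirical risk:", np.mean((x_hat - x) ** 2))
print("theoretical   :", sigma ** 2 / (1 + sigma ** 2), "= 2^(-2R) =", 2.0 ** (-2 * R))
```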
2203.08327
Daniel Gonzalez Cedre
William Theisen, Daniel Gonzalez Cedre, Zachariah Carmichael, Daniel Moreira, Tim Weninger, and Walter Scheirer
Motif Mining: Finding and Summarizing Remixed Image Content
41 pages, 21 figures
null
null
null
cs.CV cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
On the internet, images are no longer static; they have become dynamic content. Thanks to the availability of smartphones with cameras and easy-to-use editing software, images can be remixed (i.e., redacted, edited, and recombined with other content) on-the-fly and with a world-wide audience that can repeat the process. From digital art to memes, the evolution of images through time is now an important topic of study for digital humanists, social scientists, and media forensics specialists. However, because typical data sets in computer vision are composed of static content, the development of automated algorithms to analyze remixed content has been limited. In this paper, we introduce the idea of Motif Mining - the process of finding and summarizing remixed image content in large collections of unlabeled and unsorted data. In this paper, this idea is formalized and a reference implementation is introduced. Experiments are conducted on three meme-style data sets, including a newly collected set associated with the information war in the Russo-Ukrainian conflict. The proposed motif mining approach is able to identify related remixed content that, when compared to similar approaches, more closely aligns with the preferences and expectations of human observers.
[ { "created": "Wed, 16 Mar 2022 00:14:19 GMT", "version": "v1" }, { "created": "Thu, 17 Mar 2022 14:54:55 GMT", "version": "v2" } ]
2022-03-18
[ [ "Theisen", "William", "" ], [ "Cedre", "Daniel Gonzalez", "" ], [ "Carmichael", "Zachariah", "" ], [ "Moreira", "Daniel", "" ], [ "Weninger", "Tim", "" ], [ "Scheirer", "Walter", "" ] ]
On the internet, images are no longer static; they have become dynamic content. Thanks to the availability of smartphones with cameras and easy-to-use editing software, images can be remixed (i.e., redacted, edited, and recombined with other content) on-the-fly and with a world-wide audience that can repeat the process. From digital art to memes, the evolution of images through time is now an important topic of study for digital humanists, social scientists, and media forensics specialists. However, because typical data sets in computer vision are composed of static content, the development of automated algorithms to analyze remixed content has been limited. In this paper, we introduce the idea of Motif Mining - the process of finding and summarizing remixed image content in large collections of unlabeled and unsorted data. Here, this idea is formalized and a reference implementation is introduced. Experiments are conducted on three meme-style data sets, including a newly collected set associated with the information war in the Russo-Ukrainian conflict. The proposed motif mining approach is able to identify related remixed content that, when compared to similar approaches, more closely aligns with the preferences and expectations of human observers.
2301.01531
Razvan Caramalau
Razvan Caramalau, Binod Bhattarai, Danail Stoyanov, Tae-Kyun Kim
MoBYv2AL: Self-supervised Active Learning for Image Classification
Poster accepted at BMVC 2022
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Active learning (AL) has recently gained popularity for deep learning (DL) models. This is due to efficient and informative sampling, especially when the learner requires large-scale labelled datasets. Commonly, the sampling and training happen in stages while more batches are added. One main bottleneck in this strategy is the narrow representation learned by the model, which affects the overall AL selection. We present MoBYv2AL, a novel self-supervised active learning framework for image classification. Our contribution lies in lifting MoBY, one of the most successful self-supervised learning algorithms, to the AL pipeline. Thus, we add the downstream task-aware objective function and optimize it jointly with the contrastive loss. Further, we derive a data-distribution-based selection function for labelling new examples. Finally, we test and study the robustness and performance of our pipeline on image classification tasks. We achieve state-of-the-art results when compared to recent AL methods. Code available: https://github.com/razvancaramalau/MoBYv2AL
[ { "created": "Wed, 4 Jan 2023 10:52:02 GMT", "version": "v1" } ]
2023-01-05
[ [ "Caramalau", "Razvan", "" ], [ "Bhattarai", "Binod", "" ], [ "Stoyanov", "Danail", "" ], [ "Kim", "Tae-Kyun", "" ] ]
Active learning (AL) has recently gained popularity for deep learning (DL) models. This is due to efficient and informative sampling, especially when the learner requires large-scale labelled datasets. Commonly, the sampling and training happen in stages while more batches are added. One main bottleneck in this strategy is the narrow representation learned by the model, which affects the overall AL selection. We present MoBYv2AL, a novel self-supervised active learning framework for image classification. Our contribution lies in lifting MoBY, one of the most successful self-supervised learning algorithms, to the AL pipeline. Thus, we add the downstream task-aware objective function and optimize it jointly with the contrastive loss. Further, we derive a data-distribution-based selection function for labelling new examples. Finally, we test and study the robustness and performance of our pipeline on image classification tasks. We achieve state-of-the-art results when compared to recent AL methods. Code available: https://github.com/razvancaramalau/MoBYv2AL
1412.7689
Roshan Ragel
Akmal Jahan Mac and Roshan G Ragel
Locating Tables in Scanned Documents for Reconstructing and Republishing (ICIAfS14)
The 7th International Conference on Information and Automation for Sustainability (ICIAfS) 2014
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The pool of knowledge available to mankind depends on the sources of learning resources, which vary from ancient printed documents to present-day electronic material. The rapid conversion of material available in traditional libraries to digital form requires a significant amount of work if we are to keep the format and look of the electronic documents the same as their printed counterparts. Most printed documents contain not only characters and their formatting but also associated non-text objects such as tables, charts, and graphical objects. Detecting these objects and preserving the format of their contents during reproduction is challenging. To address this issue, we propose an algorithm that uses local thresholds for word space and line height to locate and extract all categories of tables from scanned document images. From experiments performed on 298 documents, we conclude that our algorithm has an overall accuracy of about 75% in detecting tables from scanned document images. Since the algorithm does not completely depend on rule lines, it can detect all categories of tables in a range of scanned documents with different font types, styles, and sizes and extract their formatting features. Moreover, the algorithm can be applied to locate tables in multi-column layouts with a small modification to the layout analysis. Treating tables with their existing formatting features will greatly aid the reproduction of printed documents for reprinting and updating purposes.
[ { "created": "Wed, 24 Dec 2014 15:29:13 GMT", "version": "v1" } ]
2014-12-25
[ [ "Mac", "Akmal Jahan", "" ], [ "Ragel", "Roshan G", "" ] ]
The pool of knowledge available to mankind depends on the sources of learning resources, which vary from ancient printed documents to present-day electronic material. The rapid conversion of material available in traditional libraries to digital form requires a significant amount of work if we are to keep the format and look of the electronic documents the same as their printed counterparts. Most printed documents contain not only characters and their formatting but also associated non-text objects such as tables, charts, and graphical objects. Detecting these objects and preserving the format of their contents during reproduction is challenging. To address this issue, we propose an algorithm that uses local thresholds for word space and line height to locate and extract all categories of tables from scanned document images. From experiments performed on 298 documents, we conclude that our algorithm has an overall accuracy of about 75% in detecting tables from scanned document images. Since the algorithm does not completely depend on rule lines, it can detect all categories of tables in a range of scanned documents with different font types, styles, and sizes and extract their formatting features. Moreover, the algorithm can be applied to locate tables in multi-column layouts with a small modification to the layout analysis. Treating tables with their existing formatting features will greatly aid the reproduction of printed documents for reprinting and updating purposes.
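To make the word-space idea concrete, here is an illustrative sketch (our reading of the approach, not the authors' implementation): a text line is flagged as potentially tabular when it contains several horizontal gaps much wider than the page's typical word space.

```python
def looks_tabular(word_boxes, page_word_space, gap_factor=2.0):
    """word_boxes: (x_start, x_end) for each word on one text line."""
    xs = sorted(word_boxes)
    gaps = [b[0] - a[1] for a, b in zip(xs, xs[1:])]
    threshold = gap_factor * page_word_space      # the word-space threshold
    return sum(g > threshold for g in gaps) >= 2  # several wide gaps -> columns

# page_word_space would be estimated from ordinary prose lines (say 8 px):
print(looks_tabular([(0, 40), (140, 180), (300, 350)], page_word_space=8))  # True
print(looks_tabular([(0, 40), (48, 90), (98, 150)], page_word_space=8))     # False
```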
2104.13638
Manu Joseph
Manu Joseph
PyTorch Tabular: A Framework for Deep Learning with Tabular Data
This work has been submitted to the IEEE for possible publication
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
In spite of showing unreasonable effectiveness in modalities like Text and Image, Deep Learning has always lagged behind Gradient Boosting on tabular data - both in popularity and performance. But recently there have been newer models created specifically for tabular data, which are pushing the performance bar. Popularity remains a challenge, however, because there is no easy, ready-to-use library like scikit-learn for deep learning. PyTorch Tabular is a new deep learning library that makes working with deep learning and tabular data easy and fast. It is built on top of PyTorch and PyTorch Lightning and works on pandas dataframes directly. Many SOTA models like NODE and TabNet are already integrated and implemented in the library with a unified API. PyTorch Tabular is designed to be easily extensible for researchers, simple for practitioners, and robust in industrial deployments.
[ { "created": "Wed, 28 Apr 2021 08:50:08 GMT", "version": "v1" } ]
2021-04-29
[ [ "Joseph", "Manu", "" ] ]
In spite of showing unreasonable effectiveness in modalities like Text and Image, Deep Learning has always lagged behind Gradient Boosting on tabular data - both in popularity and performance. But recently there have been newer models created specifically for tabular data, which are pushing the performance bar. Popularity remains a challenge, however, because there is no easy, ready-to-use library like scikit-learn for deep learning. PyTorch Tabular is a new deep learning library that makes working with deep learning and tabular data easy and fast. It is built on top of PyTorch and PyTorch Lightning and works on pandas dataframes directly. Many SOTA models like NODE and TabNet are already integrated and implemented in the library with a unified API. PyTorch Tabular is designed to be easily extensible for researchers, simple for practitioners, and robust in industrial deployments.
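As a usage illustration, the following sketch follows the high-level API documented in the library's README around the time of this paper; exact argument names may differ across versions, and the dataframe and column names are placeholders.

```python
# A minimal PyTorch Tabular usage sketch (treat exact signatures as
# version-dependent; column names here are hypothetical).
import pandas as pd
from pytorch_tabular import TabularModel
from pytorch_tabular.config import DataConfig, OptimizerConfig, TrainerConfig
from pytorch_tabular.models import CategoryEmbeddingModelConfig

train_df = pd.read_csv("train.csv")  # placeholder data with a "target" column

data_config = DataConfig(
    target=["target"],
    continuous_cols=["num_a", "num_b"],   # placeholder column names
    categorical_cols=["cat_a"],
)
tabular_model = TabularModel(
    data_config=data_config,
    model_config=CategoryEmbeddingModelConfig(task="classification"),
    optimizer_config=OptimizerConfig(),
    trainer_config=TrainerConfig(max_epochs=20),
)
tabular_model.fit(train=train_df)        # trains directly on the dataframe
preds = tabular_model.predict(train_df)  # returns a dataframe of predictions
```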
1203.0443
Pierre de Leusse
Pierre de Leusse, Panos Periorellis, Paul Watson, Andreas Maierhofer
Secure & Rapid Composition of Infrastructure Services in the Cloud
null
SENSORCOMM '08. Second International Conference on , vol., no., pp.770-775, 25-31 Aug. 2008
10.1109/SENSORCOMM.2008.130
null
cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A fundamental ambition of grid and distributed systems is to be capable of sustaining evolution and allowing for adaptability ((F. Losavio et al., 2002), (S. Radhakrishnan, 2005)). Furthermore, as the complexity and sophistication of these structures increase, so does the need for adaptability of each component. One of the primary benefits of service-oriented architecture (SOA) is the ability to compose applications, processes, or more complex services from other services, which increases the capacity for adaptation. This document proposes a novel infrastructure composition model that aims at increasing the adaptability of the capabilities exposed through it by dynamically managing their non-functional requirements.
[ { "created": "Fri, 2 Mar 2012 12:25:09 GMT", "version": "v1" } ]
2012-03-05
[ [ "de Leusse", "Pierre", "" ], [ "Periorellis", "Panos", "" ], [ "Watson", "Paul", "" ], [ "Maierhofer", "Andreas", "" ] ]
A fundamental ambition of grid and distributed systems is to be capable of sustaining evolution and allowing for adaptability ((F. Losavio et al., 2002), (S. Radhakrishnan, 2005)). Furthermore, as the complexity and sophistication of these structures increase, so does the need for adaptability of each component. One of the primary benefits of service-oriented architecture (SOA) is the ability to compose applications, processes, or more complex services from other services, which increases the capacity for adaptation. This document proposes a novel infrastructure composition model that aims at increasing the adaptability of the capabilities exposed through it by dynamically managing their non-functional requirements.
2308.10401
Shengzhi Wang
Xiangyu Chu+, Shengzhi Wang+, Minjian Feng, Jiaxi Zheng, Yuxuan Zhao, Jing Huang, and K. W. Samuel Au
Model-Free Large-Scale Cloth Spreading With Mobile Manipulation: Initial Feasibility Study
6 pages, 6 figures, submit to CASE2023
2023 IEEE International Conference on Automation Science and Engineering (CASE)
null
null
cs.RO cs.SY eess.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cloth manipulation is common in domestic and service tasks, and most studies use fixed-base manipulators to manipulate objects whose sizes are relatively small with respect to the manipulators' workspace, such as towels, shirts, and rags. In contrast, manipulation of large-scale cloth, such as bed making and tablecloth spreading, poses additional challenges of reachability and manipulation control. To address them, this paper presents a novel framework for spreading large-scale cloth with a single-arm mobile manipulator, which resolves the reachability issue, as an initial feasibility study. On the manipulation control side, without modeling the highly deformable cloth, a vision-based manipulation control scheme is applied, based on an online-updated Jacobian matrix mapping selected feature points to the end-effector motion. To coordinate the control of the manipulator and mobile platform, Behavior Trees (BTs) are used because of their modularity. Finally, experiments are conducted, including validation of the model-free manipulation control for cloth spreading under different conditions and of the full large-scale cloth-spreading framework. The experimental results demonstrate the feasibility of the large-scale cloth-spreading task with a single-arm mobile manipulator and the model-free deformation controller.
[ { "created": "Mon, 21 Aug 2023 00:30:43 GMT", "version": "v1" } ]
2023-08-22
[ [ "Chu+", "Xiangyu", "" ], [ "Wang+", "Shengzhi", "" ], [ "Feng", "Minjian", "" ], [ "Zheng", "Jiaxi", "" ], [ "Zhao", "Yuxuan", "" ], [ "Huang", "Jing", "" ], [ "Au", "K. W. Samuel", "" ] ]
Cloth manipulation is common in domestic and service tasks, and most studies use fixed-base manipulators to manipulate objects whose sizes are relatively small with respect to the manipulators' workspace, such as towels, shirts, and rags. In contrast, manipulation of large-scale cloth, such as bed making and tablecloth spreading, poses additional challenges of reachability and manipulation control. To address them, this paper presents a novel framework for spreading large-scale cloth with a single-arm mobile manipulator, which resolves the reachability issue, as an initial feasibility study. On the manipulation control side, without modeling the highly deformable cloth, a vision-based manipulation control scheme is applied, based on an online-updated Jacobian matrix mapping selected feature points to the end-effector motion. To coordinate the control of the manipulator and mobile platform, Behavior Trees (BTs) are used because of their modularity. Finally, experiments are conducted, including validation of the model-free manipulation control for cloth spreading under different conditions and of the full large-scale cloth-spreading framework. The experimental results demonstrate the feasibility of the large-scale cloth-spreading task with a single-arm mobile manipulator and the model-free deformation controller.
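The abstract does not spell out its online Jacobian update. One standard choice in uncalibrated visual servoing is a Broyden rank-one correction, sketched below as a stand-in (our assumption, not necessarily the authors' exact scheme): J maps commanded end-effector motion dq to observed feature-point motion dy.

```python
# Broyden-style online image-Jacobian update (illustrative stand-in).
import numpy as np

def broyden_update(J, dq, dy, lam=1.0):
    """Rank-one correction so that the updated J maps dq to dy."""
    dq = dq.reshape(-1, 1)
    dy = dy.reshape(-1, 1)
    denom = (dq.T @ dq).item()
    return J + lam * (dy - J @ dq) @ dq.T / denom

rng = np.random.default_rng(3)
J = rng.normal(size=(8, 6))       # 4 feature points (x, y) vs 6-DoF motion
dq = rng.normal(size=6) * 1e-2    # small commanded motion
dy_observed = rng.normal(size=8) * 1e-2
J = broyden_update(J, dq, dy_observed)
print(np.allclose(J @ dq, dy_observed))  # the update reproduces the pair
```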
1410.2131
Waqas Aman
Waqas Aman
A Framework for Analysis and Comparison of Dynamic Malware Analysis Tools
12 pages
International Journal of Network Security & Its Applications 09/2014; 6(5):63-74
10.5121/ijnsa.2014.6505
null
cs.CR
http://creativecommons.org/licenses/publicdomain/
Malware writers have employed various obfuscation and polymorphism techniques to thwart static analysis approaches and bypass antivirus tools. Dynamic analysis techniques, however, have essentially overcome these deceits by observing the actual behaviour of the code during execution. In this regard, various methods, techniques, and tools have been proposed. However, because of the diverse concepts and strategies used in the implementation of these methods and tools, security researchers and malware analysts find it difficult to select the optimum tool to investigate the behaviour of malware and to contain the associated risk for their study. Focusing on two dynamic analysis techniques, Function Call Monitoring and Information Flow Tracking, this paper presents a comparison framework for dynamic malware analysis tools. The framework will assist researchers and analysts in recognizing a tool's implementation strategy, analysis approach, system-wide analysis support, and overall handling of binaries, helping them select a suitable and effective tool for their study and analysis.
[ { "created": "Wed, 8 Oct 2014 14:19:16 GMT", "version": "v1" } ]
2014-10-09
[ [ "Aman", "Waqas", "" ] ]
Malware writers have employed various obfuscation and polymorphism techniques to thwart static analysis approaches and bypass antivirus tools. Dynamic analysis techniques, however, have essentially overcome these deceits by observing the actual behaviour of the code during execution. In this regard, various methods, techniques, and tools have been proposed. However, because of the diverse concepts and strategies used in the implementation of these methods and tools, security researchers and malware analysts find it difficult to select the optimum tool to investigate the behaviour of malware and to contain the associated risk for their study. Focusing on two dynamic analysis techniques, Function Call Monitoring and Information Flow Tracking, this paper presents a comparison framework for dynamic malware analysis tools. The framework will assist researchers and analysts in recognizing a tool's implementation strategy, analysis approach, system-wide analysis support, and overall handling of binaries, helping them select a suitable and effective tool for their study and analysis.
1408.1480
Adnan Darwiche
Adnan Darwiche, Gregory M. Provan
Query DAGs: A Practical Paradigm for Implementing Belief Network Inference
Appears in Proceedings of the Twelfth Conference on Uncertainty in Artificial Intelligence (UAI1996)
null
null
UAI-P-1996-PG-203-210
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We describe a new paradigm for implementing inference in belief networks, which relies on compiling a belief network into an arithmetic expression called a Query DAG (Q-DAG). Each non-leaf node of a Q-DAG represents a numeric operation, a number, or a symbol for evidence. Each leaf node of a Q-DAG represents the answer to a network query, that is, the probability of some event of interest. It appears that Q-DAGs can be generated using any of the algorithms for exact inference in belief networks --- we show how they can be generated using clustering and conditioning algorithms. The time and space complexity of a Q-DAG generation algorithm is no worse than the time complexity of the inference algorithm on which it is based; that of a Q-DAG on-line evaluation algorithm is linear in the size of the Q-DAG, and such inference amounts to a standard evaluation of the arithmetic expression it represents. The main value of Q-DAGs is in reducing the software and hardware resources required to utilize belief networks in on-line, real-world applications. The proposed framework also facilitates the development of on-line inference on different software and hardware platforms, given the simplicity of the Q-DAG evaluation algorithm. This paper describes this new paradigm for probabilistic inference, explaining how it works, its uses, and outlines some of the research directions that it leads to.
[ { "created": "Thu, 7 Aug 2014 06:22:51 GMT", "version": "v1" } ]
2014-08-08
[ [ "Darwiche", "Adnan", "" ], [ "Provan", "Gregory M.", "" ] ]
We describe a new paradigm for implementing inference in belief networks, which relies on compiling a belief network into an arithmetic expression called a Query DAG (Q-DAG). Each non-leaf node of a Q-DAG represents a numeric operation, a number, or a symbol for evidence. Each leaf node of a Q-DAG represents the answer to a network query, that is, the probability of some event of interest. It appears that Q-DAGs can be generated using any of the algorithms for exact inference in belief networks --- we show how they can be generated using clustering and conditioning algorithms. The time and space complexity of a Q-DAG generation algorithm is no worse than the time complexity of the inference algorithm on which it is based; that of a Q-DAG on-line evaluation algorithm is linear in the size of the Q-DAG, and such inference amounts to a standard evaluation of the arithmetic expression it represents. The main value of Q-DAGs is in reducing the software and hardware resources required to utilize belief networks in on-line, real-world applications. The proposed framework also facilitates the development of on-line inference on different software and hardware platforms, given the simplicity of the Q-DAG evaluation algorithm. This paper describes this new paradigm for probabilistic inference, explaining how it works, its uses, and outlines some of the research directions that it leads to.
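A toy rendering of the idea (ours, not the paper's system): a Q-DAG is an arithmetic expression whose evidence symbols are bound at query time, so on-line inference reduces to expression evaluation. The node encoding and numbers below are purely illustrative.

```python
# Toy Q-DAG evaluation: internal nodes are numeric operations, evidence
# symbols are bound on-line. A real implementation evaluates the shared DAG
# once with memoization (linear in its size); plain recursion is used here
# only for brevity.
def evaluate(node, evidence):
    kind = node[0]
    if kind == "const":          # a fixed number (e.g., a CPT entry)
        return node[1]
    if kind == "evidence":       # indicator: 1 if the observed value matches
        _, var, val = node
        return 1.0 if evidence.get(var) == val else 0.0
    if kind == "+":
        return sum(evaluate(child, evidence) for child in node[1])
    if kind == "*":
        out = 1.0
        for child in node[1]:
            out *= evaluate(child, evidence)
        return out
    raise ValueError(f"unknown node kind: {kind}")

# Illustrative query answer that depends on evidence about variable B:
qdag = ("+", [
    ("*", [("const", 0.3), ("evidence", "B", True)]),
    ("*", [("const", 0.6), ("evidence", "B", False)]),
])
print(evaluate(qdag, {"B": True}))   # -> 0.3
print(evaluate(qdag, {"B": False}))  # -> 0.6
```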
2303.16975
Rishi Hazra
Rishi Hazra, Brian Chen, Akshara Rai, Nitin Kamra, Ruta Desai
EgoTV: Egocentric Task Verification from Natural Language Task Descriptions
Accepted at ICCV 2023
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
To enable progress towards egocentric agents capable of understanding everyday tasks specified in natural language, we propose a benchmark and a synthetic dataset called Egocentric Task Verification (EgoTV). The goal in EgoTV is to verify the execution of tasks from egocentric videos based on the natural language description of these tasks. EgoTV contains pairs of videos and their task descriptions for multi-step tasks -- these tasks contain multiple sub-task decompositions, state changes, object interactions, and sub-task ordering constraints. In addition, EgoTV also provides abstracted task descriptions that contain only partial details about ways to accomplish a task. Consequently, EgoTV requires causal, temporal, and compositional reasoning of video and language modalities, which is missing in existing datasets. We also find that existing vision-language models struggle at the all-round reasoning needed for task verification in EgoTV. Inspired by the needs of EgoTV, we propose a novel Neuro-Symbolic Grounding (NSG) approach that leverages symbolic representations to capture the compositional and temporal structure of tasks. We demonstrate NSG's capability towards task tracking and verification on our EgoTV dataset and a real-world dataset derived from CrossTask (CTV). We open-source the EgoTV and CTV datasets and the NSG model for future research on egocentric assistive agents.
[ { "created": "Wed, 29 Mar 2023 19:16:49 GMT", "version": "v1" }, { "created": "Tue, 4 Apr 2023 18:41:24 GMT", "version": "v2" }, { "created": "Mon, 17 Apr 2023 18:04:27 GMT", "version": "v3" }, { "created": "Tue, 2 May 2023 15:26:28 GMT", "version": "v4" }, { "created": "Mon, 25 Sep 2023 19:20:58 GMT", "version": "v5" } ]
2023-09-27
[ [ "Hazra", "Rishi", "" ], [ "Chen", "Brian", "" ], [ "Rai", "Akshara", "" ], [ "Kamra", "Nitin", "" ], [ "Desai", "Ruta", "" ] ]
To enable progress towards egocentric agents capable of understanding everyday tasks specified in natural language, we propose a benchmark and a synthetic dataset called Egocentric Task Verification (EgoTV). The goal in EgoTV is to verify the execution of tasks from egocentric videos based on the natural language description of these tasks. EgoTV contains pairs of videos and their task descriptions for multi-step tasks -- these tasks contain multiple sub-task decompositions, state changes, object interactions, and sub-task ordering constraints. In addition, EgoTV also provides abstracted task descriptions that contain only partial details about ways to accomplish a task. Consequently, EgoTV requires causal, temporal, and compositional reasoning of video and language modalities, which is missing in existing datasets. We also find that existing vision-language models struggle at the all-round reasoning needed for task verification in EgoTV. Inspired by the needs of EgoTV, we propose a novel Neuro-Symbolic Grounding (NSG) approach that leverages symbolic representations to capture the compositional and temporal structure of tasks. We demonstrate NSG's capability towards task tracking and verification on our EgoTV dataset and a real-world dataset derived from CrossTask (CTV). We open-source the EgoTV and CTV datasets and the NSG model for future research on egocentric assistive agents.
1401.2010
Samuele Giraudo
Samuele Giraudo, Jean-Gabriel Luque, Ludovic Mignot and Florent Nicart
Operads, quasiorders, and regular languages
32 pages
Advances in Applied Mathematics, 75, 56--93, 2016
null
null
cs.FL math.CO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We generalize the construction of multitildes with the aim of providing multitilde operators for regular languages. We show that the underlying algebraic structure involves the action of some operads. An operad is an algebraic structure that mimics the composition of functions. The involved operads are described in terms of combinatorial objects. These operads are obtained from more primitive objects, namely precompositions, whose algebraic counterparts are investigated. One of these operads acts faithfully on languages, in the sense that two different operators act in two different ways.
[ { "created": "Thu, 9 Jan 2014 14:19:52 GMT", "version": "v1" }, { "created": "Thu, 21 Jan 2016 07:26:14 GMT", "version": "v2" } ]
2016-01-22
[ [ "Giraudo", "Samuele", "" ], [ "Luque", "Jean-Gabriel", "" ], [ "Mignot", "Ludovic", "" ], [ "Nicart", "Florent", "" ] ]
We generalize the construction of multitildes with the aim of providing multitilde operators for regular languages. We show that the underlying algebraic structure involves the action of some operads. An operad is an algebraic structure that mimics the composition of functions. The involved operads are described in terms of combinatorial objects. These operads are obtained from more primitive objects, namely precompositions, whose algebraic counterparts are investigated. One of these operads acts faithfully on languages, in the sense that two different operators act in two different ways.
1501.04505
Kaihua Zhang
Kaihua Zhang, Qingshan Liu, Yi Wu, Ming-Hsuan Yang
Robust Visual Tracking via Convolutional Networks
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
Deep networks have been successfully applied to visual tracking by learning a generic representation offline from numerous training images. However, the offline training is time-consuming and the learned generic representation may be less discriminative for tracking specific objects. In this paper we show that, even without offline training with a large amount of auxiliary data, simple two-layer convolutional networks can be powerful enough to develop a robust representation for visual tracking. In the first frame, we employ the k-means algorithm to extract a set of normalized patches from the target region as fixed filters, which integrate a series of adaptive contextual filters surrounding the target to define a set of feature maps in the subsequent frames. These maps measure similarities between each filter and the useful local intensity patterns across the target, thereby encoding its local structural information. Furthermore, all the maps together form a global representation, which is built on mid-level features, thereby remaining close to image-level information, and hence the inner geometric layout of the target is also well preserved. A simple soft shrinkage method with an adaptive threshold is employed to de-noise the global representation, resulting in a robust sparse representation. The representation is updated via a simple and effective online strategy, allowing it to robustly adapt to target appearance variations. Our convolutional networks have a surprisingly lightweight structure, yet perform favorably against several state-of-the-art methods on the CVPR2013 tracking benchmark dataset with 50 challenging videos.
[ { "created": "Mon, 19 Jan 2015 14:39:51 GMT", "version": "v1" }, { "created": "Mon, 24 Aug 2015 06:07:22 GMT", "version": "v2" } ]
2015-08-25
[ [ "Zhang", "Kaihua", "" ], [ "Liu", "Qingshan", "" ], [ "Wu", "Yi", "" ], [ "Yang", "Ming-Hsuan", "" ] ]
Deep networks have been successfully applied to visual tracking by learning a generic representation offline from numerous training images. However, the offline training is time-consuming and the learned generic representation may be less discriminative for tracking specific objects. In this paper we show that, even without offline training with a large amount of auxiliary data, simple two-layer convolutional networks can be powerful enough to develop a robust representation for visual tracking. In the first frame, we employ the k-means algorithm to extract a set of normalized patches from the target region as fixed filters, which integrate a series of adaptive contextual filters surrounding the target to define a set of feature maps in the subsequent frames. These maps measure similarities between each filter and the useful local intensity patterns across the target, thereby encoding its local structural information. Furthermore, all the maps together form a global representation, which is built on mid-level features, thereby remaining close to image-level information, and hence the inner geometric layout of the target is also well preserved. A simple soft shrinkage method with an adaptive threshold is employed to de-noise the global representation, resulting in a robust sparse representation. The representation is updated via a simple and effective online strategy, allowing it to robustly adapt to target appearance variations. Our convolutional networks have a surprisingly lightweight structure, yet perform favorably against several state-of-the-art methods on the CVPR2013 tracking benchmark dataset with 50 challenging videos.
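The pipeline in this abstract is concrete enough to sketch. The following is our simplified reconstruction: the patch size, filter count, and choice of adaptive threshold are our assumptions, not the paper's exact settings, and the adaptive contextual filters are omitted.

```python
# Simplified sketch of the first layer: k-means patches from the first frame
# become fixed convolution filters; resulting maps are denoised by soft
# shrinkage with an adaptive threshold.
import numpy as np
from scipy.signal import correlate2d
from sklearn.cluster import KMeans

def extract_patches(img, size=6, stride=2):
    H, W = img.shape
    patches = [img[i:i + size, j:j + size].ravel()
               for i in range(0, H - size + 1, stride)
               for j in range(0, W - size + 1, stride)]
    P = np.asarray(patches, dtype=float)
    P -= P.mean(axis=1, keepdims=True)                      # normalize patches
    P /= (np.linalg.norm(P, axis=1, keepdims=True) + 1e-8)
    return P

def soft_shrink(x, thr):
    return np.sign(x) * np.maximum(np.abs(x) - thr, 0.0)

rng = np.random.default_rng(0)
first_frame = rng.random((32, 32))                          # stand-in target region
filters = KMeans(n_clusters=8, n_init=10, random_state=0) \
    .fit(extract_patches(first_frame)).cluster_centers_.reshape(8, 6, 6)

next_frame = rng.random((32, 32))
maps = np.stack([correlate2d(next_frame, f, mode="valid") for f in filters])
thr = np.median(np.abs(maps))            # an adaptive threshold (our choice)
representation = soft_shrink(maps, thr)  # sparse global representation
print(representation.shape)              # -> (8, 27, 27)
```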
2301.05108
Ibrahim Abdelaziz
Wenting Zhao, Ibrahim Abdelaziz, Julian Dolby, Kavitha Srinivas, Mossad Helali, Essam Mansour
Serenity: Library Based Python Code Analysis for Code Completion and Automated Machine Learning
null
null
null
null
cs.PL cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Dynamically typed languages such as Python have become very popular. Among other strengths, Python's dynamic nature and its straightforward linking to native code have made it the de-facto language for many research areas such as Artificial Intelligence. This flexibility, however, makes static analysis very hard. While creating a sound, or a soundy, analysis for Python remains an open problem, we present in this work Serenity, a framework for static analysis of Python that turns out to be sufficient for some tasks. The Serenity framework exploits two basic mechanisms: (a) reliance on dynamic dispatch at the core of language translation, and (b) extreme abstraction of libraries, to generate an abstraction of the code. We demonstrate the efficiency and usefulness of Serenity's analysis in two applications: code completion and automated machine learning. In these two applications, we demonstrate that such analysis has a strong signal, and can be leveraged to establish state-of-the-art performance, comparable to neural models and dynamic analysis respectively.
[ { "created": "Thu, 5 Jan 2023 02:09:08 GMT", "version": "v1" } ]
2023-01-13
[ [ "Zhao", "Wenting", "" ], [ "Abdelaziz", "Ibrahim", "" ], [ "Dolby", "Julian", "" ], [ "Srinivas", "Kavitha", "" ], [ "Helali", "Mossad", "" ], [ "Mansour", "Essam", "" ] ]
Dynamically typed languages such as Python have become very popular. Among other strengths, Python's dynamic nature and its straightforward linking to native code have made it the de-facto language for many research areas such as Artificial Intelligence. This flexibility, however, makes static analysis very hard. While creating a sound, or a soundy, analysis for Python remains an open problem, we present in this work Serenity, a framework for static analysis of Python that turns out to be sufficient for some tasks. The Serenity framework exploits two basic mechanisms: (a) reliance on dynamic dispatch at the core of language translation, and (b) extreme abstraction of libraries, to generate an abstraction of the code. We demonstrate the efficiency and usefulness of Serenity's analysis in two applications: code completion and automated machine learning. In these two applications, we demonstrate that such analysis has a strong signal, and can be leveraged to establish state-of-the-art performance, comparable to neural models and dynamic analysis respectively.
1302.1557
Kathryn Blackmond Laskey
Kathryn Blackmond Laskey, Suzanne M. Mahoney
Network Fragments: Representing Knowledge for Constructing Probabilistic Models
Appears in Proceedings of the Thirteenth Conference on Uncertainty in Artificial Intelligence (UAI1997)
null
null
UAI-P-1997-PG-334-341
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In most current applications of belief networks, domain knowledge is represented by a single belief network that applies to all problem instances in the domain. In more complex domains, problem-specific models must be constructed from a knowledge base encoding probabilistic relationships in the domain. Most work in knowledge-based model construction takes the rule as the basic unit of knowledge. We present a knowledge representation framework that permits the knowledge base designer to specify knowledge in larger semantically meaningful units which we call network fragments. Our framework provides for representation of asymmetric independence and canonical intercausal interaction. We discuss the combination of network fragments to form problem-specific models to reason about particular problem instances. The framework is illustrated using examples from the domain of military situation awareness.
[ { "created": "Wed, 6 Feb 2013 15:57:59 GMT", "version": "v1" } ]
2013-02-08
[ [ "Laskey", "Kathryn Blackmond", "" ], [ "Mahoney", "Suzanne M.", "" ] ]
In most current applications of belief networks, domain knowledge is represented by a single belief network that applies to all problem instances in the domain. In more complex domains, problem-specific models must be constructed from a knowledge base encoding probabilistic relationships in the domain. Most work in knowledge-based model construction takes the rule as the basic unit of knowledge. We present a knowledge representation framework that permits the knowledge base designer to specify knowledge in larger semantically meaningful units which we call network fragments. Our framework provides for representation of asymmetric independence and canonical intercausal interaction. We discuss the combination of network fragments to form problem-specific models to reason about particular problem instances. The framework is illustrated using examples from the domain of military situation awareness.
2111.15514
Xiaoteng Zhou
Xiaoteng Zhou, Changli Yu, Xin Yuan, Haijun Feng, and Yang Xu
Nonlinear Intensity Underwater Sonar Image Matching Method Based on Phase Information and Deep Convolution Features
6 pages, letters, 9 figures. arXiv admin note: substantial text overlap with arXiv:2111.08994
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the field of deep-sea exploration, sonar is presently the only efficient long-distance sensing device. The complicated underwater environment, with noise interference, low target intensity, and background dynamics, has many negative effects on sonar imaging. Among these, the problem of nonlinear intensity is extremely prevalent. It is also known as the anisotropy of acoustic sensor imaging: when autonomous underwater vehicles (AUVs) carry sonar to detect the same target from different angles, the intensity variation between image pairs is sometimes very large, which makes traditional matching algorithms almost ineffective. However, image matching is the basis of comprehensive tasks such as navigation, positioning, and mapping. Therefore, it is very valuable to obtain robust and accurate matching results. This paper proposes a combined matching method based on phase information and deep convolutional features. It has two outstanding advantages: first, deep convolutional features can be used to measure the similarity of the local and global positions of the sonar image; second, local feature matching can be performed at the key target positions of the sonar image. This method does not need complex manual design and completes the matching task for nonlinear intensity sonar images in a nearly end-to-end manner. Feature matching experiments are carried out on deep-sea sonar images captured by AUVs, and the results show that our proposal achieves excellent matching accuracy and robustness.
[ { "created": "Mon, 29 Nov 2021 02:36:49 GMT", "version": "v1" } ]
2021-12-01
[ [ "Zhou", "Xiaoteng", "" ], [ "Yu", "Changli", "" ], [ "Yuan", "Xin", "" ], [ "Feng", "Haijun", "" ], [ "Xu", "Yang", "" ] ]
In the field of deep-sea exploration, sonar is presently the only efficient long-distance sensing device. The complicated underwater environment, with noise interference, low target intensity, and background dynamics, has many negative effects on sonar imaging. Among these, the problem of nonlinear intensity is extremely prevalent. It is also known as the anisotropy of acoustic sensor imaging: when autonomous underwater vehicles (AUVs) carry sonar to detect the same target from different angles, the intensity variation between image pairs is sometimes very large, which makes traditional matching algorithms almost ineffective. However, image matching is the basis of comprehensive tasks such as navigation, positioning, and mapping. Therefore, it is very valuable to obtain robust and accurate matching results. This paper proposes a combined matching method based on phase information and deep convolutional features. It has two outstanding advantages: first, deep convolutional features can be used to measure the similarity of the local and global positions of the sonar image; second, local feature matching can be performed at the key target positions of the sonar image. This method does not need complex manual design and completes the matching task for nonlinear intensity sonar images in a nearly end-to-end manner. Feature matching experiments are carried out on deep-sea sonar images captured by AUVs, and the results show that our proposal achieves excellent matching accuracy and robustness.
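The deep-feature branch is model-specific, but the "phase information" ingredient can be illustrated with classical phase correlation, which discards intensity magnitude and is therefore insensitive to the nonlinear intensity changes the abstract describes. The sketch below is illustrative background, not the authors' method.

```python
# Classical phase correlation: the normalized cross-power spectrum keeps only
# phase, so its inverse FFT peaks at the translation between the two images.
import numpy as np

def phase_correlation(a, b):
    """Estimate the (cyclic) translation of a relative to b via phase only."""
    Fa, Fb = np.fft.fft2(a), np.fft.fft2(b)
    cross = Fa * np.conj(Fb)
    cross /= np.abs(cross) + 1e-12           # keep phase, discard magnitude
    corr = np.fft.ifft2(cross).real
    return np.unravel_index(np.argmax(corr), corr.shape)

rng = np.random.default_rng(1)
img = rng.random((64, 64))
shifted = np.roll(np.roll(img, 5, axis=0), -3, axis=1)
print(phase_correlation(shifted, img))       # -> (5, 61), i.e. (+5, -3 mod 64)
```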
1809.05286
Kian Ghodoussi
Kian Ghodoussi, Nihar Sheth, Zane Durante, Markie Wagner
Deep CNN Frame Interpolation with Lessons Learned from Natural Language Processing
10 pages, 5 figures
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A major area of growth within deep learning has been the study and implementation of convolutional neural networks. The common explanation within the deep learning community for the robustness of convolutional neural networks (CNNs) in image recognition rests upon the idea that CNNs are able to extract localized features. However, recent developments in fields such as Natural Language Processing demonstrate that this paradigm may be incorrect. In this paper, we analyze the current state of the field concerning CNNs and present a hypothesis that provides a novel explanation for the robustness of CNN models. From there, we demonstrate the effectiveness of our approach by presenting a novel deep CNN frame interpolation architecture that is comparable to state-of-the-art interpolation models with a fraction of the complexity.
[ { "created": "Fri, 14 Sep 2018 07:44:46 GMT", "version": "v1" }, { "created": "Mon, 17 Sep 2018 00:43:58 GMT", "version": "v2" } ]
2018-09-18
[ [ "Ghodoussi", "Kian", "" ], [ "Sheth", "Nihar", "" ], [ "Durante", "Zane", "" ], [ "Wagner", "Markie", "" ] ]
A major area of growth within deep learning has been the study and implementation of convolutional neural networks. The common explanation within the deep learning community for the robustness of convolutional neural networks (CNNs) in image recognition rests upon the idea that CNNs are able to extract localized features. However, recent developments in fields such as Natural Language Processing demonstrate that this paradigm may be incorrect. In this paper, we analyze the current state of the field concerning CNNs and present a hypothesis that provides a novel explanation for the robustness of CNN models. From there, we demonstrate the effectiveness of our approach by presenting a novel deep CNN frame interpolation architecture that is comparable to state-of-the-art interpolation models with a fraction of the complexity.
1710.10061
Hasnae Rahimi
Hasnae Rahimi and Hanan El Bekkali
State of the art of Trust and Reputation Systems in E-Commerce Context
State of the art (survey) published in IJCSI journal paper indexed by DBLP, EBSCO, Proquest, DOAJ, Google Scholar, with 53 References; http://www.ijcsi.org/ Volume 14, Issue 3, May 2017
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This article presents an in-depth comparative study of the most popular, widely used, and analyzed Trust and Reputation Systems (TRS), according to the trust and reputation literature and in terms of specific trustworthiness criteria. This survey relies on a selection of trustworthiness criteria that analyze and evaluate the maturity and effectiveness of TRS. These criteria describe the utility, usability, performance, and effectiveness of TRS. We also provide a summary table comparing the TRS along a detailed and granular selection of trust and reputation aspects.
[ { "created": "Fri, 27 Oct 2017 10:29:34 GMT", "version": "v1" } ]
2017-10-30
[ [ "Rahimi", "Hasnae", "" ], [ "Bekkali", "Hanan El", "" ] ]
This article presents an in-depth comparative study of the most popular, widely used, and analyzed Trust and Reputation Systems (TRS), according to the trust and reputation literature and in terms of specific trustworthiness criteria. This survey relies on a selection of trustworthiness criteria that analyze and evaluate the maturity and effectiveness of TRS. These criteria describe the utility, usability, performance, and effectiveness of TRS. We also provide a summary table comparing the TRS along a detailed and granular selection of trust and reputation aspects.
2312.14023
Yakov Shalunov
Yakov Shalunov
Leakage-Resilient Hardness Equivalence to Logspace Derandomization
19 pages
null
null
null
cs.CC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Efficient derandomization has long been a goal in complexity theory, and a major recent result by Yanyi Liu and Rafael Pass identifies a new class of hardness assumption under which it is possible to perform time-bounded derandomization efficiently: that of ''leakage-resilient hardness.'' They identify a specific form of this assumption which is $\textit{equivalent}$ to $\mathsf{prP} = \mathsf{prBPP}$. In this paper, we pursue an equivalence to derandomization of $\mathsf{prBP{\cdot}L}$ (logspace promise problems with two-way randomness) through techniques analogous to Liu and Pass. We are able to obtain an equivalence between a similar ''leakage-resilient hardness'' assumption and a slightly stronger statement than derandomization of $\mathsf{prBP{\cdot}L}$, that of finding ''non-no'' instances of ''promise search problems.''
[ { "created": "Thu, 21 Dec 2023 16:54:00 GMT", "version": "v1" }, { "created": "Thu, 27 Jun 2024 20:54:08 GMT", "version": "v2" } ]
2024-07-01
[ [ "Shalunov", "Yakov", "" ] ]
Efficient derandomization has long been a goal in complexity theory, and a major recent result by Yanyi Liu and Rafael Pass identifies a new class of hardness assumption under which it is possible to perform time-bounded derandomization efficiently: that of ''leakage-resilient hardness.'' They identify a specific form of this assumption which is $\textit{equivalent}$ to $\mathsf{prP} = \mathsf{prBPP}$. In this paper, we pursue an equivalence to derandomization of $\mathsf{prBP{\cdot}L}$ (logspace promise problems with two-way randomness) through techniques analogous to Liu and Pass. We are able to obtain an equivalence between a similar ''leakage-resilient hardness'' assumption and a slightly stronger statement than derandomization of $\mathsf{prBP{\cdot}L}$, that of finding ''non-no'' instances of ''promise search problems.''
2004.02132
Hoang Le
Hoang Le, Feng Liu, Shu Zhang, and Aseem Agarwala
Deep Homography Estimation for Dynamic Scenes
CVPR 2020, https://github.com/lcmhoang/hmg-dynamics
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Homography estimation is an important step in many computer vision problems. Recently, deep neural network methods have been shown to be favorable for this problem when compared to traditional methods. However, these new methods do not consider dynamic content in input images. They train neural networks with only image pairs that can be perfectly aligned using homographies. This paper investigates and discusses how to design and train a deep neural network that handles dynamic scenes. We first collect a large video dataset with dynamic content. We then develop a multi-scale neural network and show that when properly trained using our new dataset, this neural network can already handle dynamic scenes to some extent. To estimate a homography of a dynamic scene in a more principled way, we need to identify the dynamic content. Since dynamic content detection and homography estimation are two tightly coupled tasks, we follow the multi-task learning principles and augment our multi-scale network such that it jointly estimates the dynamics masks and homographies. Our experiments show that our method can robustly estimate homography for challenging scenarios with dynamic scenes, blur artifacts, or lack of textures.
[ { "created": "Sun, 5 Apr 2020 09:07:18 GMT", "version": "v1" } ]
2020-04-07
[ [ "Le", "Hoang", "" ], [ "Liu", "Feng", "" ], [ "Zhang", "Shu", "" ], [ "Agarwala", "Aseem", "" ] ]
Homography estimation is an important step in many computer vision problems. Recently, deep neural network methods have been shown to be favorable for this problem when compared to traditional methods. However, these new methods do not consider dynamic content in input images. They train neural networks with only image pairs that can be perfectly aligned using homographies. This paper investigates and discusses how to design and train a deep neural network that handles dynamic scenes. We first collect a large video dataset with dynamic content. We then develop a multi-scale neural network and show that when properly trained using our new dataset, this neural network can already handle dynamic scenes to some extent. To estimate a homography of a dynamic scene in a more principled way, we need to identify the dynamic content. Since dynamic content detection and homography estimation are two tightly coupled tasks, we follow the multi-task learning principles and augment our multi-scale network such that it jointly estimates the dynamics masks and homographies. Our experiments show that our method can robustly estimate homography for challenging scenarios with dynamic scenes, blur artifacts, or lack of textures.
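For context, here is the classical feature-based pipeline that deep methods are usually measured against (a generic ORB + RANSAC sketch, not the paper's network). The file names are hypothetical; dense dynamic content is precisely what contaminates this estimate, which is the failure mode the paper targets.

```python
# Classical homography baseline: ORB features, brute-force matching, RANSAC.
import cv2
import numpy as np

img1 = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file names
img2 = cv2.imread("frame2.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:200]

src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

# RANSAC rejects scattered outliers, but features on large moving objects
# still bias H toward the dynamic content rather than the scene.
H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
print(H)
```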
2208.12619
Dea Editya
Dea Avega Editya
A Review of the Effectiveness of Using Key Opinion Leaders (KOL) in the Sale of Retail Government Bonds, Series SBR011
15 pages, 7 figures, in Indonesian
null
null
null
cs.CY cs.CL
http://creativecommons.org/licenses/by-sa/4.0/
The Indonesian Ministry of Finance endorsed 10 Key Opinion Leaders (KOLs) to help promote the government retail bond SBR011 during its selling period of 25 May-16 June 2022. This study analyzed the effectiveness of the endorsement using several indicators: engagement rate, enthusiasm rate, and sentiment analysis of feedback from the KOLs' audiences. Data was gathered from Instagram and TikTok, the social media platforms used by the KOLs to post their marketing content. This paper found that the endorsement is quite effective in promoting SBR011 and yields mostly positive feedback on the marketing campaign.
[ { "created": "Sat, 13 Aug 2022 03:38:53 GMT", "version": "v1" } ]
2022-08-29
[ [ "Editya", "Dea Avega", "" ] ]
The Indonesian Ministry of Finance endorsed 10 Key Opinion Leaders (KOLs) to help promote the government retail bond SBR011 during its selling period of 25 May-16 June 2022. This study analyzed the effectiveness of the endorsement using several indicators: engagement rate, enthusiasm rate, and sentiment analysis of feedback from the KOLs' audiences. Data was gathered from Instagram and TikTok, the social media platforms used by the KOLs to post their marketing content. This paper found that the endorsement is quite effective in promoting SBR011 and yields mostly positive feedback on the marketing campaign.
2212.04495
Rishabh Dabral
Rishabh Dabral and Muhammad Hamza Mughal and Vladislav Golyanik and Christian Theobalt
MoFusion: A Framework for Denoising-Diffusion-based Motion Synthesis
CVPR23, 11 pages, 6 figures, 2 tables; project page: https://vcai.mpi-inf.mpg.de/projects/MoFusion
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Conventional methods for human motion synthesis are either deterministic or struggle with the trade-off between motion diversity and motion quality. In response to these limitations, we introduce MoFusion, i.e., a new denoising-diffusion-based framework for high-quality conditional human motion synthesis that can generate long, temporally plausible, and semantically accurate motions based on a range of conditioning contexts (such as music and text). We also present ways to introduce well-known kinematic losses for motion plausibility within the motion diffusion framework through our scheduled weighting strategy. The learned latent space can be used for several interactive motion editing applications -- like inbetweening, seed conditioning, and text-based editing -- thus, providing crucial abilities for virtual character animation and robotics. Through comprehensive quantitative evaluations and a perceptual user study, we demonstrate the effectiveness of MoFusion compared to the state of the art on established benchmarks in the literature. We urge the reader to watch our supplementary video and visit https://vcai.mpi-inf.mpg.de/projects/MoFusion.
[ { "created": "Thu, 8 Dec 2022 18:59:48 GMT", "version": "v1" }, { "created": "Mon, 15 May 2023 11:36:57 GMT", "version": "v2" } ]
2023-05-16
[ [ "Dabral", "Rishabh", "" ], [ "Mughal", "Muhammad Hamza", "" ], [ "Golyanik", "Vladislav", "" ], [ "Theobalt", "Christian", "" ] ]
Conventional methods for human motion synthesis are either deterministic or struggle with the trade-off between motion diversity and motion quality. In response to these limitations, we introduce MoFusion, i.e., a new denoising-diffusion-based framework for high-quality conditional human motion synthesis that can generate long, temporally plausible, and semantically accurate motions based on a range of conditioning contexts (such as music and text). We also present ways to introduce well-known kinematic losses for motion plausibility within the motion diffusion framework through our scheduled weighting strategy. The learned latent space can be used for several interactive motion editing applications -- like inbetweening, seed conditioning, and text-based editing -- thus, providing crucial abilities for virtual character animation and robotics. Through comprehensive quantitative evaluations and a perceptual user study, we demonstrate the effectiveness of MoFusion compared to the state of the art on established benchmarks in the literature. We urge the reader to watch our supplementary video and visit https://vcai.mpi-inf.mpg.de/projects/MoFusion.
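As background for readers new to denoising diffusion, here is a sketch of the standard DDPM forward corruption that such frameworks learn to invert. It is generic, not MoFusion-specific; the variance schedule below is a common default, and the motion tensor shape is a placeholder.

```python
# Standard DDPM forward noising q(x_t | x_0) applied to a pose sequence.
import numpy as np

T = 1000
betas = np.linspace(1e-4, 0.02, T)       # a common linear variance schedule
alpha_bar = np.cumprod(1.0 - betas)

def q_sample(x0, t, rng):
    """Sample x_t ~ q(x_t | x_0) at diffusion step t."""
    eps = rng.normal(size=x0.shape)
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps

rng = np.random.default_rng(0)
motion = rng.normal(size=(60, 24 * 3))   # 60 frames x 24 joints x 3 coords
print(q_sample(motion, t=500, rng=rng).shape)
```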
2303.12878
Morgane Goibert
Morgane Goibert, Cl\'ement Calauz\`enes, Ekhine Irurozki, St\'ephan Cl\'emen\c{c}on
Robust Consensus in Ranking Data Analysis: Definitions, Properties and Computational Issues
null
null
null
null
cs.LG stat.ML
http://creativecommons.org/licenses/by/4.0/
As the issue of robustness in AI systems becomes vital, statistical learning techniques that are reliable even in the presence of partly contaminated data have to be developed. Preference data, in the form of (complete) rankings in the simplest situations, are no exception, and the demand for appropriate concepts and tools is all the more pressing given that technologies fed by or producing this type of data (e.g. search engines, recommending systems) are now massively deployed. However, the lack of vector space structure for the set of rankings (i.e. the symmetric group $\mathfrak{S}_n$) and the complex nature of statistics considered in ranking data analysis make the formulation of robustness objectives in this domain challenging. In this paper, we introduce notions of robustness, together with dedicated statistical methods, for Consensus Ranking, the flagship problem in ranking data analysis, which aims at summarizing a probability distribution on $\mathfrak{S}_n$ by a median ranking. Precisely, we propose specific extensions of the popular concept of breakdown point, tailored to consensus ranking, and address the related computational issues. Beyond the theoretical contributions, the relevance of the proposed approach is supported by an experimental study.
[ { "created": "Wed, 22 Mar 2023 19:36:56 GMT", "version": "v1" } ]
2023-03-24
[ [ "Goibert", "Morgane", "" ], [ "Calauzènes", "Clément", "" ], [ "Irurozki", "Ekhine", "" ], [ "Clémençon", "Stéphan", "" ] ]
As the issue of robustness in AI systems becomes vital, statistical learning techniques that are reliable even in the presence of partly contaminated data have to be developed. Preference data, in the form of (complete) rankings in the simplest situations, are no exception, and the demand for appropriate concepts and tools is all the more pressing given that technologies fed by or producing this type of data (e.g. search engines, recommending systems) are now massively deployed. However, the lack of vector space structure for the set of rankings (i.e. the symmetric group $\mathfrak{S}_n$) and the complex nature of statistics considered in ranking data analysis make the formulation of robustness objectives in this domain challenging. In this paper, we introduce notions of robustness, together with dedicated statistical methods, for Consensus Ranking, the flagship problem in ranking data analysis, which aims at summarizing a probability distribution on $\mathfrak{S}_n$ by a median ranking. Precisely, we propose specific extensions of the popular concept of breakdown point, tailored to consensus ranking, and address the related computational issues. Beyond the theoretical contributions, the relevance of the proposed approach is supported by an experimental study.
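To fix ideas, consensus ranking in its simplest form is the Kemeny median: the ranking minimizing the total Kendall-tau distance to the sample. A brute-force sketch for tiny n (ours, purely illustrative; the paper studies the robustness of such medians, not this naive algorithm):

```python
# Brute-force Kemeny median over all permutations (feasible only for small n).
from itertools import permutations

def kendall_tau_distance(r1, r2):
    """Number of item pairs ordered differently by the two rankings."""
    pos1 = {item: i for i, item in enumerate(r1)}
    pos2 = {item: i for i, item in enumerate(r2)}
    items = list(pos1)
    return sum(
        (pos1[a] < pos1[b]) != (pos2[a] < pos2[b])
        for i, a in enumerate(items) for b in items[i + 1:]
    )

def kemeny_median(sample):
    items = sample[0]
    return min(permutations(items),
               key=lambda cand: sum(kendall_tau_distance(cand, r) for r in sample))

sample = [("a", "b", "c"), ("a", "c", "b"), ("b", "a", "c")]
print(kemeny_median(sample))   # -> ('a', 'b', 'c')
```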
2306.07764
David Samuel
David Samuel and Lilja {\O}vrelid
Tokenization with Factorized Subword Encoding
Findings of ACL 2023
null
null
null
cs.CL cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In recent years, language models have become increasingly larger and more complex. However, the input representations for these models continue to rely on simple and greedy subword tokenization methods. In this paper, we propose a novel tokenization method that factorizes subwords into discrete triplets using a VQ-VAE model. The effectiveness of the proposed tokenization method, referred to as the Factorizer, is evaluated on language modeling and morpho-syntactic tasks for 7 diverse languages. Results indicate that this method is more appropriate and robust for morphological tasks than the commonly used byte-pair encoding (BPE) tokenization algorithm.
[ { "created": "Tue, 13 Jun 2023 13:27:34 GMT", "version": "v1" } ]
2023-06-14
[ [ "Samuel", "David", "" ], [ "Øvrelid", "Lilja", "" ] ]
In recent years, language models have become increasingly larger and more complex. However, the input representations for these models continue to rely on simple and greedy subword tokenization methods. In this paper, we propose a novel tokenization method that factorizes subwords into discrete triplets using a VQ-VAE model. The effectiveness of the proposed tokenization method, referred to as the Factorizer, is evaluated on language modeling and morpho-syntactic tasks for 7 diverse languages. Results indicate that this method is more appropriate and robust for morphological tasks than the commonly used byte-pair encoding (BPE) tokenization algorithm.
2102.00457
Chang Wei Tan
Chang Wei Tan and Angus Dempster and Christoph Bergmeir and Geoffrey I. Webb
MultiRocket: Multiple pooling operators and transformations for fast and effective time series classification
null
null
null
null
cs.LG stat.ML
http://creativecommons.org/licenses/by/4.0/
We propose MultiRocket, a fast time series classification (TSC) algorithm that achieves state-of-the-art performance in a tiny fraction of the time and without the complex ensembling structure of many state-of-the-art methods. MultiRocket improves on MiniRocket, one of the fastest TSC algorithms to date, by adding multiple pooling operators and transformations to improve the diversity of the features generated. In addition to processing the raw input series, MultiRocket also applies first-order differences to transform the original series. Convolutions are applied to both representations, and four pooling operators are applied to the convolution outputs. When benchmarked using the University of California Riverside TSC benchmark datasets, MultiRocket is significantly more accurate than MiniRocket, and competitive with the best-ranked current method in terms of accuracy, HIVE-COTE 2.0, while being orders of magnitude faster.
[ { "created": "Sun, 31 Jan 2021 14:04:10 GMT", "version": "v1" }, { "created": "Tue, 28 Sep 2021 01:21:15 GMT", "version": "v2" }, { "created": "Thu, 7 Oct 2021 05:51:00 GMT", "version": "v3" }, { "created": "Mon, 21 Feb 2022 06:09:57 GMT", "version": "v4" } ]
2022-02-22
[ [ "Tan", "Chang Wei", "" ], [ "Dempster", "Angus", "" ], [ "Bergmeir", "Christoph", "" ], [ "Webb", "Geoffrey I.", "" ] ]
We propose MultiRocket, a fast time series classification (TSC) algorithm that achieves state-of-the-art performance in a tiny fraction of the time and without the complex ensembling structure of many state-of-the-art methods. MultiRocket improves on MiniRocket, one of the fastest TSC algorithms to date, by adding multiple pooling operators and transformations to improve the diversity of the features generated. In addition to processing the raw input series, MultiRocket also applies first-order differences to transform the original series. Convolutions are applied to both representations, and four pooling operators are applied to the convolution outputs. When benchmarked using the University of California Riverside TSC benchmark datasets, MultiRocket is significantly more accurate than MiniRocket, and competitive with the best-ranked current method in terms of accuracy, HIVE-COTE 2.0, while being orders of magnitude faster.
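The recipe described above, random convolution kernels applied to the raw series and its first-order differences followed by several pooling operators, can be sketched in a few lines. This is a toy rendition, not the actual MultiRocket feature set, which uses MiniRocket-style kernels and four specific pooling operators:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_kernels(n_kernels=100, length=9):
    # MiniRocket restricts weights to {-1, 2}; Gaussian weights are used
    # here purely to keep the sketch short.
    return rng.normal(size=(n_kernels, length))

def ppv(conv_out, bias=0.0):
    """Proportion of positive values: the core Rocket pooling operator."""
    return (conv_out > bias).mean()

def features(series, kernels):
    feats = []
    for x in (series, np.diff(series)):       # raw + first-order differences
        for w in kernels:
            c = np.convolve(x, w, mode="valid")
            feats.extend([ppv(c), c.max()])   # two of MultiRocket's poolings
    return np.array(feats)

x = np.sin(np.linspace(0, 8 * np.pi, 200)) + rng.normal(scale=0.1, size=200)
print(features(x, random_kernels()).shape)    # (400,) = 100 kernels x 2 reps x 2 poolings
```

The resulting feature vector would then feed a simple linear classifier, which is where Rocket-family methods get their speed.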
2404.10745
Weitong Zhang
Weitong Zhang and Zhiyuan Fan and Jiafan He and Quanquan Gu
Settling Constant Regrets in Linear Markov Decision Processes
46 pages, 2 tables
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the constant regret guarantees in reinforcement learning (RL). Our objective is to design an algorithm that incurs only finite regret over infinite episodes with high probability. We introduce an algorithm, Cert-LSVI-UCB, for misspecified linear Markov decision processes (MDPs) where both the transition kernel and the reward function can be approximated by some linear function up to misspecification level $\zeta$. At the core of Cert-LSVI-UCB is an innovative certified estimator, which facilitates a fine-grained concentration analysis for multi-phase value-targeted regression, enabling us to establish an instance-dependent regret bound that is constant w.r.t. the number of episodes. Specifically, we demonstrate that for an MDP characterized by a minimal suboptimality gap $\Delta$, Cert-LSVI-UCB has a cumulative regret of $\tilde{\mathcal{O}}(d^3H^5/\Delta)$ with high probability, provided that the misspecification level $\zeta$ is below $\tilde{\mathcal{O}}(\Delta / (\sqrt{d}H^2))$. Remarkably, this regret bound remains constant relative to the number of episodes $K$. To the best of our knowledge, Cert-LSVI-UCB is the first algorithm to achieve a constant, instance-dependent, high-probability regret bound in RL with linear function approximation for infinite runs without relying on prior distribution assumptions. This not only highlights the robustness of Cert-LSVI-UCB to model misspecification but also introduces novel algorithmic designs and analytical techniques of independent interest.
[ { "created": "Tue, 16 Apr 2024 17:23:19 GMT", "version": "v1" } ]
2024-04-17
[ [ "Zhang", "Weitong", "" ], [ "Fan", "Zhiyuan", "" ], [ "He", "Jiafan", "" ], [ "Gu", "Quanquan", "" ] ]
We study the constant regret guarantees in reinforcement learning (RL). Our objective is to design an algorithm that incurs only finite regret over infinite episodes with high probability. We introduce an algorithm, Cert-LSVI-UCB, for misspecified linear Markov decision processes (MDPs) where both the transition kernel and the reward function can be approximated by some linear function up to misspecification level $\zeta$. At the core of Cert-LSVI-UCB is an innovative certified estimator, which facilitates a fine-grained concentration analysis for multi-phase value-targeted regression, enabling us to establish an instance-dependent regret bound that is constant w.r.t. the number of episodes. Specifically, we demonstrate that for an MDP characterized by a minimal suboptimality gap $\Delta$, Cert-LSVI-UCB has a cumulative regret of $\tilde{\mathcal{O}}(d^3H^5/\Delta)$ with high probability, provided that the misspecification level $\zeta$ is below $\tilde{\mathcal{O}}(\Delta / (\sqrt{d}H^2))$. Remarkably, this regret bound remains constant relative to the number of episodes $K$. To the best of our knowledge, Cert-LSVI-UCB is the first algorithm to achieve a constant, instance-dependent, high-probability regret bound in RL with linear function approximation for infinite runs without relying on prior distribution assumptions. This not only highlights the robustness of Cert-LSVI-UCB to model misspecification but also introduces novel algorithmic designs and analytical techniques of independent interest.
2110.07009
Luke Bauer
Luke A. Bauer, James K. Howes IV, Sam A. Markelon, Vincent Bindschaedler, Thomas Shrimpton
Leveraging Generative Models for Covert Messaging: Challenges and Tradeoffs for "Dead-Drop" Deployments
null
null
null
null
cs.CR cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
State-of-the-art generative models of human-produced content are the focus of many recent papers that explore their use for steganographic communication, in particular generative models of natural language text. Loosely, these works (invertibly) encode message-carrying bits into a sequence of samples from the model, ultimately yielding a plausible natural language covertext. By focusing on this narrow steganographic piece, prior work has largely ignored the significant algorithmic challenges, and performance-security tradeoffs, that arise when one actually tries to build a messaging pipeline around it. We make these challenges concrete by considering the natural application of such a pipeline: namely, "dead-drop" covert messaging over large, public internet platforms (e.g. social media sites). We explicate the challenges and describe approaches to overcome them, surfacing in the process important performance and security tradeoffs that must be carefully tuned. We implement a system around this model-based format-transforming encryption pipeline, and give an empirical analysis of its performance and (heuristic) security.
[ { "created": "Wed, 13 Oct 2021 20:05:26 GMT", "version": "v1" }, { "created": "Sat, 13 Aug 2022 08:16:30 GMT", "version": "v2" }, { "created": "Tue, 18 Jun 2024 15:52:51 GMT", "version": "v3" } ]
2024-06-19
[ [ "Bauer", "Luke A.", "" ], [ "Howes", "James K.", "IV" ], [ "Markelon", "Sam A.", "" ], [ "Bindschaedler", "Vincent", "" ], [ "Shrimpton", "Thomas", "" ] ]
State-of-the-art generative models of human-produced content are the focus of many recent papers that explore their use for steganographic communication, in particular generative models of natural language text. Loosely, these works (invertibly) encode message-carrying bits into a sequence of samples from the model, ultimately yielding a plausible natural language covertext. By focusing on this narrow steganographic piece, prior work has largely ignored the significant algorithmic challenges, and performance-security tradeoffs, that arise when one actually tries to build a messaging pipeline around it. We make these challenges concrete by considering the natural application of such a pipeline: namely, "dead-drop" covert messaging over large, public internet platforms (e.g. social media sites). We explicate the challenges and describe approaches to overcome them, surfacing in the process important performance and security tradeoffs that must be carefully tuned. We implement a system around this model-based format-transforming encryption pipeline, and give an empirical analysis of its performance and (heuristic) security.
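The invertible encoding step can be illustrated with a toy shared model: each chunk of message bits selects among the model's ranked continuations, and the receiver inverts the choice. A real pipeline would use arithmetic-coding-style encoding over an LLM's token distribution; the bigram table below is purely illustrative:

```python
# Toy bigram "language model" shared by sender and receiver.
MODEL = {
    "the": ["cat", "dog", "sun", "car"],
    "cat": ["sat", "ran", "slept", "ate"],
    "dog": ["ran", "sat", "barked", "ate"],
    "sat": ["quietly", "down", "there", "alone"],
    "ran": ["quickly", "home", "away", "far"],
}

def encode(bits, start="the", bits_per_token=2):
    """Spend `bits_per_token` bits per step to select among the model's
    ranked continuations, yielding a plausible-looking covertext."""
    words, w = [start], start
    for i in range(0, len(bits), bits_per_token):
        w = MODEL[w][int(bits[i:i + bits_per_token], 2)]
        words.append(w)
        if w not in MODEL:
            break
    return " ".join(words)

def decode(covertext, bits_per_token=2):
    words = covertext.split()
    return "".join(format(MODEL[prev].index(cur), f"0{bits_per_token}b")
                   for prev, cur in zip(words, words[1:]))

msg = "0110"
ct = encode(msg)
print(ct)                      # "the dog barked"
assert decode(ct) == msg
```

The performance-security tradeoffs the paper studies show up even here: more bits per token means shorter covertexts but forces lower-probability continuations.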
2006.13396
Mojtaba Mahdavi
Mojtaba Mahdavi, Muhammad Umar Farooq, Liang Liu, Ove Edfors, Viktor \"Owall, and Michael Lentmaier
The Effect of Coupling Memory and Block Length on Spatially Coupled Serially Concatenated Codes
Presented at the IEEE 93rd Vehicular Technology Conference (VTC) 2021-Spring
null
10.1109/VTC2021-Spring51267.2021.9448689
null
cs.IT eess.SP math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Spatially coupled serially concatenated codes (SC-SCCs) are a class of spatially coupled turbo-like codes, which offer close-to-capacity performance and a low error floor. In this paper we investigate the impact of coupling memory, block length, decoding window size, and number of iterations on the performance, complexity, and latency of SC-SCCs. Several design tradeoffs are presented to illustrate the relations between these parameters over a wide range of values. Our analysis also provides design guidelines for SC-SCCs in different scenarios, making the code design independent of block length. As a result, block length and coupling memory can be exchanged flexibly without changing the latency and complexity. We also observe that, for a fixed latency and complexity, SC-SCCs improve the performance with respect to the uncoupled ensembles.
[ { "created": "Wed, 24 Jun 2020 00:09:02 GMT", "version": "v1" }, { "created": "Sun, 25 Jul 2021 15:19:02 GMT", "version": "v2" } ]
2021-07-27
[ [ "Mahdavi", "Mojtaba", "" ], [ "Farooq", "Muhammad Umar", "" ], [ "Liu", "Liang", "" ], [ "Edfors", "Ove", "" ], [ "Öwall", "Viktor", "" ], [ "Lentmaier", "Michael", "" ] ]
Spatially coupled serially concatenated codes (SC-SCCs) are a class of spatially coupled turbo-like codes, which offer close-to-capacity performance and a low error floor. In this paper we investigate the impact of coupling memory, block length, decoding window size, and number of iterations on the performance, complexity, and latency of SC-SCCs. Several design tradeoffs are presented to illustrate the relations between these parameters over a wide range of values. Our analysis also provides design guidelines for SC-SCCs in different scenarios, making the code design independent of block length. As a result, block length and coupling memory can be exchanged flexibly without changing the latency and complexity. We also observe that, for a fixed latency and complexity, SC-SCCs improve the performance with respect to the uncoupled ensembles.
2405.01584
Li Wan
Li Wan, Tansu Alpcan, Margreta Kuijper, Emanuele Viterbo
Lightweight Conceptual Dictionary Learning for Text Classification Using Information Compression
12 pages, TKDE format
null
null
null
cs.CL cs.LG eess.SP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a novel, lightweight supervised dictionary learning framework for text classification based on data compression and representation. This two-phase algorithm initially employs the Lempel-Ziv-Welch (LZW) algorithm to construct a dictionary from text datasets, focusing on the conceptual significance of dictionary elements. Subsequently, dictionaries are refined considering label data, optimizing dictionary atoms to enhance discriminative power based on mutual information and class distribution. This process generates discriminative numerical representations, facilitating the training of simple classifiers such as SVMs and neural networks. We evaluate our algorithm's information-theoretic performance using information bottleneck principles and introduce the information plane area rank (IPAR) as a novel metric to quantify the information-theoretic performance. Tested on six benchmark text datasets, our algorithm competes closely with top models, especially in limited-vocabulary contexts, using significantly fewer parameters. Our algorithm closely matches top-performing models, deviating by only ~2\% on limited-vocabulary datasets, using just 10\% of their parameters. However, it falls short on diverse-vocabulary datasets, likely due to the LZW algorithm's constraints with low-repetition data. This contrast highlights its efficiency and limitations across different dataset types.
[ { "created": "Sun, 28 Apr 2024 10:11:52 GMT", "version": "v1" } ]
2024-05-06
[ [ "Wan", "Li", "" ], [ "Alpcan", "Tansu", "" ], [ "Kuijper", "Margreta", "" ], [ "Viterbo", "Emanuele", "" ] ]
We propose a novel, lightweight supervised dictionary learning framework for text classification based on data compression and representation. This two-phase algorithm initially employs the Lempel-Ziv-Welch (LZW) algorithm to construct a dictionary from text datasets, focusing on the conceptual significance of dictionary elements. Subsequently, dictionaries are refined considering label data, optimizing dictionary atoms to enhance discriminative power based on mutual information and class distribution. This process generates discriminative numerical representations, facilitating the training of simple classifiers such as SVMs and neural networks. We evaluate our algorithm's information-theoretic performance using information bottleneck principles and introduce the information plane area rank (IPAR) as a novel metric to quantify the information-theoretic performance. Tested on six benchmark text datasets, our algorithm competes closely with top models, especially in limited-vocabulary contexts, using significantly fewer parameters. Our algorithm closely matches top-performing models, deviating by only ~2\% on limited-vocabulary datasets, using just 10\% of their parameters. However, it falls short on diverse-vocabulary datasets, likely due to the LZW algorithm's constraints with low-repetition data. This contrast highlights its efficiency and limitations across different dataset types.
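The first phase, building the phrase dictionary with LZW, can be sketched directly; the second phase (pruning atoms by mutual information with the labels) is omitted here:

```python
def lzw_dictionary(text, max_size=4096):
    """Build an LZW phrase dictionary over a text corpus. In the paper's
    first phase, such a dictionary forms the pool of candidate atoms,
    later refined using label information."""
    dictionary = {chr(i): i for i in range(256)}
    phrase = ""
    for ch in text:
        candidate = phrase + ch
        if candidate in dictionary:
            phrase = candidate
        else:
            if len(dictionary) < max_size:
                dictionary[candidate] = len(dictionary)
            phrase = ch
    return dictionary

corpus = "the cat sat on the mat the cat sat"
atoms = lzw_dictionary(corpus)
print([p for p in atoms if len(p) > 1])   # repeated phrases such as 'th', 'he' emerge
```

This also makes the reported limitation plausible: on low-repetition, diverse-vocabulary text, few multi-character phrases recur, so the dictionary captures little conceptual structure.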
2301.04447
Hemraj Singh
Hemraj Singh, Mridula Verma, Ramalingaswamy Cheruku
VS-Net: Multiscale Spatiotemporal Features for Lightweight Video Salient Document Detection
null
https://ictai.computer.org/2022/
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Video Salient Document Detection (VSDD) is an essential task of practical computer vision, which aims to highlight visually salient document regions in video frames. Previous techniques for VSDD focus on learning features without considering the cooperation among and across the appearance and motion cues and thus fail to perform in practical scenarios. Moreover, most of the previous techniques demand high computational resources, which limits the usage of such systems in resource-constrained settings. To handle these issues, we propose VS-Net, which captures multi-scale spatiotemporal information with the help of dilated depth-wise separable convolution and Approximation Rank Pooling. VS-Net extracts the key features locally from each frame across embedding sub-spaces and forwards the features between adjacent and parallel nodes, enhancing model performance globally. Our model generates saliency maps considering both the background and foreground simultaneously, making it perform better in challenging scenarios. Extensive experiments conducted on the benchmark MIDV-500 dataset show that the VS-Net model outperforms state-of-the-art approaches in both time and robustness measures.
[ { "created": "Wed, 11 Jan 2023 13:07:31 GMT", "version": "v1" } ]
2023-01-12
[ [ "Singh", "Hemraj", "" ], [ "Verma", "Mridula", "" ], [ "Cheruku", "Ramalingaswamy", "" ] ]
Video Salient Document Detection (VSDD) is an essential task of practical computer vision, which aims to highlight visually salient document regions in video frames. Previous techniques for VSDD focus on learning features without considering the cooperation among and across the appearance and motion cues and thus fail to perform in practical scenarios. Moreover, most of the previous techniques demand high computational resources, which limits the usage of such systems in resource-constrained settings. To handle these issues, we propose VS-Net, which captures multi-scale spatiotemporal information with the help of dilated depth-wise separable convolution and Approximation Rank Pooling. VS-Net extracts the key features locally from each frame across embedding sub-spaces and forwards the features between adjacent and parallel nodes, enhancing model performance globally. Our model generates saliency maps considering both the background and foreground simultaneously, making it perform better in challenging scenarios. Extensive experiments conducted on the benchmark MIDV-500 dataset show that the VS-Net model outperforms state-of-the-art approaches in both time and robustness measures.
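A minimal sketch of a dilated depth-wise separable convolution, the lightweight building block behind such multiscale designs, may help; channel counts and dilation rates below are illustrative, not the paper's configuration (PyTorch assumed):

```python
import torch
import torch.nn as nn

class DilatedDWSeparable(nn.Module):
    """Depth-wise separable convolution with dilation: a cheap way to grow
    the receptive field, in the spirit of VS-Net's multiscale blocks."""
    def __init__(self, channels, dilation):
        super().__init__()
        self.depthwise = nn.Conv2d(channels, channels, kernel_size=3,
                                   padding=dilation, dilation=dilation,
                                   groups=channels, bias=False)
        self.pointwise = nn.Conv2d(channels, channels, kernel_size=1, bias=False)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

# Parallel branches at several dilation rates capture multiscale context.
branches = nn.ModuleList(DilatedDWSeparable(32, d) for d in (1, 2, 4))
x = torch.randn(1, 32, 64, 64)
multiscale = torch.cat([b(x) for b in branches], dim=1)
print(multiscale.shape)   # torch.Size([1, 96, 64, 64])
```

Grouped (depth-wise) kernels plus 1x1 pointwise mixing keep the parameter count far below a dense 3x3 convolution, which is what makes the model viable in resource-constrained settings.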
cs/0605003
Amitabh Saxena
Amitabh Saxena and Ben Soh
A New Cryptosystem Based On Hidden Order Groups
removed examples for multiparty key agreement and join protocols, since they are redundant
null
null
null
cs.CR cs.CC
null
Let $G_1$ be a cyclic multiplicative group of order $n$. It is known that the Diffie-Hellman problem is random self-reducible in $G_1$ with respect to a fixed generator $g$ if $\phi(n)$ is known. That is, given $g, g^x\in G_1$ and having oracle access to a `Diffie-Hellman Problem' solver with fixed generator $g$, it is possible to compute $g^{1/x} \in G_1$ in polynomial time (see theorem 3.2). On the other hand, it is not known if such a reduction exists when $\phi(n)$ is unknown (see conjecture 3.1). We exploit this ``gap'' to construct a cryptosystem based on hidden order groups and present a practical implementation of a novel cryptographic primitive called an \emph{Oracle Strong Associative One-Way Function} (O-SAOWF). O-SAOWFs have applications in multiparty protocols. We demonstrate this by presenting a key agreement protocol for dynamic ad-hoc groups.
[ { "created": "Sun, 30 Apr 2006 18:13:10 GMT", "version": "v1" }, { "created": "Tue, 2 May 2006 16:59:08 GMT", "version": "v2" }, { "created": "Wed, 3 May 2006 17:55:02 GMT", "version": "v3" }, { "created": "Wed, 3 May 2006 21:23:55 GMT", "version": "v4" } ]
2007-05-23
[ [ "Saxena", "Amitabh", "" ], [ "Soh", "Ben", "" ] ]
Let $G_1$ be a cyclic multiplicative group of order $n$. It is known that the Diffie-Hellman problem is random self-reducible in $G_1$ with respect to a fixed generator $g$ if $\phi(n)$ is known. That is, given $g, g^x\in G_1$ and having oracle access to a `Diffie-Hellman Problem' solver with fixed generator $g$, it is possible to compute $g^{1/x} \in G_1$ in polynomial time (see theorem 3.2). On the other hand, it is not known if such a reduction exists when $\phi(n)$ is unknown (see conjecture 3.1). We exploit this ``gap'' to construct a cryptosystem based on hidden order groups and present a practical implementation of a novel cryptographic primitive called an \emph{Oracle Strong Associative One-Way Function} (O-SAOWF). O-SAOWFs have applications in multiparty protocols. We demonstrate this by presenting a key agreement protocol for dynamic ad-hoc groups.
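A toy rendition of the reduction behind theorem 3.2 may be helpful: when $\phi(n)$ is known and $\gcd(x, n) = 1$, Euler's theorem gives $g^{1/x} = g^{x^{\phi(n)-1}}$, which square-and-multiply in the exponent computes with $O(\log \phi(n))$ oracle calls. The sketch below simulates the DH oracle by brute-force discrete log, feasible only because the toy group is tiny:

```python
p, g = 467, 2        # toy group: G1 = <2> = Z_467^*, of order n = 466
n = p - 1
phi_n = 232          # phi(466) = phi(2) * phi(233) = 232

def dlog(h):
    """Brute-force discrete log; exists solely to simulate the DH oracle."""
    e, acc = 0, 1
    while acc != h:
        acc = acc * g % p
        e += 1
    return e

def dh(ga, gb):
    """Simulated DH oracle with fixed generator g: (g^a, g^b) -> g^{ab}."""
    return pow(gb, dlog(ga), p)

def oracle_power(gx, k):
    """Compute g^{x^k} with O(log k) oracle calls: square-and-multiply
    carried out in the exponent of x."""
    result, base = g, gx                # g^{x^0} and g^{x^1}
    while k:
        if k & 1:
            result = dh(result, base)   # x^r * x^s = x^{r+s}
        base = dh(base, base)           # squaring: x^s -> x^{2s}
        k >>= 1
    return result

x = 123                                 # secret exponent, gcd(123, 466) = 1
gx = pow(g, x, p)
# Euler: x^{phi(n)} = 1 (mod n), hence g^{x^{phi(n)-1}} = g^{x^{-1}}.
g_inv_x = oracle_power(gx, phi_n - 1)
assert pow(g_inv_x, x, p) == g
print("g^{1/x} =", g_inv_x)
```

When the group order is hidden, the exponent $\phi(n)-1$ is unavailable, which is exactly the gap the proposed cryptosystem exploits.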
2212.07603
Lechao Cheng
Zerun Liu, Fan Zhang, Jingxuan He, Jin Wang, Zhangye Wang, Lechao Cheng
Text-Guided Mask-free Local Image Retouching
7 pages, 6 figures, 1 table
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
In the realm of multi-modality, text-guided image retouching techniques emerged with the advent of deep learning. Most currently available text-guided methods, however, rely on object-level supervision to constrain the region that may be modified. This not only makes it more challenging to develop these algorithms, but it also limits how widely deep learning can be used for image retouching. To address this concern, we offer a text-guided, mask-free image retouching approach that yields consistent results. To perform image retouching without mask supervision, our technique constructs plausible and edge-sharp masks based on the text for each object in the image. Extensive experiments have shown that our method can produce high-quality, accurate images based on spoken language. The source code will be released soon.
[ { "created": "Thu, 15 Dec 2022 03:26:53 GMT", "version": "v1" }, { "created": "Fri, 24 Feb 2023 05:46:02 GMT", "version": "v2" } ]
2023-02-27
[ [ "Liu", "Zerun", "" ], [ "Zhang", "Fan", "" ], [ "He", "Jingxuan", "" ], [ "Wang", "Jin", "" ], [ "Wang", "Zhangye", "" ], [ "Cheng", "Lechao", "" ] ]
In the realm of multi-modality, text-guided image retouching techniques emerged with the advent of deep learning. Most currently available text-guided methods, however, rely on object-level supervision to constrain the region that may be modified. This not only makes it more challenging to develop these algorithms, but it also limits how widely deep learning can be used for image retouching. To address this concern, we offer a text-guided, mask-free image retouching approach that yields consistent results. To perform image retouching without mask supervision, our technique constructs plausible and edge-sharp masks based on the text for each object in the image. Extensive experiments have shown that our method can produce high-quality, accurate images based on spoken language. The source code will be released soon.
2108.09952
Lorena P\'erez-Garc\'ia
Lorena P\'erez-Garc\'ia
The ICT-Buen Vivir Paradox: Using Digital Tools to Defend Indigenous Cultures
In proceedings of the 1st Virtual Conference on Implications of Information and Digital Technologies for Development, 2021
null
null
null
cs.CY
http://creativecommons.org/licenses/by-nc-sa/4.0/
Arguably shaped by political economy perspectives from the Global North, ICT4D aims to reduce socioeconomic disparities across countries and regions through ICT implementations, as well as to open up opportunities for empowerment and human development. Despite these aims, ICT4D has been criticized because 1) although ICT and internet have positive effects on societies across the Global North, their positive impact on people's lives in the Global South cannot be easily proved; 2) ICT4D's primary focus seems to be on ICT's series of artefacts rather than on ICT's positive transformative potential of living conditions in the world; 3) the type of development ICT4D aims for could mask global hegemonic interests and seek neoliberal restructuring within less socioeconomically favoured communities within the Global South. For these reasons, scholars claim, ICT4D should be revised. By presenting ICT appropriations among Wixarika peoples in Mexico to protect their sacred land, this paper aims 1) to shed light on the need for postcolonial critical frameworks on what 'development' associated with ICT should be and 2) to foster discussions on whether ICT can enable alternative voices from the Global South to be heard, despite tensions between traditional views and contemporary technologies.
[ { "created": "Mon, 23 Aug 2021 05:49:55 GMT", "version": "v1" } ]
2021-08-24
[ [ "Pérez-García", "Lorena", "" ] ]
Arguably shaped by political economy perspectives from the Global North, ICT4D aims to reduce socioeconomic disparities across countries and regions through ICT implementations, as well as to open up opportunities for empowerment and human development. Despite these aims, ICT4D has been criticized because 1) although ICT and internet have positive effects on societies across the Global North, their positive impact on people's lives in the Global South cannot be easily proved; 2) ICT4D's primary focus seems to be on ICT's series of artefacts rather than on ICT's positive transformative potential of living conditions in the world; 3) the type of development ICT4D aims for could mask global hegemonic interests and seek neoliberal restructuring within less socioeconomically favoured communities within the Global South. For these reasons, scholars claim, ICT4D should be revised. By presenting ICT appropriations among Wixarika peoples in Mexico to protect their sacred land, this paper aims 1) to shed light on the need for postcolonial critical frameworks on what 'development' associated with ICT should be and 2) to foster discussions on whether ICT can enable alternative voices from the Global South to be heard, despite tensions between traditional views and contemporary technologies.
2108.03978
Sadhana Shanmuga Sundaram
Aebel Joe Shibu, Sadhana S, Shilpa N, Pratyush Kumar
VeRLPy: Python Library for Verification of Digital Designs with Reinforcement Learning
submitted to The first international conference on AI-ML Systems
null
null
null
cs.AR cs.LG cs.SE
http://creativecommons.org/licenses/by-nc-nd/4.0/
Digital hardware is verified by comparing its behavior against a reference model on a range of randomly generated input signals. The random generation of the inputs hopes to achieve sufficient coverage of the different parts of the design. However, such coverage is often difficult to achieve, amounting to large verification efforts and delays. An alternative is to use Reinforcement Learning (RL) to generate the inputs by learning to prioritize those inputs which can more efficiently explore the design under test. In this work, we present VeRLPy, an open-source library that enables RL-driven verification with limited additional engineering overhead. This contributes to two broad movements within the EDA community: (a) moving to open-source toolchains and (b) reducing barriers for development with Python support. We also demonstrate the use of VeRLPy for a few designs and establish its value over randomly generated input signals.
[ { "created": "Mon, 9 Aug 2021 12:27:31 GMT", "version": "v1" } ]
2021-08-10
[ [ "Shibu", "Aebel Joe", "" ], [ "S", "Sadhana", "" ], [ "N", "Shilpa", "" ], [ "Kumar", "Pratyush", "" ] ]
Digital hardware is verified by comparing its behavior against a reference model on a range of randomly generated input signals. The random generation of the inputs hopes to achieve sufficient coverage of the different parts of the design. However, such coverage is often difficult to achieve, amounting to large verification efforts and delays. An alternative is to use Reinforcement Learning (RL) to generate the inputs by learning to prioritize those inputs which can more efficiently explore the design under test. In this work, we present VeRLPy, an open-source library that enables RL-driven verification with limited additional engineering overhead. This contributes to two broad movements within the EDA community: (a) moving to open-source toolchains and (b) reducing barriers for development with Python support. We also demonstrate the use of VeRLPy for a few designs and establish its value over randomly generated input signals.
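The RL-driven loop can be sketched generically: a policy over input stimuli is nudged toward episodes that hit new functional coverage. The code below is a toy stand-in, a bandit-style update on a saturating counter, and does not reflect VeRLPy's actual API:

```python
import random

ACTIONS = [-2, -1, 1, 2]

def dut_step(state, inp):
    """Toy design under test: a 4-bit saturating up/down counter."""
    return max(0, min(15, state + inp))

def run_episode(weights, coverage):
    """One constrained-random test driven by the current input distribution.
    Reward = newly covered counter states, a crude functional-coverage proxy."""
    state, reward, taken = 0, 0, []
    for _ in range(20):
        i = random.choices(range(len(ACTIONS)), weights=weights)[0]
        taken.append(i)
        state = dut_step(state, ACTIONS[i])
        if state not in coverage:
            coverage.add(state)
            reward += 1
    return reward, taken

# Bandit-style update: inputs used in rewarding episodes gain probability mass.
weights, coverage = [1.0] * len(ACTIONS), set()
for _ in range(50):
    reward, taken = run_episode(weights, coverage)
    for i in taken:
        weights[i] += 0.01 * reward

print(f"covered {len(coverage)}/16 states; weights {[round(w, 2) for w in weights]}")
```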
2311.11123
Chenyang Yang
Wanqin Ma, Chenyang Yang, Christian K\"astner
(Why) Is My Prompt Getting Worse? Rethinking Regression Testing for Evolving LLM APIs
conference version
null
null
null
cs.SE cs.CL
http://creativecommons.org/licenses/by/4.0/
Large Language Models (LLMs) are increasingly integrated into software applications. Downstream application developers often access LLMs through APIs provided as a service. However, LLM APIs are often updated silently and scheduled to be deprecated, forcing users to continuously adapt to evolving models. This can cause performance regression and affect prompt design choices, as evidenced by our case study on toxicity detection. Based on our case study, we emphasize the need for and re-examine the concept of regression testing for evolving LLM APIs. We argue that regression testing LLMs requires fundamental changes to traditional testing approaches, due to different correctness notions, prompting brittleness, and non-determinism in LLM APIs.
[ { "created": "Sat, 18 Nov 2023 17:11:12 GMT", "version": "v1" }, { "created": "Tue, 6 Feb 2024 20:32:41 GMT", "version": "v2" } ]
2024-02-08
[ [ "Ma", "Wanqin", "" ], [ "Yang", "Chenyang", "" ], [ "Kästner", "Christian", "" ] ]
Large Language Models (LLMs) are increasingly integrated into software applications. Downstream application developers often access LLMs through APIs provided as a service. However, LLM APIs are often updated silently and scheduled to be deprecated, forcing users to continuously adapt to evolving models. This can cause performance regression and affect prompt design choices, as evidenced by our case study on toxicity detection. Based on our case study, we emphasize the need for and re-examine the concept of regression testing for evolving LLM APIs. We argue that regression testing LLMs requires fundamental changes to traditional testing approaches, due to different correctness notions, prompting brittleness, and non-determinism in LLM APIs.
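A minimal regression-testing harness along these lines would pin both model versions, sample each prompt several times to absorb non-determinism, and compare task-level accuracy rather than raw strings. The `call_llm` stub below is hypothetical; a real SDK call would be swapped in:

```python
import statistics

def call_llm(prompt: str, model: str) -> str:
    """Hypothetical client for a hosted LLM API; swap in a real SDK call."""
    raise NotImplementedError

def score(output: str, expected_label: str) -> float:
    # Correctness notion: label containment after normalization,
    # not exact string equality.
    return float(expected_label.lower() in output.lower())

def regression_check(cases, old_model, new_model, n_samples=5, tol=0.05):
    """Compare aggregate accuracy across model versions, sampling each prompt
    several times because LLM outputs are non-deterministic."""
    def accuracy(model):
        return statistics.mean(
            statistics.mean(score(call_llm(c["prompt"], model), c["label"])
                            for _ in range(n_samples))
            for c in cases
        )
    old_acc, new_acc = accuracy(old_model), accuracy(new_model)
    assert new_acc >= old_acc - tol, (
        f"regression: {old_model}={old_acc:.3f} -> {new_model}={new_acc:.3f}")
    return old_acc, new_acc
```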
1601.01410
Geoffrey Gamble
Geoffrey George Gamble and Mehrdad Yazdani
Sparse signals for the control of human movements using the infinity norm
null
null
null
null
cs.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Optimal control models have been successful in describing many aspects of human movement. The interpretation of such models regarding neuronal implementation of the human motor system is not clear. An important aspect of optimal control policies is the notion of cost: optimal control seeks to minimize a notion of cost while meeting certain goals. We offer a method to transform current methods in the literature from their traditional form by changing the norm by which cost is assessed. We show how sparsity can be introduced into current optimal approaches that use continuous control signals. We assess cost using the infinity norm. This results in optimal signals which can be represented by a small number of Dirac delta functions. Sparsity has played an important role in theoretical neuroscience for information processing (such as vision). In this work, to obtain sparse control signals, the infinity norm is used as a penalty on the control signal, which is then encoded with Dirac delta functions. We show that, for a basic physical system, a point mass can be moved between two points in a way that resembles human fast reaching movements. Despite the sparse nature of the control signal, the resulting movements are continuous and smooth. These control signals are simpler than their non-sparse counterparts, yet yield comparable results when applied to modeling reaching movements. In addition, such sparse control signals resemble a sequence of spikes, giving this approach a biological interpretation. Actual neuronal implementations are more complex. However, this work shows, in principle, that sparsely encoded control signals are a plausible implementation for the control of reaching movements. Leading techniques for modeling human movements can easily be adjusted to introduce sparsity, with a biological interpretation and simplified information content of the control signal.
[ { "created": "Thu, 7 Jan 2016 06:20:37 GMT", "version": "v1" } ]
2016-01-08
[ [ "Gamble", "Geoffrey George", "" ], [ "Yazdani", "Mehrdad", "" ] ]
Optimal control models have been successful in describing many aspects of human movement. The interpretation of such models regarding neuronal implementation of the human motor system is not clear. An important aspect of optimal control policies is the notion of cost: optimal control seeks to minimize a notion of cost while meeting certain goals. We offer a method to transform current methods in the literature from their traditional form by changing the norm by which cost is assessed. We show how sparsity can be introduced into current optimal approaches that use continuous control signals. We assess cost using the infinity norm. This results in optimal signals which can be represented by a small number of Dirac delta functions. Sparsity has played an important role in theoretical neuroscience for information processing (such as vision). In this work, to obtain sparse control signals, the infinity norm is used as a penalty on the control signal, which is then encoded with Dirac delta functions. We show that, for a basic physical system, a point mass can be moved between two points in a way that resembles human fast reaching movements. Despite the sparse nature of the control signal, the resulting movements are continuous and smooth. These control signals are simpler than their non-sparse counterparts, yet yield comparable results when applied to modeling reaching movements. In addition, such sparse control signals resemble a sequence of spikes, giving this approach a biological interpretation. Actual neuronal implementations are more complex. However, this work shows, in principle, that sparsely encoded control signals are a plausible implementation for the control of reaching movements. Leading techniques for modeling human movements can easily be adjusted to introduce sparsity, with a biological interpretation and simplified information content of the control signal.
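One way to see the role of the infinity norm is a small fixed-endpoint reaching problem: minimizing the peak control of a discretized double integrator (point mass) is a linear program. This sketch covers only that formulation; the paper's Dirac-delta encoding of the resulting signals and its movement model go beyond it:

```python
import numpy as np
from scipy.optimize import linprog

T, dt = 20, 0.05                      # horizon steps and step size
# Variables z = [u_0, ..., u_{T-1}, s]; minimize s = max_k |u_k|.
c = np.zeros(T + 1); c[-1] = 1.0

# |u_k| <= s  <=>  u_k - s <= 0 and -u_k - s <= 0
A_ub = np.zeros((2 * T, T + 1))
A_ub[:T, :T] = np.eye(T);  A_ub[:T, -1] = -1.0
A_ub[T:, :T] = -np.eye(T); A_ub[T:, -1] = -1.0
b_ub = np.zeros(2 * T)

# Double integrator from rest at 0 to rest at 1 (explicit Euler):
#   v_T = dt * sum_j u_j = 0,   x_T = dt^2 * sum_j (T-1-j) * u_j = 1
A_eq = np.zeros((2, T + 1))
A_eq[0, :T] = dt
A_eq[1, :T] = dt ** 2 * (T - 1 - np.arange(T))
b_eq = np.array([0.0, 1.0])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(None, None)] * T + [(0, None)])
print("peak control:", res.x[-1])
print("control signal:", np.round(res.x[:T], 3))   # typically a bang-bang profile
```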
2005.06739
Ali Khajegili Mirabadi
Ali Khajegili Mirabadi and Stefano Rini
The Information & Mutual Information Ratio for Counting Image Features and Their Matches
8-th Iran Workshop on Communication and Information Theory, 2020
null
null
null
cs.CV cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Feature extraction and description is an important topic of computer vision, as it is the starting point of a number of tasks such as image reconstruction, stitching, registration, and recognition, among many others. In this paper, two new image features are proposed: the Information Ratio (IR) and the Mutual Information Ratio (MIR). The IR is a feature of a single image, while the MIR describes features common across two or more images. We begin by introducing the IR and the MIR and motivate these features in an information-theoretical context as the ratio of the self-information of an intensity level over the information contained over the pixels of the same intensity. Notably, the relationships of the IR and MIR with the image entropy and mutual information, classic information measures, are discussed. Finally, the effectiveness of these features is tested through feature extraction over the INRIA Copydays dataset and feature matching over the Oxford Affine Covariant Regions. These numerical evaluations validate the relevance of the IR and MIR in practical computer vision tasks.
[ { "created": "Thu, 14 May 2020 06:27:01 GMT", "version": "v1" } ]
2020-05-15
[ [ "Mirabadi", "Ali Khajegili", "" ], [ "Rini", "Stefano", "" ] ]
Feature extraction and description is an important topic of computer vision, as it is the starting point of a number of tasks such as image reconstruction, stitching, registration, and recognition, among many others. In this paper, two new image features are proposed: the Information Ratio (IR) and the Mutual Information Ratio (MIR). The IR is a feature of a single image, while the MIR describes features common across two or more images. We begin by introducing the IR and the MIR and motivate these features in an information-theoretical context as the ratio of the self-information of an intensity level over the information contained over the pixels of the same intensity. Notably, the relationships of the IR and MIR with the image entropy and mutual information, classic information measures, are discussed. Finally, the effectiveness of these features is tested through feature extraction over the INRIA Copydays dataset and feature matching over the Oxford Affine Covariant Regions. These numerical evaluations validate the relevance of the IR and MIR in practical computer vision tasks.
1211.2126
Hela Ltifi Ms
Hela Ltifi, Ghada Trabelsi, Mounir Ben Ayed, Adel M. Alimi
Dynamic Decision Support System Based on Bayesian Networks Application to fight against the Nosocomial Infections
8 pages, 6 figures, 43 references
International Journal of Advanced Research in Artificial Intelligence (IJARAI), vol 1(1), pp. 22-29, 2012
null
null
cs.AI cs.DB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Improving the quality of medical care is a significant concern for the coming years. The fight against nosocomial infections (NI) in intensive care units (ICUs) is a good example. We focus on a set of observations that reflect the dynamic aspect of decisions resulting from the application of a Medical Decision Support System (MDSS). Such a system has to make dynamic decisions on temporal data. We use a dynamic Bayesian network (DBN) to model this dynamic process. This amounts to temporal reasoning within a real-time environment; we are therefore interested in dynamic decision support systems for the healthcare domain (MDDSS).
[ { "created": "Fri, 9 Nov 2012 13:36:44 GMT", "version": "v1" } ]
2012-11-12
[ [ "Ltifi", "Hela", "" ], [ "Trabelsi", "Ghada", "" ], [ "Ayed", "Mounir Ben", "" ], [ "Alimi", "Adel M.", "" ] ]
Improving the quality of medical care is a significant concern for the coming years. The fight against nosocomial infections (NI) in intensive care units (ICUs) is a good example. We focus on a set of observations that reflect the dynamic aspect of decisions resulting from the application of a Medical Decision Support System (MDSS). Such a system has to make dynamic decisions on temporal data. We use a dynamic Bayesian network (DBN) to model this dynamic process. This amounts to temporal reasoning within a real-time environment; we are therefore interested in dynamic decision support systems for the healthcare domain (MDDSS).
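The temporal reasoning involved can be sketched as forward filtering in a two-timeslice DBN: predict the hidden infection-risk state with the transition model, then correct with each new ICU observation. All probabilities below are made up for illustration, not clinical estimates:

```python
import numpy as np

# Two-timeslice DBN for NI risk: hidden state S_t in {low, high},
# observation O_t in {normal, abnormal}.
transition = np.array([[0.9, 0.1],    # P(S_{t+1} | S_t = low)
                       [0.3, 0.7]])   # P(S_{t+1} | S_t = high)
emission = np.array([[0.8, 0.2],      # P(O_t | S_t = low)
                     [0.25, 0.75]])   # P(O_t | S_t = high)

def filter_step(belief, obs):
    """One dynamic-decision update: predict with the transition model,
    then correct with the new ICU observation (forward algorithm)."""
    predicted = belief @ transition
    corrected = predicted * emission[:, obs]
    return corrected / corrected.sum()

belief = np.array([0.95, 0.05])         # prior: infection risk is low
for t, obs in enumerate([0, 1, 1, 1]):  # normal, then repeated abnormal readings
    belief = filter_step(belief, obs)
    print(f"t={t}: P(high NI risk) = {belief[1]:.3f}")
```

A decision layer would then compare the filtered risk against intervention thresholds at each time step.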
2008.00500
Zhide Wang
Yanling Chang and Alfredo Garcia and Zhide Wang and Lu Sun
Structural Estimation of Partially Observable Markov Decision Processes
null
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In many practical settings, control decisions must be made under partial/imperfect information about the evolution of a relevant state variable. Partially Observable Markov Decision Processes (POMDPs) provide a relatively well-developed framework for modeling and analyzing such problems. In this paper we consider the structural estimation of the primitives of a POMDP model based upon the observable history of the process. We analyze the structural properties of the POMDP model with random rewards and specify conditions under which the model is identifiable without knowledge of the state dynamics. We consider a soft policy gradient algorithm to compute a maximum likelihood estimator and provide a finite-time characterization of convergence to a stationary point. We illustrate the estimation methodology with an application to optimal equipment replacement. In this context, replacement decisions must be made under partial/imperfect information on the true state (i.e. the condition of the equipment). We use synthetic and real data to highlight the robustness of the proposed methodology and characterize the potential for misspecification when partial state observability is ignored.
[ { "created": "Sun, 2 Aug 2020 15:04:27 GMT", "version": "v1" }, { "created": "Fri, 27 Nov 2020 20:22:07 GMT", "version": "v2" }, { "created": "Tue, 28 Dec 2021 18:58:40 GMT", "version": "v3" } ]
2021-12-30
[ [ "Chang", "Yanling", "" ], [ "Garcia", "Alfredo", "" ], [ "Wang", "Zhide", "" ], [ "Sun", "Lu", "" ] ]
In many practical settings, control decisions must be made under partial/imperfect information about the evolution of a relevant state variable. Partially Observable Markov Decision Processes (POMDPs) provide a relatively well-developed framework for modeling and analyzing such problems. In this paper we consider the structural estimation of the primitives of a POMDP model based upon the observable history of the process. We analyze the structural properties of the POMDP model with random rewards and specify conditions under which the model is identifiable without knowledge of the state dynamics. We consider a soft policy gradient algorithm to compute a maximum likelihood estimator and provide a finite-time characterization of convergence to a stationary point. We illustrate the estimation methodology with an application to optimal equipment replacement. In this context, replacement decisions must be made under partial/imperfect information on the true state (i.e. the condition of the equipment). We use synthetic and real data to highlight the robustness of the proposed methodology and characterize the potential for misspecification when partial state observability is ignored.
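A stripped-down version of the estimation idea, recovering the parameters of a soft (Boltzmann) policy by maximum likelihood from observed choices, can be written in a few lines. This single-step toy ignores the hidden-state dynamics that make the full POMDP problem hard:

```python
import numpy as np

rng = np.random.default_rng(1)

# Observed histories: features phi(s, a) and the action the agent chose,
# assuming a soft policy pi(a|s) proportional to exp(theta . phi(s, a)).
n, n_actions, d = 2000, 3, 4
phi = rng.normal(size=(n, n_actions, d))
theta_true = np.array([1.0, -0.5, 0.8, 0.2])

logits = phi @ theta_true
probs = np.exp(logits - logits.max(axis=1, keepdims=True))
probs /= probs.sum(axis=1, keepdims=True)
actions = np.array([rng.choice(n_actions, p=p) for p in probs])

theta = np.zeros(d)
for _ in range(500):                      # gradient ascent on the log-likelihood
    z = phi @ theta
    p = np.exp(z - z.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    chosen = phi[np.arange(n), actions]   # phi(s, a_observed)
    grad = (chosen - (p[:, :, None] * phi).sum(axis=1)).mean(axis=0)
    theta += 0.2 * grad

print("estimate:", np.round(theta, 2), " truth:", theta_true)
```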
2212.06463
Van-Dinh Nguyen
Nguyen Cong Luong, Quoc-Viet Pham, Thien Huynh-The, Van-Dinh Nguyen, Derrick Wing Kwan Ng, and Symeon Chatzinotas
Edge Computing for Semantic Communication Enabled Metaverse: An Incentive Mechanism Design
7 pages, 5 figures
null
null
null
cs.GT cs.LG
http://creativecommons.org/licenses/by/4.0/
Semantic communication (SemCom) and edge computing are two disruptive solutions to address the emerging requirements of huge data communication, bandwidth efficiency, and low-latency data processing in the Metaverse. However, edge computing resources are often provided by computing service providers, and thus it is essential to design appealing incentive mechanisms for the provision of limited resources. A deep learning (DL)-based auction has recently been proposed as an incentive mechanism that maximizes the revenue while holding important economic properties, i.e., individual rationality and incentive compatibility. Therefore, in this work, we introduce the design of the DL-based auction for computing resource allocation in the SemCom-enabled Metaverse. First, we briefly introduce the fundamentals and challenges of the Metaverse. Second, we present the preliminaries of SemCom and edge computing. Third, we review various incentive mechanisms for edge computing resource trading. Fourth, we present the design of the DL-based auction for edge resource allocation in the SemCom-enabled Metaverse. Simulation results demonstrate that the DL-based auction improves the revenue while nearly satisfying the individual rationality and incentive compatibility constraints.
[ { "created": "Tue, 13 Dec 2022 10:29:41 GMT", "version": "v1" } ]
2022-12-14
[ [ "Luong", "Nguyen Cong", "" ], [ "Pham", "Quoc-Viet", "" ], [ "Huynh-The", "Thien", "" ], [ "Nguyen", "Van-Dinh", "" ], [ "Ng", "Derrick Wing Kwan", "" ], [ "Chatzinotas", "Symeon", "" ] ]
Semantic communication (SemCom) and edge computing are two disruptive solutions to address the emerging requirements of huge data communication, bandwidth efficiency, and low-latency data processing in the Metaverse. However, edge computing resources are often provided by computing service providers, and thus it is essential to design appealing incentive mechanisms for the provision of limited resources. A deep learning (DL)-based auction has recently been proposed as an incentive mechanism that maximizes the revenue while holding important economic properties, i.e., individual rationality and incentive compatibility. Therefore, in this work, we introduce the design of the DL-based auction for computing resource allocation in the SemCom-enabled Metaverse. First, we briefly introduce the fundamentals and challenges of the Metaverse. Second, we present the preliminaries of SemCom and edge computing. Third, we review various incentive mechanisms for edge computing resource trading. Fourth, we present the design of the DL-based auction for edge resource allocation in the SemCom-enabled Metaverse. Simulation results demonstrate that the DL-based auction improves the revenue while nearly satisfying the individual rationality and incentive compatibility constraints.
2210.17409
Xingyi Yang
Xingyi Yang, Daquan Zhou, Songhua Liu, Jingwen Ye, Xinchao Wang
Deep Model Reassembly
NeurIPS 2022
null
null
null
cs.CV cs.AI
http://creativecommons.org/licenses/by/4.0/
In this paper, we explore a novel knowledge-transfer task, termed Deep Model Reassembly (DeRy), for general-purpose model reuse. Given a collection of heterogeneous models pre-trained from distinct sources and with diverse architectures, the goal of DeRy, as its name implies, is to first dissect each model into distinctive building blocks, and then selectively reassemble the derived blocks to produce customized networks under both hardware resource and performance constraints. The ambitious nature of DeRy inevitably imposes significant challenges, including, in the first place, the feasibility of its solution. We strive to showcase that, through a dedicated paradigm proposed in this paper, DeRy can be made not only possible but practically efficient. Specifically, we conduct the partitions of all pre-trained networks jointly via a cover set optimization, and derive a number of equivalence sets, within each of which the network blocks are treated as functionally equivalent and hence interchangeable. The equivalence sets learned in this way, in turn, enable picking and assembling blocks to customize networks subject to certain constraints, which is achieved via solving an integer program backed up with a training-free proxy to estimate the task performance. The reassembled models give rise to gratifying performance with the user-specified constraints satisfied. We demonstrate that on ImageNet, the best reassembled model achieves 78.6% top-1 accuracy without fine-tuning, which could be further elevated to 83.2% with end-to-end training. Our code is available at https://github.com/Adamdad/DeRy
[ { "created": "Mon, 24 Oct 2022 10:16:13 GMT", "version": "v1" }, { "created": "Wed, 2 Nov 2022 16:16:28 GMT", "version": "v2" } ]
2022-11-03
[ [ "Yang", "Xingyi", "" ], [ "Zhou", "Daquan", "" ], [ "Liu", "Songhua", "" ], [ "Ye", "Jingwen", "" ], [ "Wang", "Xinchao", "" ] ]
In this paper, we explore a novel knowledge-transfer task, termed Deep Model Reassembly (DeRy), for general-purpose model reuse. Given a collection of heterogeneous models pre-trained from distinct sources and with diverse architectures, the goal of DeRy, as its name implies, is to first dissect each model into distinctive building blocks, and then selectively reassemble the derived blocks to produce customized networks under both hardware resource and performance constraints. The ambitious nature of DeRy inevitably imposes significant challenges, including, in the first place, the feasibility of its solution. We strive to showcase that, through a dedicated paradigm proposed in this paper, DeRy can be made not only possible but practically efficient. Specifically, we conduct the partitions of all pre-trained networks jointly via a cover set optimization, and derive a number of equivalence sets, within each of which the network blocks are treated as functionally equivalent and hence interchangeable. The equivalence sets learned in this way, in turn, enable picking and assembling blocks to customize networks subject to certain constraints, which is achieved via solving an integer program backed up with a training-free proxy to estimate the task performance. The reassembled models give rise to gratifying performance with the user-specified constraints satisfied. We demonstrate that on ImageNet, the best reassembled model achieves 78.6% top-1 accuracy without fine-tuning, which could be further elevated to 83.2% with end-to-end training. Our code is available at https://github.com/Adamdad/DeRy
2101.05473
Ben Hermans
Alessandro Agnetis, Ben Hermans, Roel Leus, and Salim Rostami
Time-critical testing and search problems
null
null
null
null
cs.DM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper introduces a problem in which the state of a system needs to be determined through costly tests of its components by a limited number of testing units and before a given deadline. We also consider a closely related search problem in which there are multiple searchers to find a target before a given deadline. These natural generalizations of the classical sequential testing problem and search problem are applicable in a wide range of time-critical operations such as machine maintenance, diagnosing a patient, and new product development. We show that both problems are NP-hard, develop a pseudo-polynomial dynamic program for the special case of two time slots, and describe a partial-order-based as well as an assignment-based mixed integer program for the general case. Based on extensive computational experiments, we find that the assignment-based formulation performs better than the partial-order-based formulation for the testing variant, but that this is the other way round for the search variant. Finally, we propose a pairwise-interchange-based local search procedure and show that, empirically, it performs very well in finding near-optimal solutions.
[ { "created": "Thu, 14 Jan 2021 06:54:39 GMT", "version": "v1" } ]
2021-01-15
[ [ "Agnetis", "Alessandro", "" ], [ "Hermans", "Ben", "" ], [ "Leus", "Roel", "" ], [ "Rostami", "Salim", "" ] ]
This paper introduces a problem in which the state of a system needs to be determined through costly tests of its components by a limited number of testing units and before a given deadline. We also consider a closely related search problem in which there are multiple searchers to find a target before a given deadline. These natural generalizations of the classical sequential testing problem and search problem are applicable in a wide range of time-critical operations such as machine maintenance, diagnosing a patient, and new product development. We show that both problems are NP-hard, develop a pseudo-polynomial dynamic program for the special case of two time slots, and describe a partial-order-based as well as an assignment-based mixed integer program for the general case. Based on extensive computational experiments, we find that the assignment-based formulation performs better than the partial-order-based formulation for the testing variant, but that this is the other way round for the search variant. Finally, we propose a pairwise-interchange-based local search procedure and show that, empirically, it performs very well in finding near-optimal solutions.
2006.10022
Forough Arabshahi
Forough Arabshahi, Jennifer Lee, Mikayla Gawarecki, Kathryn Mazaitis, Amos Azaria, Tom Mitchell
Conversational Neuro-Symbolic Commonsense Reasoning
Appearing in the 35th AAAI international Conference on Artificial Intelligence, 2021
null
null
null
cs.AI cs.CL cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In order for conversational AI systems to hold more natural and broad-ranging conversations, they will require much more commonsense, including the ability to identify unstated presumptions of their conversational partners. For example, in the command "If it snows at night then wake me up early because I don't want to be late for work" the speaker relies on commonsense reasoning of the listener to infer the implicit presumption that they wish to be woken only if it snows enough to cause traffic slowdowns. We consider here the problem of understanding such imprecisely stated natural language commands given in the form of "if-(state), then-(action), because-(goal)" statements. More precisely, we consider the problem of identifying the unstated presumptions of the speaker that allow the requested action to achieve the desired goal from the given state (perhaps elaborated by making the implicit presumptions explicit). We release a benchmark data set for this task, collected from humans and annotated with commonsense presumptions. We present a neuro-symbolic theorem prover that extracts multi-hop reasoning chains, and apply it to this problem. Furthermore, to accommodate the reality that current AI commonsense systems lack full coverage, we also present an interactive conversational framework built on our neuro-symbolic system, that conversationally evokes commonsense knowledge from humans to complete its reasoning chains.
[ { "created": "Wed, 17 Jun 2020 17:28:38 GMT", "version": "v1" }, { "created": "Fri, 19 Jun 2020 18:24:40 GMT", "version": "v2" }, { "created": "Tue, 2 Feb 2021 07:37:41 GMT", "version": "v3" } ]
2021-02-03
[ [ "Arabshahi", "Forough", "" ], [ "Lee", "Jennifer", "" ], [ "Gawarecki", "Mikayla", "" ], [ "Mazaitis", "Kathryn", "" ], [ "Azaria", "Amos", "" ], [ "Mitchell", "Tom", "" ] ]
In order for conversational AI systems to hold more natural and broad-ranging conversations, they will require much more commonsense, including the ability to identify unstated presumptions of their conversational partners. For example, in the command "If it snows at night then wake me up early because I don't want to be late for work" the speaker relies on commonsense reasoning of the listener to infer the implicit presumption that they wish to be woken only if it snows enough to cause traffic slowdowns. We consider here the problem of understanding such imprecisely stated natural language commands given in the form of "if-(state), then-(action), because-(goal)" statements. More precisely, we consider the problem of identifying the unstated presumptions of the speaker that allow the requested action to achieve the desired goal from the given state (perhaps elaborated by making the implicit presumptions explicit). We release a benchmark data set for this task, collected from humans and annotated with commonsense presumptions. We present a neuro-symbolic theorem prover that extracts multi-hop reasoning chains, and apply it to this problem. Furthermore, to accommodate the reality that current AI commonsense systems lack full coverage, we also present an interactive conversational framework built on our neuro-symbolic system, that conversationally evokes commonsense knowledge from humans to complete its reasoning chains.
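The symbolic half of such a prover can be sketched as backward chaining over Horn clauses; the neural component that softly matches rules and evokes missing presumptions is omitted. Rules and facts below are illustrative:

```python
# Minimal backward chaining over Horn clauses: the flavour of multi-hop
# reasoning a neuro-symbolic theorem prover performs (symbolic part only).
RULES = [
    ("wake_early(user)", ["snows_at_night", "heavy_snow"]),
    ("heavy_snow", ["snow_depth_high"]),   # an unstated presumption made explicit
]
FACTS = {"snows_at_night", "snow_depth_high"}

def prove(goal, depth=0, trace=None):
    """Return True and record the multi-hop chain if `goal` is derivable."""
    trace = trace if trace is not None else []
    if goal in FACTS:
        trace.append("  " * depth + f"fact: {goal}")
        return True
    for head, body in RULES:
        if head == goal:
            trace.append("  " * depth + f"rule: {goal} <- {body}")
            if all(prove(sub, depth + 1, trace) for sub in body):
                return True
    return False

chain = []
if prove("wake_early(user)", trace=chain):
    print("\n".join(chain))
```

When a subgoal fails, the interactive framework described above would instead ask the human for the missing presumption and add it as a fact.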
2004.01800
Ping Hu
Ping Hu, Fabian Caba Heilbron, Oliver Wang, Zhe Lin, Stan Sclaroff and Federico Perazzi
Temporally Distributed Networks for Fast Video Semantic Segmentation
[CVPR2020] Project: https://github.com/feinanshan/TDNet
null
null
null
cs.CV cs.LG cs.MM eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present TDNet, a temporally distributed network designed for fast and accurate video semantic segmentation. We observe that features extracted from a certain high-level layer of a deep CNN can be approximated by composing features extracted from several shallower sub-networks. Leveraging the inherent temporal continuity in videos, we distribute these sub-networks over sequential frames. Therefore, at each time step, we only need to perform a lightweight computation to extract a sub-features group from a single sub-network. The full features used for segmentation are then recomposed by application of a novel attention propagation module that compensates for geometry deformation between frames. A grouped knowledge distillation loss is also introduced to further improve the representation power at both full and sub-feature levels. Experiments on Cityscapes, CamVid, and NYUD-v2 demonstrate that our method achieves state-of-the-art accuracy with significantly faster speed and lower latency.
[ { "created": "Fri, 3 Apr 2020 22:43:32 GMT", "version": "v1" }, { "created": "Tue, 7 Apr 2020 00:44:51 GMT", "version": "v2" } ]
2020-04-08
[ [ "Hu", "Ping", "" ], [ "Heilbron", "Fabian Caba", "" ], [ "Wang", "Oliver", "" ], [ "Lin", "Zhe", "" ], [ "Sclaroff", "Stan", "" ], [ "Perazzi", "Federico", "" ] ]
We present TDNet, a temporally distributed network designed for fast and accurate video semantic segmentation. We observe that features extracted from a certain high-level layer of a deep CNN can be approximated by composing features extracted from several shallower sub-networks. Leveraging the inherent temporal continuity in videos, we distribute these sub-networks over sequential frames. Therefore, at each time step, we only need to perform a lightweight computation to extract a sub-features group from a single sub-network. The full features used for segmentation are then recomposed by application of a novel attention propagation module that compensates for geometry deformation between frames. A grouped knowledge distillation loss is also introduced to further improve the representation power at both full and sub-feature levels. Experiments on Cityscapes, CamVid, and NYUD-v2 demonstrate that our method achieves state-of-the-art accuracy with significantly faster speed and lower latency.
2012.12743
Qingtian Zou
Qingtian Zou (1), Anoop Singhal (2), Xiaoyan Sun (3), Peng Liu (1) ((1) The Pennsylvania State University, (2) National Institute of Standards and Technology, (3) California State University, Sacramento)
Generating Comprehensive Data with Protocol Fuzzing for Applying Deep Learning to Detect Network Attacks
null
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Network attacks have become a major security concern for organizations worldwide and have also drawn attention in academia. Recently, researchers have applied neural networks to detect network attacks with network logs. However, public network data sets have major drawbacks such as limited data sample variations and unbalanced data with respect to malicious and benign samples. In this paper, we present a new approach, protocol fuzzing, to automatically generate high-quality network data on which deep learning models can be trained. Our findings show that fuzzing generates data samples that cover real-world data, and deep learning models trained with fuzzed data can successfully detect real network attacks.
[ { "created": "Wed, 23 Dec 2020 15:24:45 GMT", "version": "v1" } ]
2020-12-24
[ [ "Zou", "Qingtian", "" ], [ "Singhal", "Anoop", "" ], [ "Sun", "Xiaoyan", "" ], [ "Liu", "Peng", "" ] ]
Network attacks have become a major security concern for organizations worldwide and have also drawn attention in academia. Recently, researchers have applied neural networks to detect network attacks with network logs. However, public network data sets have major drawbacks such as limited data sample variations and unbalanced data with respect to malicious and benign samples. In this paper, we present a new approach, protocol fuzzing, to automatically generate high-quality network data on which deep learning models can be trained. Our findings show that fuzzing generates data samples that cover real-world data, and deep learning models trained with fuzzed data can successfully detect real network attacks.
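The data-generation idea can be sketched with a toy protocol fuzzer: mutate fields of a benign message skeleton to produce labeled attack-like samples with balanced classes. The mutations below are illustrative, not the paper's fuzzing grammar:

```python
import random

random.seed(0)

# Skeleton of a benign HTTP request; fuzzing perturbs its fields to produce
# labeled malicious-looking variants (a toy stand-in for protocol fuzzing).
BENIGN = "GET /index.html HTTP/1.1\r\nHost: example.com\r\n\r\n"

MUTATIONS = [
    lambda m: m.replace("/index.html", "/" + "A" * 2048),          # overflow-style
    lambda m: m.replace("GET", random.choice(["GXT", "G\x00T"])),  # bad method
    lambda m: m.replace("HTTP/1.1", "HTTP/9.9"),                   # bad version
]

def generate(n=1000, attack_ratio=0.5):
    """Emit (message, label) pairs: fuzzed messages are labeled 1.
    Balanced classes address the imbalance of public datasets."""
    data = []
    for _ in range(n):
        if random.random() < attack_ratio:
            data.append((random.choice(MUTATIONS)(BENIGN), 1))
        else:
            data.append((BENIGN, 0))
    return data

dataset = generate()
print(sum(lbl for _, lbl in dataset), "fuzzed samples of", len(dataset))
```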