id: stringlengths (9 to 10)
submitter: stringlengths (1 to 64)
authors: stringlengths (4 to 20.7k)
title: stringlengths (4 to 246)
comments: stringlengths (1 to 523)
journal-ref: stringlengths (4 to 404)
doi: stringlengths (11 to 153)
report-no: stringlengths (2 to 254)
categories: stringlengths (5 to 98)
license: stringclasses (9 values)
orig_abstract: stringlengths (14 to 3.35k)
versions: listlengths (1 to 60)
update_date: stringlengths (10 to 10)
authors_parsed: listlengths (1 to 1.35k)
abstract: stringlengths (11 to 3.34k)
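The schema above describes one record per arXiv paper. As an illustrative sketch only (the dataset's name and loading API are not given here, and in the actual dataset `versions` and `authors_parsed` may already be native lists rather than JSON strings), a single row can be represented and its JSON-encoded fields parsed like this:

```python
import json
from datetime import datetime

# One row, using fields from the schema above; values are copied from the
# first record shown (arXiv:1712.05244).
row = {
    "id": "1712.05244",
    "submitter": "Hamdi Joudeh",
    "categories": "cs.IT math.IT",
    "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/",
    # `versions`: list of {created, version} dicts, serialized as JSON here.
    "versions": '[{"created": "Thu, 14 Dec 2017 14:24:50 GMT", "version": "v1"},'
                ' {"created": "Tue, 4 Dec 2018 16:18:31 GMT", "version": "v2"}]',
    "update_date": "2018-12-05",
    # `authors_parsed`: list of [last, first, suffix] triples, serialized as JSON.
    "authors_parsed": '[["Piovano", "Enrico", ""], ["Joudeh", "Hamdi", ""],'
                      ' ["Clerckx", "Bruno", ""]]',
}

# Parse the first-submission timestamp from the v1 entry.
versions = json.loads(row["versions"])
first_submitted = datetime.strptime(versions[0]["created"],
                                    "%a, %d %b %Y %H:%M:%S %Z")

# Rebuild "First Last" author names and pick the primary category.
authors = [f"{first} {last}".strip()
           for last, first, _suffix in json.loads(row["authors_parsed"])]
primary_category = row["categories"].split()[0]

print(first_submitted.date())  # 2017-12-14
print(authors)                 # ['Enrico Piovano', 'Hamdi Joudeh', 'Bruno Clerckx']
print(primary_category)        # cs.IT
```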
1712.05244
Hamdi Joudeh
Enrico Piovano, Hamdi Joudeh, Bruno Clerckx
Generalized Degrees of Freedom of the Symmetric Cache-Aided MISO Broadcast Channel with Partial CSIT
first revision
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider the cache-aided MISO broadcast channel (BC) in which a multi-antenna transmitter serves $K$ single-antenna receivers, each equipped with a cache memory. The transmitter has access to partial knowledge of the channel state information. For a symmetric setting, in terms of channel strength levels, partial channel knowledge levels and cache sizes, we characterize the generalized degrees of freedom (GDoF) up to a constant multiplicative factor. The achievability scheme exploits the interplay between spatial multiplexing gains and the coded-multicasting gain. On the other hand, a cut-set-based argument in conjunction with a GDoF outer bound for a parallel MISO BC under channel uncertainty is used for the converse. We further show that the characterized order-optimal GDoF is also attained in a decentralized setting, where no coordination is required for content placement in the caches.
[ { "created": "Thu, 14 Dec 2017 14:24:50 GMT", "version": "v1" }, { "created": "Tue, 4 Dec 2018 16:18:31 GMT", "version": "v2" } ]
2018-12-05
[ [ "Piovano", "Enrico", "" ], [ "Joudeh", "Hamdi", "" ], [ "Clerckx", "Bruno", "" ] ]
We consider the cache-aided MISO broadcast channel (BC) in which a multi-antenna transmitter serves $K$ single-antenna receivers, each equipped with a cache memory. The transmitter has access to partial knowledge of the channel state information. For a symmetric setting, in terms of channel strength levels, partial channel knowledge levels and cache sizes, we characterize the generalized degrees of freedom (GDoF) up to a constant multiplicative factor. The achievability scheme exploits the interplay between spatial multiplexing gains and the coded-multicasting gain. On the other hand, a cut-set-based argument in conjunction with a GDoF outer bound for a parallel MISO BC under channel uncertainty is used for the converse. We further show that the characterized order-optimal GDoF is also attained in a decentralized setting, where no coordination is required for content placement in the caches.
1911.11856
Jonathan Kuck
Jonathan Kuck and Tri Dao and Hamid Rezatofighi and Ashish Sabharwal and Stefano Ermon
Approximating the Permanent by Sampling from Adaptive Partitions
19 pages
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Computing the permanent of a non-negative matrix is a core problem with practical applications ranging from target tracking to statistical thermodynamics. However, this problem is also #P-complete, which leaves little hope for finding an exact solution that can be computed efficiently. While the problem admits a fully polynomial randomized approximation scheme, this method has seen little use because it is both inefficient in practice and difficult to implement. We present AdaPart, a simple and efficient method for drawing exact samples from an unnormalized distribution. Using AdaPart, we show how to construct tight bounds on the permanent which hold with high probability, with guaranteed polynomial runtime for dense matrices. We find that AdaPart can provide empirical speedups exceeding 25x over prior sampling methods on matrices that are challenging for variational-based approaches. Finally, in the context of multi-target tracking, exact sampling from the distribution defined by the matrix permanent allows us to use the optimal proposal distribution during particle filtering. Using AdaPart, we show that this leads to improved tracking performance using an order of magnitude fewer samples.
[ { "created": "Tue, 26 Nov 2019 22:05:28 GMT", "version": "v1" } ]
2019-11-28
[ [ "Kuck", "Jonathan", "" ], [ "Dao", "Tri", "" ], [ "Rezatofighi", "Hamid", "" ], [ "Sabharwal", "Ashish", "" ], [ "Ermon", "Stefano", "" ] ]
Computing the permanent of a non-negative matrix is a core problem with practical applications ranging from target tracking to statistical thermodynamics. However, this problem is also #P-complete, which leaves little hope for finding an exact solution that can be computed efficiently. While the problem admits a fully polynomial randomized approximation scheme, this method has seen little use because it is both inefficient in practice and difficult to implement. We present AdaPart, a simple and efficient method for drawing exact samples from an unnormalized distribution. Using AdaPart, we show how to construct tight bounds on the permanent which hold with high probability, with guaranteed polynomial runtime for dense matrices. We find that AdaPart can provide empirical speedups exceeding 25x over prior sampling methods on matrices that are challenging for variational-based approaches. Finally, in the context of multi-target tracking, exact sampling from the distribution defined by the matrix permanent allows us to use the optimal proposal distribution during particle filtering. Using AdaPart, we show that this leads to improved tracking performance using an order of magnitude fewer samples.
2408.04205
Xinwei Chen
Xinwei Chen, Xiaofeng Zhong, Zijian Zhang, Linglong Dai and Shidong Zhou
High-Efficiency Urban 3D Radio Map Estimation Based on Sparse Measurements
5 pages, 7 figures
null
null
null
cs.IT math.IT
http://creativecommons.org/publicdomain/zero/1.0/
Recent widespread applications for unmanned aerial vehicles (UAVs) -- from infrastructure inspection to urban logistics -- have prompted an urgent need for high-accuracy three-dimensional (3D) radio maps. However, existing methods designed for two-dimensional radio maps face challenges of high measurement costs and limited data availability when extended to 3D scenarios. To tackle these challenges, we first build a real-world large-scale 3D radio map dataset, covering over 4.2 million m^3 and over 4 thousand data points in complex urban environments. We propose a Gaussian Process Regression-based scheme for 3D radio map estimation, allowing us to realize more accurate map recovery with a lower RMSE than state-of-the-art schemes by over 2.5 dB. To further enhance data efficiency, we propose two methods for training point selection, including an offline clustering-based method and an online maximum a posteriori (MAP)-based method. Extensive experiments demonstrate that the proposed scheme not only achieves full-map recovery with only 2% of UAV measurements, but also sheds light on future studies on 3D radio maps.
[ { "created": "Thu, 8 Aug 2024 04:05:18 GMT", "version": "v1" } ]
2024-08-09
[ [ "Chen", "Xinwei", "" ], [ "Zhong", "Xiaofeng", "" ], [ "Zhang", "Zijian", "" ], [ "Dai", "Linglong", "" ], [ "Zhou", "Shidong", "" ] ]
Recent widespread applications for unmanned aerial vehicles (UAVs) -- from infrastructure inspection to urban logistics -- have prompted an urgent need for high-accuracy three-dimensional (3D) radio maps. However, existing methods designed for two-dimensional radio maps face challenges of high measurement costs and limited data availability when extended to 3D scenarios. To tackle these challenges, we first build a real-world large-scale 3D radio map dataset, covering over 4.2 million m^3 and over 4 thousand data points in complex urban environments. We propose a Gaussian Process Regression-based scheme for 3D radio map estimation, allowing us to realize more accurate map recovery with a lower RMSE than state-of-the-art schemes by over 2.5 dB. To further enhance data efficiency, we propose two methods for training point selection, including an offline clustering-based method and an online maximum a posteriori (MAP)-based method. Extensive experiments demonstrate that the proposed scheme not only achieves full-map recovery with only 2% of UAV measurements, but also sheds light on future studies on 3D radio maps.
2407.10011
Joel Sol
Joel Sol, Jamil Fayyad, Shadi Alijani and Homayoun Najjaran
Sim-to-Real Domain Adaptation for Deformation Classification
7 pages, 5 figures, submitted to SMC
null
null
null
cs.CV cs.LG
http://creativecommons.org/licenses/by-nc-nd/4.0/
Deformation detection is vital for enabling accurate assessment and prediction of structural changes in materials, ensuring timely and effective interventions to maintain safety and integrity. Automating deformation detection through computer vision is crucial for efficient monitoring, but it faces significant challenges in creating a comprehensive dataset of both deformed and non-deformed objects, which can be difficult to obtain in many scenarios. In this paper, we introduce a novel framework for generating controlled synthetic data that simulates deformed objects. This approach allows for the realistic modeling of object deformations under various conditions. Our framework integrates an intelligent adapter network that facilitates sim-to-real domain adaptation, enhancing classification results without requiring real data from deformed objects. We conduct experiments on domain adaptation and classification tasks and demonstrate that our framework improves sim-to-real classification results compared to the simulation baseline.
[ { "created": "Sat, 13 Jul 2024 21:35:13 GMT", "version": "v1" } ]
2024-07-16
[ [ "Sol", "Joel", "" ], [ "Fayyad", "Jamil", "" ], [ "Alijani", "Shadi", "" ], [ "Najjaran", "Homayoun", "" ] ]
Deformation detection is vital for enabling accurate assessment and prediction of structural changes in materials, ensuring timely and effective interventions to maintain safety and integrity. Automating deformation detection through computer vision is crucial for efficient monitoring, but it faces significant challenges in creating a comprehensive dataset of both deformed and non-deformed objects, which can be difficult to obtain in many scenarios. In this paper, we introduce a novel framework for generating controlled synthetic data that simulates deformed objects. This approach allows for the realistic modeling of object deformations under various conditions. Our framework integrates an intelligent adapter network that facilitates sim-to-real domain adaptation, enhancing classification results without requiring real data from deformed objects. We conduct experiments on domain adaptation and classification tasks and demonstrate that our framework improves sim-to-real classification results compared to the simulation baseline.
2102.08157
Erixhen Sula
Erixhen Sula and Michael Gastpar
Lower bound on Wyner's Common Information
6 pages, 3 figures
null
null
null
cs.IT math.IT
http://creativecommons.org/licenses/by/4.0/
An important notion of common information between two random variables is due to Wyner. In this paper, we derive a lower bound on Wyner's common information for continuous random variables. The new bound improves on the only other general lower bound on Wyner's common information, which is the mutual information. We also show that the new lower bound is tight for the so-called "Gaussian channels" case, namely, when the joint distribution of the random variables can be written as the sum of a single underlying random variable and Gaussian noises. We motivate this work from the recent variations of Wyner's common information and applications to network data compression problems such as the Gray-Wyner network.
[ { "created": "Tue, 16 Feb 2021 13:56:45 GMT", "version": "v1" } ]
2021-02-17
[ [ "Sula", "Erixhen", "" ], [ "Gastpar", "Michael", "" ] ]
An important notion of common information between two random variables is due to Wyner. In this paper, we derive a lower bound on Wyner's common information for continuous random variables. The new bound improves on the only other general lower bound on Wyner's common information, which is the mutual information. We also show that the new lower bound is tight for the so-called "Gaussian channels" case, namely, when the joint distribution of the random variables can be written as the sum of a single underlying random variable and Gaussian noises. We motivate this work from the recent variations of Wyner's common information and applications to network data compression problems such as the Gray-Wyner network.
2207.10797
Hidetoshi Kawaguchi
Hidetoshi Kawaguchi, Yuichi Nakatani and Shogo Okada
IDPS Signature Classification with a Reject Option and the Incorporation of Expert Knowledge
9 pages, 5 figures, 3 tables
null
null
null
cs.CR cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
As the importance of intrusion detection and prevention systems (IDPSs) increases, great costs are incurred to manage the signatures that are generated by malicious communication pattern files. Experts in network security need to classify signatures by importance for an IDPS to work. We propose and evaluate a machine learning signature classification model with a reject option (RO) to reduce the cost of setting up an IDPS. To train the proposed model, it is essential to design features that are effective for signature classification. Experts classify signatures with predefined if-then rules. An if-then rule returns a label of low, medium, high, or unknown importance based on keyword matching of the elements in the signature. Therefore, we first design two types of features, symbolic features (SFs) and keyword features (KFs), which are used in keyword matching for the if-then rules. Next, we design web information and message features (WMFs) to capture the properties of signatures that do not match the if-then rules. The WMFs are extracted as term frequency-inverse document frequency (TF-IDF) features of the message text in the signatures. The features are obtained by web scraping from the referenced external attack identification systems described in the signature. Because failure needs to be minimized in the classification of IDPS signatures, as in the medical field, we consider introducing a RO in our proposed model. The effectiveness of the proposed classification model is evaluated in experiments with two real datasets composed of signatures labeled by experts: a dataset that can be classified with if-then rules and a dataset with elements that do not match an if-then rule. In both experiments, the combined SFs and WMFs performed better than the combined SFs and KFs. In addition, we performed a feature analysis.
[ { "created": "Tue, 19 Jul 2022 06:09:33 GMT", "version": "v1" } ]
2022-07-25
[ [ "Kawaguchi", "Hidetoshi", "" ], [ "Nakatani", "Yuichi", "" ], [ "Okada", "Shogo", "" ] ]
As the importance of intrusion detection and prevention systems (IDPSs) increases, great costs are incurred to manage the signatures that are generated by malicious communication pattern files. Experts in network security need to classify signatures by importance for an IDPS to work. We propose and evaluate a machine learning signature classification model with a reject option (RO) to reduce the cost of setting up an IDPS. To train the proposed model, it is essential to design features that are effective for signature classification. Experts classify signatures with predefined if-then rules. An if-then rule returns a label of low, medium, high, or unknown importance based on keyword matching of the elements in the signature. Therefore, we first design two types of features, symbolic features (SFs) and keyword features (KFs), which are used in keyword matching for the if-then rules. Next, we design web information and message features (WMFs) to capture the properties of signatures that do not match the if-then rules. The WMFs are extracted as term frequency-inverse document frequency (TF-IDF) features of the message text in the signatures. The features are obtained by web scraping from the referenced external attack identification systems described in the signature. Because failure needs to be minimized in the classification of IDPS signatures, as in the medical field, we consider introducing a RO in our proposed model. The effectiveness of the proposed classification model is evaluated in experiments with two real datasets composed of signatures labeled by experts: a dataset that can be classified with if-then rules and a dataset with elements that do not match an if-then rule. In both experiments, the combined SFs and WMFs performed better than the combined SFs and KFs. In addition, we performed a feature analysis.
1804.05772
Nils Gessert
Kaori V. Laino, Thore Saathoff, Thiusius R. Savarimuthu, Kim Lindberg Schwaner, Nils Gessert, Alexander Schlaefer
Design and implementation of a wireless instrument adapter
Published at CURAC 2017
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The evaluation of new methods for control and manipulation in minimally invasive robotic surgery requires a realistic setup. To decouple the evaluation of methods from overall clinical systems, we propose an instrument adapter for the S line EndoWrist\copyright{} instruments of the da Vinci surgical system. The adapter is small and lightweight and can be mounted to any robot to mimic motion. We describe its design and implementation, as well as a setup to calibrate instruments to study precise motion control. Our results indicate that each instrument requires individual calibration. The calibration shows that the system is not fully linear. The repeatability of poses in the same sense of rotation has an RMSE of 0.27{\deg} and a standard deviation below 0.3{\deg} for pitching and 4.7{\deg} for yawing averaged over three measurements. When comparing the same poses in clockwise and counter-clockwise direction the RMSE is 12.8{\deg} and 5.7{\deg} for pitching and yawing, respectively. This is likely due to motor hysteresis.
[ { "created": "Mon, 16 Apr 2018 16:22:08 GMT", "version": "v1" }, { "created": "Tue, 17 Apr 2018 10:33:39 GMT", "version": "v2" } ]
2018-04-18
[ [ "Laino", "Kaori V.", "" ], [ "Saathoff", "Thore", "" ], [ "Savarimuthu", "Thiusius R.", "" ], [ "Schwaner", "Kim Lindberg", "" ], [ "Gessert", "Nils", "" ], [ "Schlaefer", "Alexander", "" ] ]
The evaluation of new methods for control and manipulation in minimally invasive robotic surgery requires a realistic setup. To decouple the evaluation of methods from overall clinical systems, we propose an instrument adapter for the S line EndoWrist\copyright{} instruments of the da Vinci surgical system. The adapter is small and lightweight and can be mounted to any robot to mimic motion. We describe its design and implementation, as well as a setup to calibrate instruments to study precise motion control. Our results indicate that each instrument requires individual calibration. The calibration shows that the system is not fully linear. The repeatability of poses in the same sense of rotation has an RMSE of 0.27{\deg} and a standard deviation below 0.3{\deg} for pitching and 4.7{\deg} for yawing averaged over three measurements. When comparing the same poses in clockwise and counter-clockwise direction the RMSE is 12.8{\deg} and 5.7{\deg} for pitching and yawing, respectively. This is likely due to motor hysteresis.
2211.09790
James Smith
James Seale Smith, Paola Cascante-Bonilla, Assaf Arbelle, Donghyun Kim, Rameswar Panda, David Cox, Diyi Yang, Zsolt Kira, Rogerio Feris, Leonid Karlinsky
ConStruct-VL: Data-Free Continual Structured VL Concepts Learning
Accepted by the 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2023)
null
null
null
cs.LG cs.AI cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recently, large-scale pre-trained Vision-and-Language (VL) foundation models have demonstrated remarkable capabilities in many zero-shot downstream tasks, achieving competitive results for recognizing objects defined by as little as short text prompts. However, it has also been shown that VL models are still brittle in Structured VL Concept (SVLC) reasoning, such as the ability to recognize object attributes, states, and inter-object relations. This leads to reasoning mistakes, which need to be corrected as they occur by teaching VL models the missing SVLC skills; often this must be done using private data where the issue was found, which naturally leads to a data-free continual (no task-id) VL learning setting. In this work, we introduce the first Continual Data-Free Structured VL Concepts Learning (ConStruct-VL) benchmark and show it is challenging for many existing data-free CL strategies. We, therefore, propose a data-free method comprised of a new approach of Adversarial Pseudo-Replay (APR) which generates adversarial reminders of past tasks from past task models. To use this method efficiently, we also propose a continual parameter-efficient Layered-LoRA (LaLo) neural architecture allowing no-memory-cost access to all past models at train time. We show this approach outperforms all data-free methods by as much as ~7% while even matching some levels of experience-replay (prohibitive for applications where data-privacy must be preserved). Our code is publicly available at https://github.com/jamessealesmith/ConStruct-VL
[ { "created": "Thu, 17 Nov 2022 18:57:03 GMT", "version": "v1" }, { "created": "Thu, 30 Mar 2023 17:59:16 GMT", "version": "v2" } ]
2023-03-31
[ [ "Smith", "James Seale", "" ], [ "Cascante-Bonilla", "Paola", "" ], [ "Arbelle", "Assaf", "" ], [ "Kim", "Donghyun", "" ], [ "Panda", "Rameswar", "" ], [ "Cox", "David", "" ], [ "Yang", "Diyi", "" ], [ "Kira", "Zsolt", "" ], [ "Feris", "Rogerio", "" ], [ "Karlinsky", "Leonid", "" ] ]
Recently, large-scale pre-trained Vision-and-Language (VL) foundation models have demonstrated remarkable capabilities in many zero-shot downstream tasks, achieving competitive results for recognizing objects defined by as little as short text prompts. However, it has also been shown that VL models are still brittle in Structured VL Concept (SVLC) reasoning, such as the ability to recognize object attributes, states, and inter-object relations. This leads to reasoning mistakes, which need to be corrected as they occur by teaching VL models the missing SVLC skills; often this must be done using private data where the issue was found, which naturally leads to a data-free continual (no task-id) VL learning setting. In this work, we introduce the first Continual Data-Free Structured VL Concepts Learning (ConStruct-VL) benchmark and show it is challenging for many existing data-free CL strategies. We, therefore, propose a data-free method comprised of a new approach of Adversarial Pseudo-Replay (APR) which generates adversarial reminders of past tasks from past task models. To use this method efficiently, we also propose a continual parameter-efficient Layered-LoRA (LaLo) neural architecture allowing no-memory-cost access to all past models at train time. We show this approach outperforms all data-free methods by as much as ~7% while even matching some levels of experience-replay (prohibitive for applications where data-privacy must be preserved). Our code is publicly available at https://github.com/jamessealesmith/ConStruct-VL
1510.06595
Angela Yao
Bj\"orn Kr\"uger, Anna V\"ogele, Tobias Willig, Angela Yao, Reinhard Klein, Andreas Weber
Efficient Unsupervised Temporal Segmentation of Motion Data
15 pages, submitted to TPAMI
null
10.1109/TMM.2016.2635030
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce a method for automated temporal segmentation of human motion data into distinct actions and compositing motion primitives based on self-similar structures in the motion sequence. We use neighbourhood graphs for the partitioning and the similarity information in the graph is further exploited to cluster the motion primitives into larger entities of semantic significance. The method requires no assumptions about the motion sequences at hand and no user interaction is required for the segmentation or clustering. In addition, we introduce a feature bundling preprocessing technique to make the segmentation more robust to noise, as well as a notion of motion symmetry for more refined primitive detection. We test our method on several sensor modalities, including markered and markerless motion capture as well as on electromyograph and accelerometer recordings. The results highlight our system's capabilities for both segmentation and for analysis of the finer structures of motion data, all in a completely unsupervised manner.
[ { "created": "Thu, 22 Oct 2015 12:20:04 GMT", "version": "v1" } ]
2021-12-07
[ [ "Krüger", "Björn", "" ], [ "Vögele", "Anna", "" ], [ "Willig", "Tobias", "" ], [ "Yao", "Angela", "" ], [ "Klein", "Reinhard", "" ], [ "Weber", "Andreas", "" ] ]
We introduce a method for automated temporal segmentation of human motion data into distinct actions and compositing motion primitives based on self-similar structures in the motion sequence. We use neighbourhood graphs for the partitioning and the similarity information in the graph is further exploited to cluster the motion primitives into larger entities of semantic significance. The method requires no assumptions about the motion sequences at hand and no user interaction is required for the segmentation or clustering. In addition, we introduce a feature bundling preprocessing technique to make the segmentation more robust to noise, as well as a notion of motion symmetry for more refined primitive detection. We test our method on several sensor modalities, including markered and markerless motion capture as well as on electromyograph and accelerometer recordings. The results highlight our system's capabilities for both segmentation and for analysis of the finer structures of motion data, all in a completely unsupervised manner.
1303.5508
George Chen
George H. Chen, Christian Wachinger, Polina Golland
Sparse Projections of Medical Images onto Manifolds
International Conference on Information Processing in Medical Imaging (IPMI 2013)
null
null
null
cs.CV cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Manifold learning has been successfully applied to a variety of medical imaging problems. Its use in real-time applications requires fast projection onto the low-dimensional space. To this end, out-of-sample extensions are applied by constructing an interpolation function that maps from the input space to the low-dimensional manifold. Commonly used approaches such as the Nystr\"{o}m extension and kernel ridge regression require using all training points. We propose an interpolation function that only depends on a small subset of the input training data. Consequently, in the testing phase each new point only needs to be compared against a small number of input training data in order to project the point onto the low-dimensional space. We interpret our method as an out-of-sample extension that approximates kernel ridge regression. Our method involves solving a simple convex optimization problem and has the attractive property of guaranteeing an upper bound on the approximation error, which is crucial for medical applications. Tuning this error bound controls the sparsity of the resulting interpolation function. We illustrate our method in two clinical applications that require fast mapping of input images onto a low-dimensional space.
[ { "created": "Fri, 22 Mar 2013 03:24:10 GMT", "version": "v1" }, { "created": "Thu, 28 Mar 2013 19:21:33 GMT", "version": "v2" } ]
2013-03-29
[ [ "Chen", "George H.", "" ], [ "Wachinger", "Christian", "" ], [ "Golland", "Polina", "" ] ]
Manifold learning has been successfully applied to a variety of medical imaging problems. Its use in real-time applications requires fast projection onto the low-dimensional space. To this end, out-of-sample extensions are applied by constructing an interpolation function that maps from the input space to the low-dimensional manifold. Commonly used approaches such as the Nystr\"{o}m extension and kernel ridge regression require using all training points. We propose an interpolation function that only depends on a small subset of the input training data. Consequently, in the testing phase each new point only needs to be compared against a small number of input training data in order to project the point onto the low-dimensional space. We interpret our method as an out-of-sample extension that approximates kernel ridge regression. Our method involves solving a simple convex optimization problem and has the attractive property of guaranteeing an upper bound on the approximation error, which is crucial for medical applications. Tuning this error bound controls the sparsity of the resulting interpolation function. We illustrate our method in two clinical applications that require fast mapping of input images onto a low-dimensional space.
2403.01827
Ankur Singh
Ankur Singh, Sanghyeon Choi, Gunuk Wang, Maryaradhiya Daimari, and Byung-Geun Lee
Analysis and Fully Memristor-based Reservoir Computing for Temporal Data Classification
22 pages, 20 figures, Journal, Typo corrected and updated reference
null
null
null
cs.NE cs.AI
http://creativecommons.org/licenses/by-nc-nd/4.0/
Reservoir computing (RC) offers a neuromorphic framework that is particularly effective for processing spatiotemporal signals. Known for its temporal processing prowess, RC significantly lowers training costs compared to conventional recurrent neural networks. A key component in its hardware deployment is the ability to generate dynamic reservoir states. Our research introduces a novel dual-memory RC system, integrating a short-term memory via a WOx-based memristor, capable of achieving 16 distinct states encoded over 4 bits, and a long-term memory component using a TiOx-based memristor within the readout layer. We thoroughly examine both memristor types and leverage the RC system to process temporal data sets. The performance of the proposed RC system is validated through two benchmark tasks: isolated spoken digit recognition with incomplete inputs and Mackey-Glass time series prediction. The system delivered an impressive 98.84% accuracy in digit recognition and sustained a low normalized root mean square error (NRMSE) of 0.036 in the time series prediction task, underscoring its capability. This study illuminates the adeptness of memristor-based RC systems in managing intricate temporal challenges, laying the groundwork for further innovations in neuromorphic computing.
[ { "created": "Mon, 4 Mar 2024 08:22:29 GMT", "version": "v1" }, { "created": "Sat, 16 Mar 2024 15:43:04 GMT", "version": "v2" } ]
2024-03-19
[ [ "Singh", "Ankur", "" ], [ "Choi", "Sanghyeon", "" ], [ "Wang", "Gunuk", "" ], [ "Daimari", "Maryaradhiya", "" ], [ "Lee", "Byung-Geun", "" ] ]
Reservoir computing (RC) offers a neuromorphic framework that is particularly effective for processing spatiotemporal signals. Known for its temporal processing prowess, RC significantly lowers training costs compared to conventional recurrent neural networks. A key component in its hardware deployment is the ability to generate dynamic reservoir states. Our research introduces a novel dual-memory RC system, integrating a short-term memory via a WOx-based memristor, capable of achieving 16 distinct states encoded over 4 bits, and a long-term memory component using a TiOx-based memristor within the readout layer. We thoroughly examine both memristor types and leverage the RC system to process temporal data sets. The performance of the proposed RC system is validated through two benchmark tasks: isolated spoken digit recognition with incomplete inputs and Mackey-Glass time series prediction. The system delivered an impressive 98.84% accuracy in digit recognition and sustained a low normalized root mean square error (NRMSE) of 0.036 in the time series prediction task, underscoring its capability. This study illuminates the adeptness of memristor-based RC systems in managing intricate temporal challenges, laying the groundwork for further innovations in neuromorphic computing.
2312.02521
Haoran Tang
Haoran Tang, Xin Zhou, Jieren Deng, Zhihong Pan, Hao Tian, Pratik Chaudhari
Retrieving Conditions from Reference Images for Diffusion Models
null
null
null
null
cs.CV cs.AI
http://creativecommons.org/licenses/by/4.0/
Newly developed diffusion-based techniques have showcased phenomenal abilities in producing a wide range of high-quality images, sparking considerable interest in various applications. A prevalent scenario is to generate new images based on a subject from reference images. This subject could be face identity for styled avatars, body and clothing for virtual try-on, and so on. Satisfying this requirement is evolving into a field called Subject-Driven Generation. In this paper, we consider Subject-Driven Generation as a unified retrieval problem with diffusion models. We introduce a novel diffusion model architecture, named RetriNet, designed to address and solve these problems by retrieving subject attributes from reference images precisely and filtering out irrelevant information. RetriNet demonstrates impressive performance when compared to existing state-of-the-art approaches in face generation. We further propose a research- and iteration-friendly dataset, RetriBooru, to study a more difficult problem, concept composition. Finally, to better evaluate the alignment between similarity and diversity, and to measure diversity that has previously been unaccounted for, we introduce a novel class of metrics named Similarity Weighted Diversity (SWD).
[ { "created": "Tue, 5 Dec 2023 06:04:16 GMT", "version": "v1" }, { "created": "Fri, 15 Mar 2024 04:37:32 GMT", "version": "v2" } ]
2024-03-18
[ [ "Tang", "Haoran", "" ], [ "Zhou", "Xin", "" ], [ "Deng", "Jieren", "" ], [ "Pan", "Zhihong", "" ], [ "Tian", "Hao", "" ], [ "Chaudhari", "Pratik", "" ] ]
2102.10695
Luis Puche Rondon
Luis Puche Rondon, Leonardo Babun, Ahmet Aris, Kemal Akkaya, and A. Selcuk Uluagac
Survey on Enterprise Internet-of-Things Systems (E-IoT): A Security Perspective
null
null
null
null
cs.CR
http://creativecommons.org/licenses/by/4.0/
As technology becomes more widely available, millions of users worldwide have installed some form of smart device in their homes or workplaces. These devices are often off-the-shelf commodity systems, such as Google Home or Samsung SmartThings, that are installed by end-users looking to automate a small deployment. In contrast to these "plug-and-play" systems, purpose-built Enterprise Internet-of-Things (E-IoT) systems such as Crestron, Control4, RTI, and Savant offer a smart solution for more sophisticated applications (e.g., complete lighting control, A/V management, security). Compared to commodity systems, E-IoT systems are usually closed source, costly, require certified installers, and are overall more robust for their use cases. Due to this, E-IoT systems are often found in expensive smart homes, government and academic conference rooms, yachts, and smart private offices. However, while there has been plenty of research on the topic of commodity systems, no current study exists that provides a complete picture of E-IoT systems, their components, and relevant threats. As such, lack of knowledge of E-IoT system threats, coupled with the cost of E-IoT systems, has led many to assume that E-IoT systems are secure. To address this research gap, raise awareness on E-IoT security, and motivate further research, this work emphasizes E-IoT system components, E-IoT vulnerabilities, solutions, and their security implications. In order to systematically analyze the security of E-IoT systems, we divide E-IoT systems into four layers: E-IoT Devices Layer, Communications Layer, Monitoring and Applications Layer, and Business Layer. We survey attacks and defense mechanisms, considering the E-IoT components at each layer and the associated threats. In addition, we present key observations in state-of-the-art E-IoT security and provide a list of open research problems that need further research.
[ { "created": "Sun, 21 Feb 2021 21:51:11 GMT", "version": "v1" } ]
2021-02-23
[ [ "Rondon", "Luis Puche", "" ], [ "Babun", "Leonardo", "" ], [ "Aris", "Ahmet", "" ], [ "Akkaya", "Kemal", "" ], [ "Uluagac", "A. Selcuk", "" ] ]
2007.07990
Thodoris Lykouris
Shuchi Chawla, Nikhil Devanur, Thodoris Lykouris
Static pricing for multi-unit prophet inequalities
null
null
null
null
cs.GT cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study a pricing problem where a seller has $k$ identical copies of a product, buyers arrive sequentially, and the seller prices the items aiming to maximize social welfare. When $k=1$, this is the so-called "prophet inequality" problem, for which there is a simple pricing scheme achieving a competitive ratio of $1/2$. On the other end of the spectrum, as $k$ goes to infinity, the asymptotic performance of both static and adaptive pricing is well understood. We provide a static pricing scheme for the small-supply regime, where $k$ is small but larger than $1$. Prior to our work, the best competitive ratio known for this setting was the $1/2$ that follows from the single-unit prophet inequality. Our pricing scheme is easy to describe as well as practical -- it is anonymous, non-adaptive, and order-oblivious. We pick a single price that equalizes the expected fraction of items sold and the probability that the supply does not sell out before all customers are served; this price is then offered to each customer while supply lasts. This extends an approach introduced by Samuel-Cahn for the case of $k=1$. This pricing scheme achieves a competitive ratio that increases gradually with the supply. Subsequent work by Jiang, Ma, and Zhang shows that our pricing scheme is the optimal static pricing for every value of $k$.
[ { "created": "Wed, 15 Jul 2020 20:57:29 GMT", "version": "v1" }, { "created": "Wed, 22 Dec 2021 02:21:18 GMT", "version": "v2" }, { "created": "Wed, 18 Jan 2023 15:53:21 GMT", "version": "v3" }, { "created": "Tue, 20 Jun 2023 11:01:00 GMT", "version": "v4" } ]
2023-06-21
[ [ "Chawla", "Shuchi", "" ], [ "Devanur", "Nikhil", "" ], [ "Lykouris", "Thodoris", "" ] ]
2406.14446
Shruthi K. Hiremath
Shruthi K. Hiremath and Thomas Ploetz
Maintenance Required: Updating and Extending Bootstrapped Human Activity Recognition Systems for Smart Homes
12 pages, 5 figures, accepted at The 6th International Conference on Activity and Behavior Computing, under print at IEEE Explore
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
Developing human activity recognition (HAR) systems for smart homes is not straightforward due to varied layouts of the homes and their personalized settings, as well as idiosyncratic behaviors of residents. As such, off-the-shelf HAR systems are effective only in a limited capacity for an individual home, and HAR systems often need to be derived "from scratch", which requires substantial effort and is often burdensome to the resident. Previous work has successfully targeted the initial phase; at the end of this initial phase, we identify seed points. We build on bootstrapped HAR systems and introduce an effective updating and extension procedure for continuous improvement of HAR systems with the aim of keeping up with ever-changing life circumstances. Our method makes use of the seed points identified at the end of the initial bootstrapping phase. A contrastive learning framework is trained using these seed points and the labels obtained for them. This model is then used to improve the segmentation accuracy of the identified prominent activities. Improvements in the activity recognition system through this procedure help model the majority of the routine activities in the smart home. We demonstrate the effectiveness of our procedure through experiments on the CASAS datasets that show the practical value of our approach.
[ { "created": "Thu, 20 Jun 2024 16:08:40 GMT", "version": "v1" } ]
2024-06-21
[ [ "Hiremath", "Shruthi K.", "" ], [ "Ploetz", "Thomas", "" ] ]
2203.15589
Xingyu Zhou
Xingyu Zhou and Bo Ji
On Kernelized Multi-Armed Bandits with Constraints
null
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
We study a stochastic bandit problem with a general unknown reward function and a general unknown constraint function. Both functions can be non-linear (even non-convex) and are assumed to lie in a reproducing kernel Hilbert space (RKHS) with a bounded norm. This kernelized bandit setup strictly generalizes standard multi-armed bandits and linear bandits. In contrast to safety-type hard constraints studied in prior works, we consider soft constraints that may be violated in any round as long as the cumulative violations are small, which is motivated by various practical applications. Our ultimate goal is to study how to utilize the nature of soft constraints to attain a finer complexity-regret-constraint trade-off in the kernelized bandit setting. To this end, leveraging primal-dual optimization, we propose a general framework for both algorithm design and performance analysis. This framework builds upon a novel sufficient condition, which not only is satisfied under general exploration strategies, including \emph{upper confidence bound} (UCB), \emph{Thompson sampling} (TS), and new ones based on \emph{random exploration}, but also enables a unified analysis for showing both sublinear regret and sublinear or even zero constraint violation. We demonstrate the superior performance of our proposed algorithms via numerical experiments based on both synthetic and real-world datasets. Along the way, we also make the first detailed comparison between two popular methods for analyzing constrained bandits and Markov decision processes (MDPs) by discussing the key difference and some subtleties in the analysis, which could be of independent interest to the communities.
[ { "created": "Tue, 29 Mar 2022 14:02:03 GMT", "version": "v1" } ]
2022-03-30
[ [ "Zhou", "Xingyu", "" ], [ "Ji", "Bo", "" ] ]
2310.04778
Edgar Martinez-Moro
Yang Li, Shixin Zhu and Edgar Mart\'inez-Moro
On $\ell$-MDS codes and a conjecture on infinite families of $1$-MDS codes
null
null
null
null
cs.IT math.IT
http://creativecommons.org/licenses/by-nc-nd/4.0/
The class of $\ell$-maximum distance separable ($\ell$-MDS) codes is a generalization of maximum distance separable (MDS) codes that has attracted a lot of attention due to its applications in several areas such as secret sharing schemes, index coding problems, informed source coding problems, and combinatorial $t$-designs. In this paper, for $\ell=1$, we completely solve a conjecture recently proposed by Heng $et~al.$ (Discrete Mathematics, 346(10): 113538, 2023) and obtain infinite families of $1$-MDS codes with general dimensions holding $2$-designs. These latter codes have also been proven to be optimal locally recoverable codes. For general positive integers $\ell$ and $\ell'$, we construct new $\ell$-MDS codes from known $\ell'$-MDS codes via some classical propagation rules involving the extended, expurgated, and $(u,u+v)$ constructions. Finally, we study some general results including characterization, weight distributions, and bounds on maximum lengths of $\ell$-MDS codes, which generalize, simplify, or improve some known results in the literature.
[ { "created": "Sat, 7 Oct 2023 11:19:51 GMT", "version": "v1" } ]
2023-10-10
[ [ "Li", "Yang", "" ], [ "Zhu", "Shixin", "" ], [ "Martínez-Moro", "Edgar", "" ] ]
1705.02257
Sascha Witt
Michael Axtmann, Sascha Witt, Daniel Ferizovic, Peter Sanders
In-place Parallel Super Scalar Samplesort (IPS$^4$o)
null
null
null
null
cs.DC
http://creativecommons.org/licenses/by/4.0/
We present a sorting algorithm that works in-place, executes in parallel, is cache-efficient, avoids branch-mispredictions, and performs work O(n log n) for arbitrary inputs with high probability. The main algorithmic contributions are new ways to make distribution-based algorithms in-place: On the practical side, by using coarse-grained block-based permutations, and on the theoretical side, we show how to eliminate the recursion stack. Extensive experiments show that our algorithm IPS$^4$o scales well on a variety of multi-core machines. We outperform our closest in-place competitor by a factor of up to 3. Even as a sequential algorithm, we are up to 1.5 times faster than the closest sequential competitor, BlockQuicksort.
[ { "created": "Fri, 5 May 2017 15:18:08 GMT", "version": "v1" }, { "created": "Thu, 29 Jun 2017 18:27:34 GMT", "version": "v2" } ]
2017-07-03
[ [ "Axtmann", "Michael", "" ], [ "Witt", "Sascha", "" ], [ "Ferizovic", "Daniel", "" ], [ "Sanders", "Peter", "" ] ]
1309.1780
Anshu Dubey
A. Dubey, S. Brandt, R. Brower, M. Giles, P. Hovland, D.Q. Lamb, F. Loffler, B. Norris, B. OShea, C. Rebbi, M. Snir, R. Thakur
Software Abstractions and Methodologies for HPC Simulation Codes on Future Architectures
Position Paper
null
10.5334/jors.aw
null
cs.CE cs.MS cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Large, complex, multi-scale, multi-physics simulation codes, running on high performance computing (HPC) platforms, have become essential to advancing science and engineering. These codes simulate multi-scale, multi-physics phenomena with unprecedented fidelity on petascale platforms, and are used by large communities. The continued ability of these codes to run on future platforms is as crucial to their communities as continued improvements in instruments and facilities are to experimental scientists. However, the ability of code developers to do these things faces a serious challenge with the paradigm shift underway in platform architecture. The complexity and uncertainty of the future platforms makes it essential to approach this challenge cooperatively as a community. We need to develop common abstractions, frameworks, programming models and software development methodologies that can be applied across a broad range of complex simulation codes, and common software infrastructure to support them. In this position paper we express and discuss our belief that such an infrastructure is critical to the deployment of existing and new large, multi-scale, multi-physics codes on future HPC platforms.
[ { "created": "Fri, 6 Sep 2013 21:41:20 GMT", "version": "v1" } ]
2014-10-24
[ [ "Dubey", "A.", "" ], [ "Brandt", "S.", "" ], [ "Brower", "R.", "" ], [ "Giles", "M.", "" ], [ "Hovland", "P.", "" ], [ "Lamb", "D. Q.", "" ], [ "Loffler", "F.", "" ], [ "Norris", "B.", "" ], [ "OShea", "B.", "" ], [ "Rebbi", "C.", "" ], [ "Snir", "M.", "" ], [ "Thakur", "R.", "" ] ]
2102.05456
Leo Laugier
Leo Laugier, John Pavlopoulos, Jeffrey Sorensen, Lucas Dixon
Civil Rephrases Of Toxic Texts With Self-Supervised Transformers
null
null
null
null
cs.CL cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
Platforms that support online commentary, from social networks to news sites, are increasingly leveraging machine learning to assist their moderation efforts. But this process does not typically provide feedback to the author that would help them contribute according to the community guidelines. This is prohibitively time-consuming for human moderators to do, and computational approaches are still nascent. This work focuses on models that can help suggest rephrasings of toxic comments in a more civil manner. Inspired by recent progress in unpaired sequence-to-sequence tasks, a self-supervised learning model is introduced, called CAE-T5. CAE-T5 employs a pre-trained text-to-text transformer, which is fine-tuned with a denoising and cyclic auto-encoder loss. Experimenting with the largest toxicity detection dataset to date (Civil Comments), our model generates sentences that are more fluent and better at preserving the initial content compared to earlier text style transfer systems, which we compare with using several scoring systems and human evaluation.
[ { "created": "Mon, 1 Feb 2021 15:27:52 GMT", "version": "v1" }, { "created": "Thu, 11 Feb 2021 14:11:35 GMT", "version": "v2" } ]
2021-02-12
[ [ "Laugier", "Leo", "" ], [ "Pavlopoulos", "John", "" ], [ "Sorensen", "Jeffrey", "" ], [ "Dixon", "Lucas", "" ] ]
1204.6563
Prabhu Kaliamoorthi Mr
Prabhu Kaliamoorthi and Ramakrishna Kakarala
Parametric annealing: a stochastic search method for human pose tracking
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Model-based methods for marker-free motion capture have a very high computational overhead that makes them unattractive. In this paper we describe a method that improves on existing global optimization techniques for tracking articulated objects. Our method improves on the state-of-the-art Annealed Particle Filter (APF) by reusing samples across annealing layers and by using an adaptive parametric density for diffusion. We compare the proposed method with APF on a scalable problem and study how the two methods scale with the dimensionality, multi-modality and the range of search. Then we perform sensitivity analysis on the parameters of our algorithm and show that it tolerates a wide range of parameter settings. We also show results on tracking human pose from the widely-used Human Eva I dataset. Our results show that the proposed method reduces the tracking error despite using less than 50% of the computational resources as APF. The tracked output also shows a significant qualitative improvement over APF as demonstrated through image and video results.
[ { "created": "Mon, 30 Apr 2012 07:04:08 GMT", "version": "v1" }, { "created": "Wed, 2 May 2012 04:37:03 GMT", "version": "v2" } ]
2012-05-03
[ [ "Kaliamoorthi", "Prabhu", "" ], [ "Kakarala", "Ramakrishna", "" ] ]
2403.01792
Kuan-Hsun Ho
Kuan-Hsun Ho, Jeih-weih Hung, and Berlin Chen
ConSep: a Noise- and Reverberation-Robust Speech Separation Framework by Magnitude Conditioning
null
null
null
null
cs.SD eess.AS
http://creativecommons.org/licenses/by-sa/4.0/
Speech separation has recently made significant progress thanks to the fine-grained vision used in time-domain methods. However, several studies have shown that adopting Short-Time Fourier Transform (STFT) for feature extraction could be beneficial when encountering harsher conditions, such as noise or reverberation. Therefore, we propose a magnitude-conditioned time-domain framework, ConSep, to inherit the beneficial characteristics. The experiment shows that ConSep promotes performance in anechoic, noisy, and reverberant settings compared to two celebrated methods, SepFormer and Bi-Sep. Furthermore, we visualize the components of ConSep to strengthen the advantages and cohere with the actualities we have found in preliminary studies.
[ { "created": "Mon, 4 Mar 2024 07:34:24 GMT", "version": "v1" } ]
2024-03-05
[ [ "Ho", "Kuan-Hsun", "" ], [ "Hung", "Jeih-weih", "" ], [ "Chen", "Berlin", "" ] ]
2010.05675
Patrick Lambein-Monette
Bernadette Charron-Bost and Patrick Lambein-Monette
Average Consensus: A Little Learning Goes A Long Way
null
null
null
null
cs.DC
http://creativecommons.org/licenses/by-nc-sa/4.0/
When networked systems of autonomous agents carry out complex tasks, the control and coordination sought after generally depend on a few fundamental control primitives. Chief among these primitives is consensus, where agents are to converge to a common estimate within the range of initial values, which becomes average consensus when the joint limit should be the average of the initial values. To provide reliable services that are easy to deploy, these primitives should operate even when the network is subject to frequent and unpredictable changes. Moreover, they should mobilize few computational resources so that low powered, deterministic, and anonymous agents can partake in the network. In this stringent adversarial context, we investigate the distributed implementation of these primitives over networks with bidirectional, but potentially short-lived, communication links. Inspired by the classic EqualNeighbor and Metropolis agreement rules for multi-agent systems, we design distributed algorithms for consensus and average consensus, which we show to operate in polynomial time in a synchronous temporal model. These algorithms are fully distributed, requiring neither symmetry-breaking devices such as unique identifiers, nor global control or knowledge of the network. Our strategy consists in making agents learn simple structural parameters of the network -- namely, their largest degrees -- which constitutes enough information to build simple update rules, implementable locally with little computational and memory overhead.
[ { "created": "Mon, 12 Oct 2020 13:15:33 GMT", "version": "v1" } ]
2020-10-13
[ [ "Charron-Bost", "Bernadette", "" ], [ "Lambein-Monette", "Patrick", "" ] ]
When networked systems of autonomous agents carry out complex tasks, the control and coordination sought after generally depend on a few fundamental control primitives. Chief among these primitives is consensus, where agents are to converge to a common estimate within the range of initial values, which becomes average consensus when the joint limit should be the average of the initial values. To provide reliable services that are easy to deploy, these primitives should operate even when the network is subject to frequent and unpredictable changes. Moreover, they should mobilize few computational resources so that low powered, deterministic, and anonymous agents can partake in the network. In this stringent adversarial context, we investigate the distributed implementation of these primitives over networks with bidirectional, but potentially short-lived, communication links. Inspired by the classic EqualNeighbor and Metropolis agreement rules for multi-agent systems, we design distributed algorithms for consensus and average consensus, which we show to operate in polynomial time in a synchronous temporal model. These algorithms are fully distributed, requiring neither symmetry-breaking devices such as unique identifiers, nor global control or knowledge of the network. Our strategy consists in making agents learn simple structural parameters of the network -- namely, their largest degrees -- which constitutes enough information to build simple update rules, implementable locally with little computational and memory overhead.
2304.06335
Chien-Pin Liu
Chien-Pin Liu, Ju-Hsuan Li, En-Ping Chu, Chia-Yeh Hsieh, Kai-Chun Liu, Chia-Tai Chan, Yu Tsao
Deep Learning-based Fall Detection Algorithm Using Ensemble Model of Coarse-fine CNN and GRU Networks
null
null
null
null
cs.LG eess.SP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Falls are a major public health issue for the elderly all over the world, since fall-induced injuries are associated with large healthcare costs. Falls can cause serious injuries, even leading to death if the elderly person suffers a "long-lie". Hence, a reliable fall detection (FD) system is required to provide an emergency alarm for first aid. Due to advances in wearable device technology and artificial intelligence, some fall detection systems have been developed using machine learning and deep learning methods to analyze signals collected from accelerometers and gyroscopes. In order to achieve better fall detection performance, an ensemble model that combines a coarse-fine convolutional neural network and a gated recurrent unit is proposed in this study. The parallel structure used in this model preserves spatial characteristics at different granularities and captures temporal dependencies for feature representation. This study applies the public FallAllD dataset to validate the reliability of the proposed model, which achieves a recall, precision, and F-score of 92.54%, 96.13%, and 94.26%, respectively. The results demonstrate the reliability of the proposed ensemble model in discriminating falls from daily living activities and its superior performance compared to the state-of-the-art convolutional neural network long short-term memory (CNN-LSTM) for FD.
[ { "created": "Thu, 13 Apr 2023 08:30:46 GMT", "version": "v1" } ]
2023-04-14
[ [ "Liu", "Chien-Pin", "" ], [ "Li", "Ju-Hsuan", "" ], [ "Chu", "En-Ping", "" ], [ "Hsieh", "Chia-Yeh", "" ], [ "Liu", "Kai-Chun", "" ], [ "Chan", "Chia-Tai", "" ], [ "Tsao", "Yu", "" ] ]
Falls are a major public health issue for the elderly all over the world, since fall-induced injuries are associated with large healthcare costs. Falls can cause serious injuries, even leading to death if the elderly person suffers a "long-lie". Hence, a reliable fall detection (FD) system is required to provide an emergency alarm for first aid. Due to advances in wearable device technology and artificial intelligence, some fall detection systems have been developed using machine learning and deep learning methods to analyze signals collected from accelerometers and gyroscopes. In order to achieve better fall detection performance, an ensemble model that combines a coarse-fine convolutional neural network and a gated recurrent unit is proposed in this study. The parallel structure used in this model preserves spatial characteristics at different granularities and captures temporal dependencies for feature representation. This study applies the public FallAllD dataset to validate the reliability of the proposed model, which achieves a recall, precision, and F-score of 92.54%, 96.13%, and 94.26%, respectively. The results demonstrate the reliability of the proposed ensemble model in discriminating falls from daily living activities and its superior performance compared to the state-of-the-art convolutional neural network long short-term memory (CNN-LSTM) for FD.
1812.09298
Shibashis Guha
Benjamin Bordais, Shibashis Guha, Jean-Fran\c{c}ois Raskin
Expected Window Mean-Payoff
Replaced PP-hardness of direct fixed window objective with PSPACE-hardness, added alternative definition of window mean-payoff
null
null
null
cs.GT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the window mean-payoff objective, given an infinite path, instead of considering a long-run average, we consider the minimum payoff that can be ensured at every position of the path over a finite window that slides over the entire path. Chatterjee et al. studied the problem of deciding whether, in a two-player game, Player 1 has a strategy to ensure a window mean-payoff of at least 0. In this work, we consider a function that, given a path, returns the supremum value of the window mean-payoff that can be ensured over the path, and we show how to compute its expected value in Markov chains and Markov decision processes. We consider two variants of the function: fixed window mean-payoff, in which a fixed window length $l_{max}$ is provided; and bounded window mean-payoff, in which we compute the maximum possible value of the window mean-payoff over all possible window lengths. Further, for both variants, we consider (i) a direct version of the problem, where for each path we consider the payoff that can be ensured from its very beginning, and (ii) a non-direct version, which is the prefix-independent counterpart of the direct version of the problem.
[ { "created": "Fri, 21 Dec 2018 18:19:00 GMT", "version": "v1" }, { "created": "Thu, 5 Dec 2019 20:24:26 GMT", "version": "v2" } ]
2019-12-09
[ [ "Bordais", "Benjamin", "" ], [ "Guha", "Shibashis", "" ], [ "Raskin", "Jean-François", "" ] ]
In the window mean-payoff objective, given an infinite path, instead of considering a long-run average, we consider the minimum payoff that can be ensured at every position of the path over a finite window that slides over the entire path. Chatterjee et al. studied the problem of deciding whether, in a two-player game, Player 1 has a strategy to ensure a window mean-payoff of at least 0. In this work, we consider a function that, given a path, returns the supremum value of the window mean-payoff that can be ensured over the path, and we show how to compute its expected value in Markov chains and Markov decision processes. We consider two variants of the function: fixed window mean-payoff, in which a fixed window length $l_{max}$ is provided; and bounded window mean-payoff, in which we compute the maximum possible value of the window mean-payoff over all possible window lengths. Further, for both variants, we consider (i) a direct version of the problem, where for each path we consider the payoff that can be ensured from its very beginning, and (ii) a non-direct version, which is the prefix-independent counterpart of the direct version of the problem.
2404.03263
Sean Farhat
Sean Farhat, Deming Chen
On the Surprising Efficacy of Distillation as an Alternative to Pre-Training Small Models
ICLR 2024. 5th Workshop on Practical ML for Low Resource Settings (PML4LRS). Code can be found at https://github.com/sfarhat/dapt
null
null
null
cs.LG cs.AI
http://creativecommons.org/licenses/by/4.0/
In this paper, we propose that small models may not need to absorb the cost of pre-training to reap its benefits. Instead, they can capitalize on the astonishing results achieved by modern, enormous models to a surprising degree. We observe that, when distilled on a task from a pre-trained teacher model, a small model can achieve or surpass the performance it would achieve if it was pre-trained then finetuned on that task. To allow this phenomenon to be easily leveraged, we establish a connection reducing knowledge distillation to modern contrastive learning, opening two doors: (1) vastly different model architecture pairings can work for the distillation, and (2) most contrastive learning algorithms rooted in the theory of Noise Contrastive Estimation can be easily applied and used. We demonstrate this paradigm using pre-trained teacher models from open-source model hubs, Transformer and convolution based model combinations, and a novel distillation algorithm that massages the Alignment/Uniformity perspective of contrastive learning by Wang & Isola (2020) into a distillation objective. We choose this flavor of contrastive learning due to its low computational cost, an overarching theme of this work. We also observe that this phenomenon tends not to occur if the task is data-limited. However, this can be alleviated by leveraging yet another scale-inspired development: large, pre-trained generative models for dataset augmentation. Again, we use an open-source model, and our rudimentary prompts are sufficient to boost the small model's performance. Thus, we highlight a training method for small models that is up to 94% faster than the standard pre-training paradigm without sacrificing performance. For practitioners discouraged from fully utilizing modern foundation datasets for their small models due to the prohibitive scale, we believe our work keeps that door open.
[ { "created": "Thu, 4 Apr 2024 07:38:11 GMT", "version": "v1" }, { "created": "Fri, 3 May 2024 06:08:30 GMT", "version": "v2" } ]
2024-05-06
[ [ "Farhat", "Sean", "" ], [ "Chen", "Deming", "" ] ]
In this paper, we propose that small models may not need to absorb the cost of pre-training to reap its benefits. Instead, they can capitalize on the astonishing results achieved by modern, enormous models to a surprising degree. We observe that, when distilled on a task from a pre-trained teacher model, a small model can achieve or surpass the performance it would achieve if it was pre-trained then finetuned on that task. To allow this phenomenon to be easily leveraged, we establish a connection reducing knowledge distillation to modern contrastive learning, opening two doors: (1) vastly different model architecture pairings can work for the distillation, and (2) most contrastive learning algorithms rooted in the theory of Noise Contrastive Estimation can be easily applied and used. We demonstrate this paradigm using pre-trained teacher models from open-source model hubs, Transformer and convolution based model combinations, and a novel distillation algorithm that massages the Alignment/Uniformity perspective of contrastive learning by Wang & Isola (2020) into a distillation objective. We choose this flavor of contrastive learning due to its low computational cost, an overarching theme of this work. We also observe that this phenomenon tends not to occur if the task is data-limited. However, this can be alleviated by leveraging yet another scale-inspired development: large, pre-trained generative models for dataset augmentation. Again, we use an open-source model, and our rudimentary prompts are sufficient to boost the small model's performance. Thus, we highlight a training method for small models that is up to 94% faster than the standard pre-training paradigm without sacrificing performance. For practitioners discouraged from fully utilizing modern foundation datasets for their small models due to the prohibitive scale, we believe our work keeps that door open.
1710.07991
Mrinal Haloi
Mrinal Haloi
Rethinking Convolutional Semantic Segmentation Learning
null
null
null
null
cs.LG cs.CV stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep convolutional semantic segmentation (DCSS) learning doesn't converge to an optimal local minimum with random parameter initialization; a model pre-trained on the same domain becomes necessary to achieve convergence. In this work, we propose a joint cooperative end-to-end learning method for DCSS. It addresses many drawbacks of existing deep semantic segmentation learning; the proposed approach simultaneously learns both segmentation and classification, removing the essential need for a pre-trained model to achieve learning convergence. We present an improved Inception-based architecture with partial attention gating (PAG) over encoder information. The PAG also helps achieve faster convergence and better accuracy for the segmentation task. We show the effectiveness of this learning approach on a diabetic retinopathy classification and segmentation dataset.
[ { "created": "Sun, 22 Oct 2017 18:13:24 GMT", "version": "v1" } ]
2017-10-24
[ [ "Haloi", "Mrinal", "" ] ]
Deep convolutional semantic segmentation (DCSS) learning doesn't converge to an optimal local minimum with random parameter initialization; a model pre-trained on the same domain becomes necessary to achieve convergence. In this work, we propose a joint cooperative end-to-end learning method for DCSS. It addresses many drawbacks of existing deep semantic segmentation learning; the proposed approach simultaneously learns both segmentation and classification, removing the essential need for a pre-trained model to achieve learning convergence. We present an improved Inception-based architecture with partial attention gating (PAG) over encoder information. The PAG also helps achieve faster convergence and better accuracy for the segmentation task. We show the effectiveness of this learning approach on a diabetic retinopathy classification and segmentation dataset.
1207.3437
Massimiliano Vasile
Massimiliano Vasile
Robust Mission Design Through Evidence Theory and Multi-Agent Collaborative Search
null
Annals of the New York Academy of Science, Volume 1065, New Trends in Astrodynamics and Applications pages 152-173, December 2005
10.1196/annals.1370.024
null
cs.CE cs.NE cs.SY math.OC math.PR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, the preliminary design of a space mission is approached by introducing uncertainties on the design parameters and formulating the resulting reliable design problem as a multiobjective optimization problem. Uncertainties are modelled through evidence theory, and the belief, or credibility, in the successful achievement of mission goals is maximised along with the reliability of constraint satisfaction. The multiobjective optimisation problem is solved through a novel algorithm based on the collaboration of a population of agents searching for the set of highly reliable solutions. Two typical problems in mission analysis are used to illustrate the proposed methodology.
[ { "created": "Sat, 14 Jul 2012 16:17:52 GMT", "version": "v1" } ]
2015-06-05
[ [ "Vasile", "Massimiliano", "" ] ]
In this paper, the preliminary design of a space mission is approached by introducing uncertainties on the design parameters and formulating the resulting reliable design problem as a multiobjective optimization problem. Uncertainties are modelled through evidence theory, and the belief, or credibility, in the successful achievement of mission goals is maximised along with the reliability of constraint satisfaction. The multiobjective optimisation problem is solved through a novel algorithm based on the collaboration of a population of agents searching for the set of highly reliable solutions. Two typical problems in mission analysis are used to illustrate the proposed methodology.
2403.11631
Kun Ding
Kun Ding and Xiaohui Li and Qiang Yu and Ying Wang and Haojian Zhang and Shiming Xiang
Compositional Kronecker Context Optimization for Vision-Language Models
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Context Optimization (CoOp) has emerged as a simple yet effective technique for adapting CLIP-like vision-language models to downstream image recognition tasks. Nevertheless, learning compact context with satisfactory base-to-new, domain and cross-task generalization ability while adapting to new tasks is still a challenge. To tackle this challenge, we propose a lightweight yet generalizable approach termed Compositional Kronecker Context Optimization (CK-CoOp). Technically, the prompt's context words in CK-CoOp are learnable vectors, which are crafted by linearly combining base vectors sourced from a dictionary. These base vectors consist of a non-learnable component obtained by quantizing the weights in the token embedding layer, and a learnable component constructed by applying the Kronecker product to several learnable tiny matrices. Intuitively, the compositional structure mitigates the risk of overfitting on training data by retaining more pre-trained knowledge. Meanwhile, the Kronecker product breaks the non-learnable restrictions of the dictionary, thereby enhancing representation ability with minimal additional parameters. Extensive experiments confirm that CK-CoOp not only achieves state-of-the-art performance under base-to-new, domain and cross-task generalization evaluation, but also has fewer learnable parameters and efficient training and inference speed.
[ { "created": "Mon, 18 Mar 2024 10:09:28 GMT", "version": "v1" } ]
2024-03-19
[ [ "Ding", "Kun", "" ], [ "Li", "Xiaohui", "" ], [ "Yu", "Qiang", "" ], [ "Wang", "Ying", "" ], [ "Zhang", "Haojian", "" ], [ "Xiang", "Shiming", "" ] ]
Context Optimization (CoOp) has emerged as a simple yet effective technique for adapting CLIP-like vision-language models to downstream image recognition tasks. Nevertheless, learning compact context with satisfactory base-to-new, domain and cross-task generalization ability while adapting to new tasks is still a challenge. To tackle this challenge, we propose a lightweight yet generalizable approach termed Compositional Kronecker Context Optimization (CK-CoOp). Technically, the prompt's context words in CK-CoOp are learnable vectors, which are crafted by linearly combining base vectors sourced from a dictionary. These base vectors consist of a non-learnable component obtained by quantizing the weights in the token embedding layer, and a learnable component constructed by applying the Kronecker product to several learnable tiny matrices. Intuitively, the compositional structure mitigates the risk of overfitting on training data by retaining more pre-trained knowledge. Meanwhile, the Kronecker product breaks the non-learnable restrictions of the dictionary, thereby enhancing representation ability with minimal additional parameters. Extensive experiments confirm that CK-CoOp not only achieves state-of-the-art performance under base-to-new, domain and cross-task generalization evaluation, but also has fewer learnable parameters and efficient training and inference speed.
2010.07804
Xiao Luo
Xiao Luo, Daqing Wu, Zeyu Ma, Chong Chen, Minghua Deng, Jinwen Ma, Zhongming Jin, Jianqiang Huang and Xian-Sheng Hua
CIMON: Towards High-quality Hash Codes
Accepted by IJCAI 21
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recently, hashing has been widely used in approximate nearest neighbor search for its storage and computational efficiency. Most unsupervised hashing methods learn to map images into semantic similarity-preserving hash codes by constructing a local semantic similarity structure from a pre-trained model as the guiding information, i.e., treating each point pair as similar if their distance is small in feature space. However, due to the insufficient representation ability of the pre-trained model, many false positives and negatives in the local semantic similarity will be introduced and lead to error propagation during hash code learning. Moreover, few of these methods consider the robustness of models, which causes instability of the hash codes under disturbance. In this paper, we propose a new method named {\textbf{C}}omprehensive s{\textbf{I}}milarity {\textbf{M}}ining and c{\textbf{O}}nsistency lear{\textbf{N}}ing (CIMON). First, we use global refinement and similarity statistical distribution to obtain reliable and smooth guidance. Second, both semantic and contrastive consistency learning are introduced to derive both disturbance-invariant and discriminative hash codes. Extensive experiments on several benchmark datasets show that the proposed method outperforms a wide range of state-of-the-art methods in both retrieval performance and robustness.
[ { "created": "Thu, 15 Oct 2020 14:47:14 GMT", "version": "v1" }, { "created": "Fri, 16 Oct 2020 09:18:50 GMT", "version": "v2" }, { "created": "Thu, 5 Nov 2020 08:44:26 GMT", "version": "v3" }, { "created": "Sat, 21 Aug 2021 04:13:07 GMT", "version": "v4" } ]
2021-08-24
[ [ "Luo", "Xiao", "" ], [ "Wu", "Daqing", "" ], [ "Ma", "Zeyu", "" ], [ "Chen", "Chong", "" ], [ "Deng", "Minghua", "" ], [ "Ma", "Jinwen", "" ], [ "Jin", "Zhongming", "" ], [ "Huang", "Jianqiang", "" ], [ "Hua", "Xian-Sheng", "" ] ]
Recently, hashing has been widely used in approximate nearest neighbor search for its storage and computational efficiency. Most unsupervised hashing methods learn to map images into semantic similarity-preserving hash codes by constructing a local semantic similarity structure from a pre-trained model as the guiding information, i.e., treating each point pair as similar if their distance is small in feature space. However, due to the insufficient representation ability of the pre-trained model, many false positives and negatives in the local semantic similarity will be introduced and lead to error propagation during hash code learning. Moreover, few of these methods consider the robustness of models, which causes instability of the hash codes under disturbance. In this paper, we propose a new method named {\textbf{C}}omprehensive s{\textbf{I}}milarity {\textbf{M}}ining and c{\textbf{O}}nsistency lear{\textbf{N}}ing (CIMON). First, we use global refinement and similarity statistical distribution to obtain reliable and smooth guidance. Second, both semantic and contrastive consistency learning are introduced to derive both disturbance-invariant and discriminative hash codes. Extensive experiments on several benchmark datasets show that the proposed method outperforms a wide range of state-of-the-art methods in both retrieval performance and robustness.
1211.4264
Kunal Narayan Chaudhury
Kunal N. Chaudhury, Amit Singer
Non-Local Patch Regression: Robust Image Denoising in Patch Space
Submitted
IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2013
10.1109/ICASSP.2013.6637870
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
It was recently demonstrated in [Chaudhury et al.,Non-Local Euclidean Medians,2012] that the denoising performance of Non-Local Means (NLM) can be improved at large noise levels by replacing the mean by the robust Euclidean median. Numerical experiments on synthetic and natural images showed that the latter consistently performed better than NLM beyond a certain noise level, and significantly so for images with sharp edges. The Euclidean mean and median can be put into a common regression (on the patch space) framework, in which the l_2 norm of the residuals is considered in the former, while the l_1 norm is considered in the latter. The natural question then is what happens if we consider l_p (0<p<1) regression? We investigate this possibility in this paper.
[ { "created": "Sun, 18 Nov 2012 22:36:43 GMT", "version": "v1" } ]
2016-11-18
[ [ "Chaudhury", "Kunal N.", "" ], [ "Singer", "Amit", "" ] ]
It was recently demonstrated in [Chaudhury et al.,Non-Local Euclidean Medians,2012] that the denoising performance of Non-Local Means (NLM) can be improved at large noise levels by replacing the mean by the robust Euclidean median. Numerical experiments on synthetic and natural images showed that the latter consistently performed better than NLM beyond a certain noise level, and significantly so for images with sharp edges. The Euclidean mean and median can be put into a common regression (on the patch space) framework, in which the l_2 norm of the residuals is considered in the former, while the l_1 norm is considered in the latter. The natural question then is what happens if we consider l_p (0<p<1) regression? We investigate this possibility in this paper.
2406.08607
Jiabao Ji
Jiabao Ji, Yujian Liu, Yang Zhang, Gaowen Liu, Ramana Rao Kompella, Sijia Liu, Shiyu Chang
Reversing the Forget-Retain Objectives: An Efficient LLM Unlearning Framework from Logit Difference
21 pages, 11 figures
null
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by/4.0/
As Large Language Models (LLMs) demonstrate extensive capability in learning from documents, LLM unlearning becomes an increasingly important research area to address concerns of LLMs in terms of privacy, copyright, etc. A conventional LLM unlearning task typically involves two goals: (1) The target LLM should forget the knowledge in the specified forget documents, and (2) it should retain the other knowledge that the LLM possesses, for which we assume access to a small number of retain documents. To achieve both goals, a mainstream class of LLM unlearning methods introduces an optimization framework with a combination of two objectives - maximizing the prediction loss on the forget documents while minimizing that on the retain documents, which suffers from two challenges, degenerated output and catastrophic forgetting. In this paper, we propose a novel unlearning framework called Unlearning from Logit Difference (ULD), which introduces an assistant LLM that aims to achieve the opposite of the unlearning goals: remembering the forget documents and forgetting the retain knowledge. ULD then derives the unlearned LLM by computing the logit difference between the target and the assistant LLMs. We show that such reversed objectives would naturally resolve both aforementioned challenges while significantly improving the training efficiency. Extensive experiments demonstrate that our method efficiently achieves the intended forgetting while preserving the LLM's overall capabilities, reducing training time by more than threefold. Notably, our method loses 0% of model utility on the ToFU benchmark, whereas baseline methods may sacrifice 17% of utility on average to achieve comparable forget quality. Our code will be publicly available at https://github.com/UCSB-NLP-Chang/ULD.
[ { "created": "Wed, 12 Jun 2024 19:26:35 GMT", "version": "v1" } ]
2024-06-14
[ [ "Ji", "Jiabao", "" ], [ "Liu", "Yujian", "" ], [ "Zhang", "Yang", "" ], [ "Liu", "Gaowen", "" ], [ "Kompella", "Ramana Rao", "" ], [ "Liu", "Sijia", "" ], [ "Chang", "Shiyu", "" ] ]
As Large Language Models (LLMs) demonstrate extensive capability in learning from documents, LLM unlearning becomes an increasingly important research area to address concerns of LLMs in terms of privacy, copyright, etc. A conventional LLM unlearning task typically involves two goals: (1) The target LLM should forget the knowledge in the specified forget documents, and (2) it should retain the other knowledge that the LLM possesses, for which we assume access to a small number of retain documents. To achieve both goals, a mainstream class of LLM unlearning methods introduces an optimization framework with a combination of two objectives - maximizing the prediction loss on the forget documents while minimizing that on the retain documents, which suffers from two challenges, degenerated output and catastrophic forgetting. In this paper, we propose a novel unlearning framework called Unlearning from Logit Difference (ULD), which introduces an assistant LLM that aims to achieve the opposite of the unlearning goals: remembering the forget documents and forgetting the retain knowledge. ULD then derives the unlearned LLM by computing the logit difference between the target and the assistant LLMs. We show that such reversed objectives would naturally resolve both aforementioned challenges while significantly improving the training efficiency. Extensive experiments demonstrate that our method efficiently achieves the intended forgetting while preserving the LLM's overall capabilities, reducing training time by more than threefold. Notably, our method loses 0% of model utility on the ToFU benchmark, whereas baseline methods may sacrifice 17% of utility on average to achieve comparable forget quality. Our code will be publicly available at https://github.com/UCSB-NLP-Chang/ULD.
0910.0317
Rdv Ijcsis
Abbas Karimi, Faraneh Zarafshan, Adznan.b. Jantan, A.R Ramli, M.Iqbal b.Saripan
A New Fuzzy Approach for Dynamic Load Balancing Algorithm
5 pages IEEE format, International Journal of Computer Science and Information Security, IJCSIS 2009, ISSN 1947 5500, Impact Factor 0.423, http://sites.google.com/site/ijcsis/
International Journal of Computer Science and Information Security, IJCSIS, Vol. 6, No. 1, pp. 01-05, October 2009, USA
null
ISSN 1947 5500
cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Load balancing is the process of improving the performance of a parallel and distributed system through the redistribution of load among the processors [1-2]. Most of the previous work in load balancing, and distributed decision making in general, does not effectively take into account the uncertainty and inconsistency in state information, whereas with fuzzy logic we have the advantage of using crisp inputs. In this paper, we present a new approach to implementing a dynamic load balancing algorithm with fuzzy logic, which can cope with the uncertainty and inconsistency that affect previous algorithms; furthermore, our algorithm shows better response times than the round-robin and randomized algorithms, by 30.84 percent and 45.45 percent respectively.
[ { "created": "Fri, 2 Oct 2009 03:32:09 GMT", "version": "v1" } ]
2009-10-05
[ [ "Karimi", "Abbas", "" ], [ "Zarafshan", "Faraneh", "" ], [ "Jantan", "Adznan. b.", "" ], [ "Ramli", "A. R", "" ], [ "Saripan", "M. Iqbal b.", "" ] ]
Load balancing is the process of improving the performance of a parallel and distributed system through redistribution of load among the processors [1-2]. Most of the previous work in load balancing and distributed decision making in general does not effectively take into account the uncertainty and inconsistency in state information, whereas with fuzzy logic we have the advantage of using crisp inputs. In this paper, we present a new approach for implementing a dynamic load balancing algorithm with fuzzy logic, which can cope with the uncertainty and inconsistency of previous algorithms; furthermore, our algorithm shows better response time than the round-robin and randomized algorithms, by 30.84 percent and 45.45 percent respectively.
1109.0597
Prateek Mittal
Prateek Mittal, Ahmed Khurshid, Joshua Juen, Matthew Caesar, Nikita Borisov
Stealthy Traffic Analysis of Low-Latency Anonymous Communication Using Throughput Fingerprinting
Accepted for publication in ACM CCS 2011
null
null
null
cs.CR cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Anonymity systems such as Tor aim to enable users to communicate in a manner that is untraceable by adversaries that control a small number of machines. To provide efficient service to users, these anonymity systems make full use of forwarding capacity when sending traffic between intermediate relays. In this paper, we show that doing this leaks information about the set of Tor relays in a circuit (path). We present attacks that, with high confidence and based solely on throughput information, can (a) reduce the attacker's uncertainty about the bottleneck relay of any Tor circuit whose throughput can be observed, (b) exactly identify the guard relay(s) of a Tor user when circuit throughput can be observed over multiple connections, and (c) identify whether two concurrent TCP connections belong to the same Tor user, breaking unlinkability. Our attacks are stealthy, and cannot be readily detected by a user or by Tor relays. We validate our attacks using experiments over the live Tor network. We find that the attacker can substantially reduce the entropy of a bottleneck relay distribution of a Tor circuit whose throughput can be observed: the entropy is reduced by a factor of 2 in the median case. Such information leaks from a single Tor circuit can be combined over multiple connections to exactly identify a user's guard relay(s). Finally, we are also able to link two connections from the same initiator with a crossover error rate of less than 1.5% in under 5 minutes. Our attacks are also more accurate and require fewer resources than previous attacks on Tor.
[ { "created": "Sat, 3 Sep 2011 06:43:53 GMT", "version": "v1" }, { "created": "Tue, 22 Nov 2011 21:05:49 GMT", "version": "v2" } ]
2015-03-19
[ [ "Mittal", "Prateek", "" ], [ "Khurshid", "Ahmed", "" ], [ "Juen", "Joshua", "" ], [ "Caesar", "Matthew", "" ], [ "Borisov", "Nikita", "" ] ]
Anonymity systems such as Tor aim to enable users to communicate in a manner that is untraceable by adversaries that control a small number of machines. To provide efficient service to users, these anonymity systems make full use of forwarding capacity when sending traffic between intermediate relays. In this paper, we show that doing this leaks information about the set of Tor relays in a circuit (path). We present attacks that, with high confidence and based solely on throughput information, can (a) reduce the attacker's uncertainty about the bottleneck relay of any Tor circuit whose throughput can be observed, (b) exactly identify the guard relay(s) of a Tor user when circuit throughput can be observed over multiple connections, and (c) identify whether two concurrent TCP connections belong to the same Tor user, breaking unlinkability. Our attacks are stealthy, and cannot be readily detected by a user or by Tor relays. We validate our attacks using experiments over the live Tor network. We find that the attacker can substantially reduce the entropy of a bottleneck relay distribution of a Tor circuit whose throughput can be observed: the entropy is reduced by a factor of 2 in the median case. Such information leaks from a single Tor circuit can be combined over multiple connections to exactly identify a user's guard relay(s). Finally, we are also able to link two connections from the same initiator with a crossover error rate of less than 1.5% in under 5 minutes. Our attacks are also more accurate and require fewer resources than previous attacks on Tor.
2105.08758
David Krackhardt
Vineet Kumar, David Krackhardt, Scott Feld
Interventions with Inversity in Unknown Networks Can Help Regulate Contagion
32 pages including supplemental materials
null
null
null
cs.SI physics.soc-ph
http://creativecommons.org/licenses/by/4.0/
Network intervention problems often benefit from selecting highly-connected nodes and performing interventions, e.g. immunization, using these nodes. However, in many network contexts, the structure of network connections is unknown, leading to a challenge. We develop and examine the mathematical properties of two distinct informationally light strategies, a novel global strategy and a local strategy, that yield higher-degree nodes in virtually any network structure. We further identify a novel network property called Inversity, whose sign determines which of the two strategies, local or global, will be most effective for a network. We demonstrate that local and global strategies obtain a several-fold improvement in node degree relative to a random selection benchmark for generated and real networks (including contact, affiliation and online networks). In some networks, they achieve a 100-fold improvement. We show how these new strategies can be used to control contagion of an epidemic spreading across a set of village networks, finding that the strategies developed here require far fewer ($<50\%$) nodes to be immunized, relative to the random strategy baseline. Prior research has typically used the complete network structure to choose nodes for optimal seeding. The relevant network is often costly to collect, and is privacy-invasive, requiring knowing each person's network neighbors, and might not be possible to obtain for time-sensitive interventions. Our interventions are less invasive of individual privacy, since each selected node only needs to nominate some network neighbors for intervention, while mathematically guaranteed to provide better connected nodes.
[ { "created": "Tue, 18 May 2021 18:14:11 GMT", "version": "v1" } ]
2021-05-20
[ [ "Kumar", "Vineet", "" ], [ "Krackhardt", "David", "" ], [ "Feld", "Scott", "" ] ]
Network intervention problems often benefit from selecting highly-connected nodes and performing interventions, e.g. immunization, using these nodes. However, in many network contexts, the structure of network connections is unknown, leading to a challenge. We develop and examine the mathematical properties of two distinct informationally light strategies, a novel global strategy and a local strategy, that yield higher-degree nodes in virtually any network structure. We further identify a novel network property called Inversity, whose sign determines which of the two strategies, local or global, will be most effective for a network. We demonstrate that local and global strategies obtain a several-fold improvement in node degree relative to a random selection benchmark for generated and real networks (including contact, affiliation and online networks). In some networks, they achieve a 100-fold improvement. We show how these new strategies can be used to control contagion of an epidemic spreading across a set of village networks, finding that the strategies developed here require far fewer ($<50\%$) nodes to be immunized, relative to the random strategy baseline. Prior research has typically used the complete network structure to choose nodes for optimal seeding. The relevant network is often costly to collect, and is privacy-invasive, requiring knowing each person's network neighbors, and might not be possible to obtain for time-sensitive interventions. Our interventions are less invasive of individual privacy, since each selected node only needs to nominate some network neighbors for intervention, while mathematically guaranteed to provide better connected nodes.
2210.08316
Animesh Chaturvedi Dr.
Animesh Chaturvedi
Call Graph Evolution Analytics over a Version Series of an Evolving Software System
null
null
10.1145/3551349.3559573
null
cs.SE cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
Call Graph evolution analytics can aid a software engineer when maintaining or evolving a software system. This paper proposes Call Graph Evolution Analytics to extract information from an evolving call graph ECG = CG_1, CG_2,... CG_N for their version series VS = V_1, V_2, ... V_N of an evolving software system. This is done using Call Graph Evolution Rules (CGERs) and Call Graph Evolution Subgraphs (CGESs). Similar to association rule mining, the CGERs are used to capture co-occurrences of dependencies in the system. Like subgraph patterns in a call graph, the CGESs are used to capture evolution of dependency patterns in evolving call graphs. Call graph analytics on the evolution in these patterns can identify potentially affected dependencies (or procedure calls) that need attention. The experiments are done on the evolving call graphs of 10 large evolving systems to support dependency evolution management. We also consider results from a detailed study for evolving call graphs of Maven-Core's version series.
[ { "created": "Sat, 15 Oct 2022 15:12:20 GMT", "version": "v1" } ]
2023-05-31
[ [ "Chaturvedi", "Animesh", "" ] ]
Call Graph evolution analytics can aid a software engineer when maintaining or evolving a software system. This paper proposes Call Graph Evolution Analytics to extract information from an evolving call graph ECG = CG_1, CG_2,... CG_N for their version series VS = V_1, V_2, ... V_N of an evolving software system. This is done using Call Graph Evolution Rules (CGERs) and Call Graph Evolution Subgraphs (CGESs). Similar to association rule mining, the CGERs are used to capture co-occurrences of dependencies in the system. Like subgraph patterns in a call graph, the CGESs are used to capture evolution of dependency patterns in evolving call graphs. Call graph analytics on the evolution in these patterns can identify potentially affected dependencies (or procedure calls) that need attention. The experiments are done on the evolving call graphs of 10 large evolving systems to support dependency evolution management. We also consider results from a detailed study for evolving call graphs of Maven-Core's version series.
2201.03702
John Wahlig
John Wahlig
Learning Logic Programs From Noisy Failures
Thesis for MSc in Computer Science
null
null
null
cs.AI cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Inductive Logic Programming (ILP) is a form of machine learning (ML) which, in contrast to many other state-of-the-art ML methods, typically produces highly interpretable and reusable models. However, many ILP systems lack the ability to naturally learn from any noisy or partially misclassified training data. We introduce the relaxed learning from failures approach to ILP, a noise handling modification of the previously introduced learning from failures (LFF) approach which is incapable of handling noise. We additionally introduce the novel Noisy Popper ILP system which implements this relaxed approach and is a modification of the existing Popper system. Like Popper, Noisy Popper takes a generate-test-constrain loop to search its hypothesis space wherein failed hypotheses are used to construct hypothesis constraints. These constraints are used to prune the hypothesis space, making the hypothesis search more efficient. However, in the relaxed setting, constraints are generated in a more lax fashion so as to avoid allowing noisy training data to lead to hypothesis constraints which prune optimal hypotheses. Constraints unique to the relaxed setting are generated via hypothesis comparison. Additional constraints are generated by weighing the accuracy of hypotheses against their sizes to avoid overfitting through an application of the minimum description length. We support this new setting through theoretical proofs as well as experimental results which suggest that Noisy Popper improves the noise handling capabilities of Popper but at the cost of overall runtime efficiency.
[ { "created": "Tue, 28 Dec 2021 16:48:00 GMT", "version": "v1" }, { "created": "Tue, 25 Jan 2022 01:05:32 GMT", "version": "v2" } ]
2022-01-26
[ [ "Wahlig", "John", "" ] ]
Inductive Logic Programming (ILP) is a form of machine learning (ML) which, in contrast to many other state-of-the-art ML methods, typically produces highly interpretable and reusable models. However, many ILP systems lack the ability to naturally learn from any noisy or partially misclassified training data. We introduce the relaxed learning from failures approach to ILP, a noise handling modification of the previously introduced learning from failures (LFF) approach which is incapable of handling noise. We additionally introduce the novel Noisy Popper ILP system which implements this relaxed approach and is a modification of the existing Popper system. Like Popper, Noisy Popper takes a generate-test-constrain loop to search its hypothesis space wherein failed hypotheses are used to construct hypothesis constraints. These constraints are used to prune the hypothesis space, making the hypothesis search more efficient. However, in the relaxed setting, constraints are generated in a more lax fashion so as to avoid allowing noisy training data to lead to hypothesis constraints which prune optimal hypotheses. Constraints unique to the relaxed setting are generated via hypothesis comparison. Additional constraints are generated by weighing the accuracy of hypotheses against their sizes to avoid overfitting through an application of the minimum description length. We support this new setting through theoretical proofs as well as experimental results which suggest that Noisy Popper improves the noise handling capabilities of Popper but at the cost of overall runtime efficiency.
2305.07366
Maha Riad
Maha Riad, Vinicius Renan de Carvalho and Fatemeh Golpayegani
Multi-Value Alignment in Normative Multi-Agent System: Evolutionary Optimisation Approach
null
null
null
null
cs.MA cs.AI cs.NE
http://creativecommons.org/licenses/by-nc-sa/4.0/
Value-alignment in normative multi-agent systems is used to promote a certain value and to ensure the consistent behavior of agents in autonomous intelligent systems with human values. However, the current literature is limited to the incorporation of effective norms for single-value alignment, with no consideration of agents' heterogeneity and the requirement of simultaneous promotion and alignment of multiple values. This research proposes a multi-value promotion model that uses multi-objective evolutionary algorithms to produce the optimum parametric set of norms that is aligned with multiple simultaneous values of heterogeneous agents and the system. To understand various aspects of this complex problem, several evolutionary algorithms were used to find a set of optimised norm parameters, considering two toy tax scenarios with two and five values. The results are analysed from different perspectives to show the impact of a selected evolutionary algorithm on the solution, and the importance of understanding the relation between values when prioritising them.
[ { "created": "Fri, 12 May 2023 10:30:20 GMT", "version": "v1" } ]
2023-05-15
[ [ "Riad", "Maha", "" ], [ "de Carvalho", "Vinicius Renan", "" ], [ "Golpayegani", "Fatemeh", "" ] ]
Value-alignment in normative multi-agent systems is used to promote a certain value and to ensure the consistent behavior of agents in autonomous intelligent systems with human values. However, the current literature is limited to the incorporation of effective norms for single-value alignment, with no consideration of agents' heterogeneity and the requirement of simultaneous promotion and alignment of multiple values. This research proposes a multi-value promotion model that uses multi-objective evolutionary algorithms to produce the optimum parametric set of norms that is aligned with multiple simultaneous values of heterogeneous agents and the system. To understand various aspects of this complex problem, several evolutionary algorithms were used to find a set of optimised norm parameters, considering two toy tax scenarios with two and five values. The results are analysed from different perspectives to show the impact of a selected evolutionary algorithm on the solution, and the importance of understanding the relation between values when prioritising them.
1508.07343
Sepideh Pourazarm
Sepideh Pourazarm, Christos G. Cassandras
Lifetime Maximization of Wireless Sensor Networks with a Mobile Source Node
A shorter version of this work will be published in Proceedings of 2016 IEEE Conference on Decision and Control
null
10.1109/CDC.2015.7403388
null
cs.NI math.OC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the problem of routing in sensor networks where the goal is to maximize the network's lifetime. Previous work has considered this problem for fixed-topology networks. Here, we add mobility to the source node, which requires a new definition of the network lifetime. In particular, we redefine lifetime to be the time until the source node depletes its energy. When the mobile node's trajectory is unknown in advance, we formulate three versions of an optimal control problem aiming at this lifetime maximization. We show that in all cases, the solution can be reduced to a sequence of Non-Linear Programming (NLP) problems solved online as the source node trajectory evolves.
[ { "created": "Fri, 28 Aug 2015 20:50:43 GMT", "version": "v1" } ]
2016-11-18
[ [ "Pourazarm", "Sepideh", "" ], [ "Cassandras", "Christos G.", "" ] ]
We study the problem of routing in sensor networks where the goal is to maximize the network's lifetime. Previous work has considered this problem for fixed-topology networks. Here, we add mobility to the source node, which requires a new definition of the network lifetime. In particular, we redefine lifetime to be the time until the source node depletes its energy. When the mobile node's trajectory is unknown in advance, we formulate three versions of an optimal control problem aiming at this lifetime maximization. We show that in all cases, the solution can be reduced to a sequence of Non-Linear Programming (NLP) problems solved online as the source node trajectory evolves.
1810.08951
Ta-Chung Chi
Ta-Chung Chi, Ching-Yen Shih, Yun-Nung Chen
BCWS: Bilingual Contextual Word Similarity
null
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper introduces the first dataset for evaluating English-Chinese Bilingual Contextual Word Similarity, namely BCWS (https://github.com/MiuLab/BCWS). The dataset consists of 2,091 English-Chinese word pairs with the corresponding sentential contexts and their similarity scores annotated by humans. Our annotated dataset has higher consistency compared to other similar datasets. We establish several baselines for the bilingual embedding task to benchmark the experiments. Modeling cross-lingual sense representations as provided in this dataset has the potential of moving artificial intelligence from monolingual understanding towards multilingual understanding.
[ { "created": "Sun, 21 Oct 2018 13:57:17 GMT", "version": "v1" } ]
2018-10-23
[ [ "Chi", "Ta-Chung", "" ], [ "Shih", "Ching-Yen", "" ], [ "Chen", "Yun-Nung", "" ] ]
This paper introduces the first dataset for evaluating English-Chinese Bilingual Contextual Word Similarity, namely BCWS (https://github.com/MiuLab/BCWS). The dataset consists of 2,091 English-Chinese word pairs with the corresponding sentential contexts and their similarity scores annotated by humans. Our annotated dataset has higher consistency compared to other similar datasets. We establish several baselines for the bilingual embedding task to benchmark the experiments. Modeling cross-lingual sense representations as provided in this dataset has the potential of moving artificial intelligence from monolingual understanding towards multilingual understanding.
2211.05809
Caner Hazirbas
Caner Hazirbas, Yejin Bang, Tiezheng Yu, Parisa Assar, Bilal Porgali, V\'itor Albiero, Stefan Hermanek, Jacqueline Pan, Emily McReynolds, Miranda Bogen, Pascale Fung, Cristian Canton Ferrer
Casual Conversations v2: Designing a large consent-driven dataset to measure algorithmic bias and robustness
null
null
null
null
cs.CV cs.AI cs.CL cs.CY
http://creativecommons.org/licenses/by-nc-sa/4.0/
Developing robust and fair AI systems requires datasets with a comprehensive set of labels that can help ensure the validity and legitimacy of relevant measurements. Recent efforts, therefore, focus on collecting person-related datasets that have carefully selected labels, including sensitive characteristics, and consent forms in place to use those attributes for model testing and development. Responsible data collection involves several stages, including but not limited to determining use-case scenarios, selecting categories (annotations) such that the data are fit for the purpose of measuring algorithmic bias for subgroups and, most importantly, ensuring that the selected categories/subcategories are robust to regional diversities and inclusive of as many subgroups as possible. Meta, in a continuation of our efforts to measure AI algorithmic bias and robustness (https://ai.facebook.com/blog/shedding-light-on-fairness-in-ai-with-a-new-data-set), is working on collecting a large consent-driven dataset with a comprehensive list of categories. This paper describes our proposed design of such categories and subcategories for Casual Conversations v2.
[ { "created": "Thu, 10 Nov 2022 19:06:21 GMT", "version": "v1" } ]
2022-11-14
[ [ "Hazirbas", "Caner", "" ], [ "Bang", "Yejin", "" ], [ "Yu", "Tiezheng", "" ], [ "Assar", "Parisa", "" ], [ "Porgali", "Bilal", "" ], [ "Albiero", "Vítor", "" ], [ "Hermanek", "Stefan", "" ], [ "Pan", "Jacqueline", "" ], [ "McReynolds", "Emily", "" ], [ "Bogen", "Miranda", "" ], [ "Fung", "Pascale", "" ], [ "Ferrer", "Cristian Canton", "" ] ]
Developing robust and fair AI systems requires datasets with a comprehensive set of labels that can help ensure the validity and legitimacy of relevant measurements. Recent efforts, therefore, focus on collecting person-related datasets that have carefully selected labels, including sensitive characteristics, and consent forms in place to use those attributes for model testing and development. Responsible data collection involves several stages, including but not limited to determining use-case scenarios, selecting categories (annotations) such that the data are fit for the purpose of measuring algorithmic bias for subgroups and, most importantly, ensuring that the selected categories/subcategories are robust to regional diversities and inclusive of as many subgroups as possible. Meta, in a continuation of our efforts to measure AI algorithmic bias and robustness (https://ai.facebook.com/blog/shedding-light-on-fairness-in-ai-with-a-new-data-set), is working on collecting a large consent-driven dataset with a comprehensive list of categories. This paper describes our proposed design of such categories and subcategories for Casual Conversations v2.
1502.06470
Eric Tramel
Eric W. Tramel and Ang\'elique Dr\'emeau and Florent Krzakala
Approximate Message Passing with Restricted Boltzmann Machine Priors
null
J. Stat. Mech. (2016) 073401
10.1088/1742-5468/2016/07/073401
null
cs.IT cond-mat.dis-nn math.IT physics.data-an stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Approximate Message Passing (AMP) has been shown to be an excellent statistical approach to signal inference and compressed sensing problems. The AMP framework provides modularity in the choice of signal prior; here we propose a hierarchical form of the Gauss-Bernoulli prior which utilizes a Restricted Boltzmann Machine (RBM) trained on the signal support to push reconstruction performance beyond that of simple iid priors for signals whose support can be well represented by a trained binary RBM. We present and analyze two methods of RBM factorization and demonstrate how these affect signal reconstruction performance within our proposed algorithm. Finally, using the MNIST handwritten digit dataset, we show experimentally that using an RBM allows AMP to approach oracle-support performance.
[ { "created": "Mon, 23 Feb 2015 15:51:07 GMT", "version": "v1" }, { "created": "Tue, 9 Jun 2015 14:05:45 GMT", "version": "v2" }, { "created": "Thu, 10 Dec 2015 03:45:32 GMT", "version": "v3" } ]
2016-07-11
[ [ "Tramel", "Eric W.", "" ], [ "Drémeau", "Angélique", "" ], [ "Krzakala", "Florent", "" ] ]
Approximate Message Passing (AMP) has been shown to be an excellent statistical approach to signal inference and compressed sensing problems. The AMP framework provides modularity in the choice of signal prior; here we propose a hierarchical form of the Gauss-Bernoulli prior which utilizes a Restricted Boltzmann Machine (RBM) trained on the signal support to push reconstruction performance beyond that of simple iid priors for signals whose support can be well represented by a trained binary RBM. We present and analyze two methods of RBM factorization and demonstrate how these affect signal reconstruction performance within our proposed algorithm. Finally, using the MNIST handwritten digit dataset, we show experimentally that using an RBM allows AMP to approach oracle-support performance.
2403.16880
Tianshuai Hu
Tianshuai Hu, Jianhao Jiao, Yucheng Xu, Hongji Liu, Sheng Wang, Ming Liu
DHP-Mapping: A Dense Panoptic Mapping System with Hierarchical World Representation and Label Optimization Techniques
Submit to IROS 2024. Project website https://github.com/hutslib/DHP-Mapping
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Maps provide robots with crucial environmental knowledge, thereby enabling them to perform interactive tasks effectively. Easily accessing accurate abstract-to-detailed geometric and semantic concepts from maps is crucial for robots to make informed and efficient decisions. To comprehensively model the environment and effectively manage the map data structure, we propose DHP-Mapping, a dense mapping system that utilizes multiple Truncated Signed Distance Field (TSDF) submaps and panoptic labels to hierarchically model the environment. The output map is able to maintain both voxel- and submap-level metric and semantic information. Two modules are presented to enhance the mapping efficiency and label consistency: (1) an inter-submap label fusion strategy to eliminate duplicate points across submaps and (2) a conditional random field (CRF) based approach to enhance panoptic labels through object label comprehension and contextual information. We conducted experiments with two public datasets including indoor and outdoor scenarios. Our system performs comparably to state-of-the-art (SOTA) methods across geometry and label accuracy evaluation metrics. The experiment results highlight the effectiveness and scalability of our system, as it is capable of constructing precise geometry and maintaining consistent panoptic labels. Our code is publicly available at https://github.com/hutslib/DHP-Mapping.
[ { "created": "Mon, 25 Mar 2024 15:47:06 GMT", "version": "v1" } ]
2024-03-26
[ [ "Hu", "Tianshuai", "" ], [ "Jiao", "Jianhao", "" ], [ "Xu", "Yucheng", "" ], [ "Liu", "Hongji", "" ], [ "Wang", "Sheng", "" ], [ "Liu", "Ming", "" ] ]
Maps provide robots with crucial environmental knowledge, thereby enabling them to perform interactive tasks effectively. Easily accessing accurate abstract-to-detailed geometric and semantic concepts from maps is crucial for robots to make informed and efficient decisions. To comprehensively model the environment and effectively manage the map data structure, we propose DHP-Mapping, a dense mapping system that utilizes multiple Truncated Signed Distance Field (TSDF) submaps and panoptic labels to hierarchically model the environment. The output map is able to maintain both voxel- and submap-level metric and semantic information. Two modules are presented to enhance the mapping efficiency and label consistency: (1) an inter-submap label fusion strategy to eliminate duplicate points across submaps and (2) a conditional random field (CRF) based approach to enhance panoptic labels through object label comprehension and contextual information. We conducted experiments with two public datasets including indoor and outdoor scenarios. Our system performs comparably to state-of-the-art (SOTA) methods across geometry and label accuracy evaluation metrics. The experiment results highlight the effectiveness and scalability of our system, as it is capable of constructing precise geometry and maintaining consistent panoptic labels. Our code is publicly available at https://github.com/hutslib/DHP-Mapping.
2311.00579
Hansika Weerasena
Hansika Weerasena and Prabhat Mishra
Revealing CNN Architectures via Side-Channel Analysis in Dataflow-based Inference Accelerators
null
null
null
null
cs.CR cs.AR cs.LG
http://creativecommons.org/licenses/by/4.0/
Convolutional Neural Networks (CNNs) are widely used in various domains. Recent advances in dataflow-based CNN accelerators have enabled CNN inference in resource-constrained edge devices. These dataflow accelerators utilize inherent data reuse of convolution layers to process CNN models efficiently. Concealing the architecture of CNN models is critical for privacy and security. This paper evaluates memory-based side-channel information to recover CNN architectures from dataflow-based CNN inference accelerators. The proposed attack exploits spatial and temporal data reuse of the dataflow mapping on CNN accelerators and architectural hints to recover the structure of CNN models. Experimental results demonstrate that our proposed side-channel attack can recover the structures of popular CNN models, namely Lenet, Alexnet, and VGGnet16.
[ { "created": "Wed, 1 Nov 2023 15:23:04 GMT", "version": "v1" } ]
2023-11-02
[ [ "Weerasena", "Hansika", "" ], [ "Mishra", "Prabhat", "" ] ]
Convolutional Neural Networks (CNNs) are widely used in various domains. Recent advances in dataflow-based CNN accelerators have enabled CNN inference in resource-constrained edge devices. These dataflow accelerators utilize inherent data reuse of convolution layers to process CNN models efficiently. Concealing the architecture of CNN models is critical for privacy and security. This paper evaluates memory-based side-channel information to recover CNN architectures from dataflow-based CNN inference accelerators. The proposed attack exploits spatial and temporal data reuse of the dataflow mapping on CNN accelerators and architectural hints to recover the structure of CNN models. Experimental results demonstrate that our proposed side-channel attack can recover the structures of popular CNN models, namely Lenet, Alexnet, and VGGnet16.
2103.13997
Yonatan Alon
Yonatan Alon
Real-time low-resource phoneme recognition on edge devices
The model and code described in this paper are publicly available at https://github.com/yonatankarimish/YonaVox
null
null
null
cs.CL cs.LG
http://creativecommons.org/licenses/by/4.0/
While speech recognition has seen a surge in interest and research over the last decade, most machine learning models for speech recognition either require large training datasets or lots of storage and memory. Combined with the prominence of English as the number one language in which audio data is available, this means most other languages currently lack good speech recognition models. The method presented in this paper shows how to create and train models for speech recognition in any language which are not only highly accurate, but also require very little storage, memory and training data when compared with traditional models. This allows training models to recognize any language and deploying them on edge devices such as mobile phones or car displays for fast real-time speech recognition.
[ { "created": "Thu, 25 Mar 2021 17:34:59 GMT", "version": "v1" } ]
2021-03-26
[ [ "Alon", "Yonatan", "" ] ]
While speech recognition has seen a surge in interest and research over the last decade, most machine learning models for speech recognition either require large training datasets or lots of storage and memory. Combined with the prominence of English as the number one language in which audio data is available, this means most other languages currently lack good speech recognition models. The method presented in this paper shows how to create and train models for speech recognition in any language which are not only highly accurate, but also require very little storage, memory and training data when compared with traditional models. This allows training models to recognize any language and deploying them on edge devices such as mobile phones or car displays for fast real-time speech recognition.
2204.02525
Vamsi Addanki
Vamsi Addanki, Chen Avin, Stefan Schmid
Mars: Near-Optimal Throughput with Shallow Buffers in Reconfigurable Datacenter Networks
null
null
null
null
cs.NI
http://creativecommons.org/licenses/by-nc-sa/4.0/
The performance of large-scale computing systems often critically depends on high-performance communication networks. Dynamically reconfigurable topologies, e.g., based on optical circuit switches, are emerging as an innovative new technology to deal with the explosive growth of datacenter traffic. Specifically, \emph{periodic} reconfigurable datacenter networks (RDCNs) such as RotorNet (SIGCOMM 2017), Opera (NSDI 2020) and Sirius (SIGCOMM 2020) have been shown to provide high throughput, by emulating a \emph{complete graph} through fast periodic circuit switch scheduling. However, to achieve such a high throughput, existing reconfigurable network designs pay a high price: in terms of potentially high delays, but also, as we show as a first contribution in this paper, in terms of the high buffer requirements. In particular, we show that under buffer constraints, emulating the high-throughput complete graph is infeasible at scale, and we uncover a spectrum of unvisited and attractive alternative RDCNs, which emulate regular graphs, but with lower node degree than the complete graph. We present Mars, a periodic reconfigurable topology which emulates a $d$-regular graph with near-optimal throughput. In particular, we systematically analyze how the degree~$d$ can be optimized for throughput given the available buffer and delay tolerance of the datacenter. We further show empirically that Mars achieves higher throughput compared to existing systems when buffer sizes are bounded.
[ { "created": "Wed, 6 Apr 2022 00:32:58 GMT", "version": "v1" }, { "created": "Thu, 7 Apr 2022 22:39:11 GMT", "version": "v2" }, { "created": "Wed, 28 Dec 2022 15:32:24 GMT", "version": "v3" } ]
2022-12-29
[ [ "Addanki", "Vamsi", "" ], [ "Avin", "Chen", "" ], [ "Schmid", "Stefan", "" ] ]
The performance of large-scale computing systems often critically depends on high-performance communication networks. Dynamically reconfigurable topologies, e.g., based on optical circuit switches, are emerging as an innovative new technology to deal with the explosive growth of datacenter traffic. Specifically, \emph{periodic} reconfigurable datacenter networks (RDCNs) such as RotorNet (SIGCOMM 2017), Opera (NSDI 2020) and Sirius (SIGCOMM 2020) have been shown to provide high throughput, by emulating a \emph{complete graph} through fast periodic circuit switch scheduling. However, to achieve such a high throughput, existing reconfigurable network designs pay a high price: in terms of potentially high delays, but also, as we show as a first contribution in this paper, in terms of the high buffer requirements. In particular, we show that under buffer constraints, emulating the high-throughput complete graph is infeasible at scale, and we uncover a spectrum of unvisited and attractive alternative RDCNs, which emulate regular graphs, but with lower node degree than the complete graph. We present Mars, a periodic reconfigurable topology which emulates a $d$-regular graph with near-optimal throughput. In particular, we systematically analyze how the degree~$d$ can be optimized for throughput given the available buffer and delay tolerance of the datacenter. We further show empirically that Mars achieves higher throughput compared to existing systems when buffer sizes are bounded.
2310.11867
Junaid Ali
Junaid Ali, Matthaeus Kleindessner, Florian Wenzel, Kailash Budhathoki, Volkan Cevher and Chris Russell
Evaluating the Fairness of Discriminative Foundation Models in Computer Vision
Accepted at AIES'23
null
10.1145/3600211.3604720
null
cs.CV cs.CY cs.LG
http://creativecommons.org/licenses/by/4.0/
We propose a novel taxonomy for bias evaluation of discriminative foundation models, such as Contrastive Language-Pretraining (CLIP), that are used for labeling tasks. We then systematically evaluate existing methods for mitigating bias in these models with respect to our taxonomy. Specifically, we evaluate OpenAI's CLIP and OpenCLIP models for key applications, such as zero-shot classification, image retrieval and image captioning. We categorize desired behaviors based around three axes: (i) if the task concerns humans; (ii) how subjective the task is (i.e., how likely it is that people from a diverse range of backgrounds would agree on a labeling); and (iii) the intended purpose of the task and if fairness is better served by impartiality (i.e., making decisions independent of the protected attributes) or representation (i.e., making decisions to maximize diversity). Finally, we provide quantitative fairness evaluations for both binary-valued and multi-valued protected attributes over ten diverse datasets. We find that fair PCA, a post-processing method for fair representations, works very well for debiasing in most of the aforementioned tasks while incurring only minor loss of performance. However, different debiasing approaches vary in their effectiveness depending on the task. Hence, one should choose the debiasing approach depending on the specific use case.
[ { "created": "Wed, 18 Oct 2023 10:32:39 GMT", "version": "v1" } ]
2023-10-19
[ [ "Ali", "Junaid", "" ], [ "Kleindessner", "Matthaeus", "" ], [ "Wenzel", "Florian", "" ], [ "Budhathoki", "Kailash", "" ], [ "Cevher", "Volkan", "" ], [ "Russell", "Chris", "" ] ]
We propose a novel taxonomy for bias evaluation of discriminative foundation models, such as Contrastive Language-Pretraining (CLIP), that are used for labeling tasks. We then systematically evaluate existing methods for mitigating bias in these models with respect to our taxonomy. Specifically, we evaluate OpenAI's CLIP and OpenCLIP models for key applications, such as zero-shot classification, image retrieval and image captioning. We categorize desired behaviors based around three axes: (i) if the task concerns humans; (ii) how subjective the task is (i.e., how likely it is that people from a diverse range of backgrounds would agree on a labeling); and (iii) the intended purpose of the task and if fairness is better served by impartiality (i.e., making decisions independent of the protected attributes) or representation (i.e., making decisions to maximize diversity). Finally, we provide quantitative fairness evaluations for both binary-valued and multi-valued protected attributes over ten diverse datasets. We find that fair PCA, a post-processing method for fair representations, works very well for debiasing in most of the aforementioned tasks while incurring only minor loss of performance. However, different debiasing approaches vary in their effectiveness depending on the task. Hence, one should choose the debiasing approach depending on the specific use case.
2202.09806
Andrew Cropper
Andrew Cropper and C\'eline Hocquette
Learning logic programs by discovering where not to search
Preprint for AAAI23
null
null
null
cs.LG cs.AI cs.LO
http://creativecommons.org/licenses/by/4.0/
The goal of inductive logic programming (ILP) is to search for a hypothesis that generalises training examples and background knowledge (BK). To improve performance, we introduce an approach that, before searching for a hypothesis, first discovers where not to search. We use given BK to discover constraints on hypotheses, such as that a number cannot be both even and odd. We use the constraints to bootstrap a constraint-driven ILP system. Our experiments on multiple domains (including program synthesis and game playing) show that our approach can (i) substantially reduce learning times by up to 97%, and (ii) scale to domains with millions of facts.
[ { "created": "Sun, 20 Feb 2022 12:32:03 GMT", "version": "v1" }, { "created": "Mon, 5 Dec 2022 09:42:29 GMT", "version": "v2" } ]
2022-12-06
[ [ "Cropper", "Andrew", "" ], [ "Hocquette", "Céline", "" ] ]
The goal of inductive logic programming (ILP) is to search for a hypothesis that generalises training examples and background knowledge (BK). To improve performance, we introduce an approach that, before searching for a hypothesis, first discovers where not to search. We use given BK to discover constraints on hypotheses, such as that a number cannot be both even and odd. We use the constraints to bootstrap a constraint-driven ILP system. Our experiments on multiple domains (including program synthesis and game playing) show that our approach can (i) substantially reduce learning times by up to 97%, and (ii) scale to domains with millions of facts.
1401.3421
Rourab Paul
Sruti Agarwal, Sangeet Saha, Rourab Paul, Amlan Chakrabarti
Performance Evaluation of ECC in Single and Multi Processor Architectures on FPGA Based Embedded System
Published Book Title: Elsevier Science and Technology, ICCN 2013, Bangalore, Page(s): 140 - 147, Volume 3, 03.elsevierst.2013.3.ICCN16, ISBN :9789351071044, Paper link:-http://searchdl.org/index.php/book_series/view/917
null
null
null
cs.AR cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cryptographic algorithms are computationally costly and the challenge is more if we need to execute them in resource constrained embedded systems. Field Programmable Gate Arrays (FPGAs) having programmable logic devices and processing cores, have proven to be highly feasible implementation platforms for embedded systems providing lesser design time and reconfigurability. Design parameters like throughput, resource utilization and power requirements are the key issues. The popular Elliptic Curve Cryptography (ECC), which is superior over other public-key crypto-systems like RSA in many ways, such as providing greater security for a smaller key size, is chosen in this work and the possibilities of its implementation in FPGA based embedded systems for both single and dual processor core architectures involving task parallelization have been explored. This exploration, which is first of its kind considering the other existing works, is a needed activity for evaluating the best possible architectural environment for ECC implementation on FPGA (Virtex4 XC4VFX12, FF668, -10) based embedded platform.
[ { "created": "Wed, 15 Jan 2014 03:25:41 GMT", "version": "v1" } ]
2014-02-20
[ [ "Agarwal", "Sruti", "" ], [ "Saha", "Sangeet", "" ], [ "Paul", "Rourab", "" ], [ "Chakrabarti", "Amlan", "" ] ]
Cryptographic algorithms are computationally costly and the challenge is more if we need to execute them in resource constrained embedded systems. Field Programmable Gate Arrays (FPGAs) having programmable logic devices and processing cores, have proven to be highly feasible implementation platforms for embedded systems providing lesser design time and reconfigurability. Design parameters like throughput, resource utilization and power requirements are the key issues. The popular Elliptic Curve Cryptography (ECC), which is superior over other public-key crypto-systems like RSA in many ways, such as providing greater security for a smaller key size, is chosen in this work and the possibilities of its implementation in FPGA based embedded systems for both single and dual processor core architectures involving task parallelization have been explored. This exploration, which is first of its kind considering the other existing works, is a needed activity for evaluating the best possible architectural environment for ECC implementation on FPGA (Virtex4 XC4VFX12, FF668, -10) based embedded platform.
2403.03173
Beiming Yuan
Ruizhuo Song, Beiming Yuan
Solving the Clustering Reasoning Problems by Modeling a Deep-Learning-Based Probabilistic Model
14 pages, 17 figures, 4 tables
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Visual abstract reasoning problems pose significant challenges to the perception and cognition abilities of artificial intelligence algorithms, demanding deeper pattern recognition and inductive reasoning beyond mere identification of explicit image features. Research advancements in this field often provide insights and technical support for other similar domains. In this study, we introduce PMoC, a deep-learning-based probabilistic model, achieving high reasoning accuracy in the Bongard-Logo, which stands as one of the most challenging clustering reasoning tasks. PMoC is a novel approach for constructing probabilistic models based on deep learning, which is distinctly different from previous techniques. PMoC revitalizes the probabilistic approach, which has been relatively weak in visual abstract reasoning. As a bonus, we also designed Pose-Transformer for complex visual abstract reasoning tasks. Inspired by capsule networks, it focuses on positional relationships in image data, boosting accuracy when combined with PMoC. Our Pose-Transformer effectively addresses reasoning difficulties associated with changes in the position of entities, outperforming previous models on the RAVEN and PGM datasets. RAVEN and PGM represent two significant progressive pattern reasoning problems. Finally, considering the deployment difficulties of Pose-Transformer, we introduced Straw-Pose-Transformer, a lightweight version. This study contributes to enhancing the capabilities of artificial intelligence in abstract reasoning, cognitive pattern, and probabilistic modeling of complex systems.
[ { "created": "Tue, 5 Mar 2024 18:08:29 GMT", "version": "v1" }, { "created": "Sat, 9 Mar 2024 18:53:21 GMT", "version": "v2" }, { "created": "Mon, 25 Mar 2024 04:42:22 GMT", "version": "v3" }, { "created": "Tue, 7 May 2024 14:34:34 GMT", "version": "v4" }, { "created": "Mon, 20 May 2024 01:54:21 GMT", "version": "v5" }, { "created": "Sat, 25 May 2024 16:01:07 GMT", "version": "v6" }, { "created": "Sun, 2 Jun 2024 16:35:23 GMT", "version": "v7" }, { "created": "Thu, 13 Jun 2024 09:41:55 GMT", "version": "v8" } ]
2024-06-14
[ [ "Song", "Ruizhuo", "" ], [ "Yuan", "Beiming", "" ] ]
Visual abstract reasoning problems pose significant challenges to the perception and cognition abilities of artificial intelligence algorithms, demanding deeper pattern recognition and inductive reasoning beyond mere identification of explicit image features. Research advancements in this field often provide insights and technical support for other similar domains. In this study, we introduce PMoC, a deep-learning-based probabilistic model, achieving high reasoning accuracy in the Bongard-Logo, which stands as one of the most challenging clustering reasoning tasks. PMoC is a novel approach for constructing probabilistic models based on deep learning, which is distinctly different from previous techniques. PMoC revitalizes the probabilistic approach, which has been relatively weak in visual abstract reasoning. As a bonus, we also designed Pose-Transformer for complex visual abstract reasoning tasks. Inspired by capsule networks, it focuses on positional relationships in image data, boosting accuracy when combined with PMoC. Our Pose-Transformer effectively addresses reasoning difficulties associated with changes in the position of entities, outperforming previous models on the RAVEN and PGM datasets. RAVEN and PGM represent two significant progressive pattern reasoning problems. Finally, considering the deployment difficulties of Pose-Transformer, we introduced Straw-Pose-Transformer, a lightweight version. This study contributes to enhancing the capabilities of artificial intelligence in abstract reasoning, cognitive pattern, and probabilistic modeling of complex systems.
1006.2955
Dinesh Dash
Dinesh Dash and Arijit Bishnu and Arobinda Gupta and Subhas C. Nandy
Approximation Algorithm for Line Segment Coverage for Wireless Sensor Network
16 pages, 5 figures,
Wireless Networks 19(5): 857-870 (2013)
10.1007/s11276-012-0506-4
null
cs.CG cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The coverage problem in wireless sensor networks deals with the problem of covering a region or parts of it with sensors. In this paper, we address the problem of covering a set of line segments in sensor networks. A line segment $\ell$ is said to be covered if it intersects the sensing regions of at least one sensor distributed in that region. We show that the problem of finding the minimum number of sensors needed to cover each member in a given set of line segments in a rectangular area is NP-hard. Next, we propose a constant factor approximation algorithm for the problem of covering a set of axis-parallel line segments. We also show that a PTAS exists for this problem.
[ { "created": "Tue, 15 Jun 2010 10:51:44 GMT", "version": "v1" } ]
2017-11-16
[ [ "Dash", "Dinesh", "" ], [ "Bishnu", "Arijit", "" ], [ "Gupta", "Arobinda", "" ], [ "Nandy", "Subhas C.", "" ] ]
The coverage problem in wireless sensor networks deals with the problem of covering a region or parts of it with sensors. In this paper, we address the problem of covering a set of line segments in sensor networks. A line segment $\ell$ is said to be covered if it intersects the sensing regions of at least one sensor distributed in that region. We show that the problem of finding the minimum number of sensors needed to cover each member in a given set of line segments in a rectangular area is NP-hard. Next, we propose a constant factor approximation algorithm for the problem of covering a set of axis-parallel line segments. We also show that a PTAS exists for this problem.
2308.02400
Felix Staudigl
Felix Staudigl, Mohammed Hossein, Tobias Ziegler, Hazem Al Indari, Rebecca Pelke, Sebastian Siegel, Dirk J. Wouters, Dominik Sisejkovic, Jan Moritz Joseph, and Rainer Leupers
Work-in-Progress: A Universal Instrumentation Platform for Non-Volatile Memories
null
null
null
null
cs.AR cs.CY
http://creativecommons.org/licenses/by/4.0/
Emerging non-volatile memories (NVMs) represent a disruptive technology that allows a paradigm shift from the conventional von Neumann architecture towards more efficient computing-in-memory (CIM) architectures. Several instrumentation platforms have been proposed to interface NVMs allowing the characterization of single cells and crossbar structures. However, these platforms suffer from low flexibility and are not capable of performing CIM operations on NVMs. Therefore, we recently designed and built the NeuroBreakoutBoard, a highly versatile instrumentation platform capable of executing CIM on NVMs. We present our preliminary results demonstrating a relative error < 5% in the range of 1 k$\Omega$ to 1 M$\Omega$ and showcase the switching behavior of a HfO$_2$/Ti-based memristive cell.
[ { "created": "Thu, 3 Aug 2023 14:24:57 GMT", "version": "v1" } ]
2023-08-07
[ [ "Staudigl", "Felix", "" ], [ "Hossein", "Mohammed", "" ], [ "Ziegler", "Tobias", "" ], [ "Indari", "Hazem Al", "" ], [ "Pelke", "Rebecca", "" ], [ "Siegel", "Sebastian", "" ], [ "Wouters", "Dirk J.", "" ], [ "Sisejkovic", "Dominik", "" ], [ "Joseph", "Jan Moritz", "" ], [ "Leupers", "Rainer", "" ] ]
Emerging non-volatile memories (NVMs) represent a disruptive technology that allows a paradigm shift from the conventional von Neumann architecture towards more efficient computing-in-memory (CIM) architectures. Several instrumentation platforms have been proposed to interface NVMs allowing the characterization of single cells and crossbar structures. However, these platforms suffer from low flexibility and are not capable of performing CIM operations on NVMs. Therefore, we recently designed and built the NeuroBreakoutBoard, a highly versatile instrumentation platform capable of executing CIM on NVMs. We present our preliminary results demonstrating a relative error < 5% in the range of 1 k$\Omega$ to 1 M$\Omega$ and showcase the switching behavior of a HfO$_2$/Ti-based memristive cell.
1603.05214
Tadeusz Litak
Stefan Milius, Tadeusz Litak
Guard Your Daggers and Traces: Properties of Guarded (Co-)recursion
invited to a special issue of Fundamenta Informaticae (FiCS'13). arXiv admin note: text overlap with arXiv:1309.0895
null
10.3233/FI-2017-1475
null
cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Motivated by the recent interest in models of guarded (co-)recursion, we study their equational properties. We formulate axioms for guarded fixpoint operators generalizing the axioms of iteration theories of Bloom and \'Esik. Models of these axioms include both standard (e.g., cpo-based) models of iteration theories and models of guarded recursion such as complete metric spaces or the topos of trees studied by Birkedal et al. We show that the standard result on the satisfaction of all Conway axioms by a unique dagger operation generalizes to the guarded setting. We also introduce the notion of guarded trace operator on a category, and we prove that guarded trace and guarded fixpoint operators are in one-to-one correspondence. Our results are intended as first steps leading, hopefully, towards future description of classifying theories for guarded recursion.
[ { "created": "Wed, 16 Mar 2016 18:39:53 GMT", "version": "v1" } ]
2018-08-21
[ [ "Milius", "Stefan", "" ], [ "Litak", "Tadeusz", "" ] ]
Motivated by the recent interest in models of guarded (co-)recursion, we study their equational properties. We formulate axioms for guarded fixpoint operators generalizing the axioms of iteration theories of Bloom and \'Esik. Models of these axioms include both standard (e.g., cpo-based) models of iteration theories and models of guarded recursion such as complete metric spaces or the topos of trees studied by Birkedal et al. We show that the standard result on the satisfaction of all Conway axioms by a unique dagger operation generalizes to the guarded setting. We also introduce the notion of guarded trace operator on a category, and we prove that guarded trace and guarded fixpoint operators are in one-to-one correspondence. Our results are intended as first steps leading, hopefully, towards future description of classifying theories for guarded recursion.
1211.6216
Jos\'e Verschae
Nicole Megow and Jos\'e Verschae
Dual techniques for scheduling on a machine with varying speed
null
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study scheduling problems on a machine with varying speed. Assuming a known speed function we ask for a cost-efficient scheduling solution. Our main result is a PTAS for minimizing the total weighted completion time in this setting. This also implies a PTAS for the closely related problem of scheduling to minimize generalized global cost functions. The key to our results is a re-interpretation of the problem within the well-known two-dimensional Gantt chart: instead of the standard approach of scheduling in the {\em time-dimension}, we construct scheduling solutions in the weight-dimension. We also consider a dynamic problem variant in which deciding upon the speed is part of the scheduling problem and we are interested in the tradeoff between scheduling cost and speed-scaling cost, which is typically the energy consumption. We observe that the optimal order is independent of the energy consumption and that the problem can be reduced to the setting where the speed of the machine is fixed, and thus admits a PTAS. Furthermore, we provide an FPTAS for the NP-hard problem variant in which the machine can run only on a fixed number of discrete speeds. Finally, we show how our results can be used to obtain a~$(2+\epsilon)$-approximation for scheduling preemptive jobs with release dates on multiple identical parallel machines.
[ { "created": "Tue, 27 Nov 2012 05:45:55 GMT", "version": "v1" }, { "created": "Mon, 11 Feb 2013 23:22:12 GMT", "version": "v2" }, { "created": "Tue, 4 Mar 2014 21:20:25 GMT", "version": "v3" } ]
2014-03-06
[ [ "Megow", "Nicole", "" ], [ "Verschae", "José", "" ] ]
We study scheduling problems on a machine with varying speed. Assuming a known speed function we ask for a cost-efficient scheduling solution. Our main result is a PTAS for minimizing the total weighted completion time in this setting. This also implies a PTAS for the closely related problem of scheduling to minimize generalized global cost functions. The key to our results is a re-interpretation of the problem within the well-known two-dimensional Gantt chart: instead of the standard approach of scheduling in the {\em time-dimension}, we construct scheduling solutions in the weight-dimension. We also consider a dynamic problem variant in which deciding upon the speed is part of the scheduling problem and we are interested in the tradeoff between scheduling cost and speed-scaling cost, which is typically the energy consumption. We observe that the optimal order is independent of the energy consumption and that the problem can be reduced to the setting where the speed of the machine is fixed, and thus admits a PTAS. Furthermore, we provide an FPTAS for the NP-hard problem variant in which the machine can run only on a fixed number of discrete speeds. Finally, we show how our results can be used to obtain a~$(2+\epsilon)$-approximation for scheduling preemptive jobs with release dates on multiple identical parallel machines.
2407.00717
Xikun Zhang
Xikun Zhang, Dongjin Song, Yushan Jiang, Yixin Chen, Dacheng Tao
Learning System Dynamics without Forgetting
null
null
null
null
cs.LG cs.AI cs.SY eess.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Predicting the trajectories of systems with unknown dynamics (\textit{i.e.} the governing rules) is crucial in various research fields, including physics and biology. This challenge has garnered significant attention from diverse communities. Most existing works focus on learning fixed system dynamics within one single system. However, real-world applications often involve multiple systems with different types of dynamics or evolving systems with non-stationary dynamics (dynamics shifts). When data from those systems are continuously collected and sequentially fed to machine learning models for training, these models tend to be biased toward the most recently learned dynamics, leading to catastrophic forgetting of previously observed/learned system dynamics. To this end, we aim to learn system dynamics via continual learning. Specifically, we present a novel framework of Mode-switching Graph ODE (MS-GODE), which can continually learn varying dynamics and encode the system-specific dynamics into binary masks over the model parameters. During the inference stage, the model can select the most confident mask based on the observational data to identify the system and predict future trajectories accordingly. Empirically, we systematically investigate the task configurations and compare the proposed MS-GODE with state-of-the-art techniques. More importantly, we construct a novel benchmark of biological dynamic systems, featuring diverse systems with disparate dynamics and significantly enriching the research field of machine learning for dynamic systems.
[ { "created": "Sun, 30 Jun 2024 14:55:18 GMT", "version": "v1" } ]
2024-07-02
[ [ "Zhang", "Xikun", "" ], [ "Song", "Dongjin", "" ], [ "Jiang", "Yushan", "" ], [ "Chen", "Yixin", "" ], [ "Tao", "Dacheng", "" ] ]
Predicting the trajectories of systems with unknown dynamics (\textit{i.e.} the governing rules) is crucial in various research fields, including physics and biology. This challenge has garnered significant attention from diverse communities. Most existing works focus on learning fixed system dynamics within one single system. However, real-world applications often involve multiple systems with different types of dynamics or evolving systems with non-stationary dynamics (dynamics shifts). When data from those systems are continuously collected and sequentially fed to machine learning models for training, these models tend to be biased toward the most recently learned dynamics, leading to catastrophic forgetting of previously observed/learned system dynamics. To this end, we aim to learn system dynamics via continual learning. Specifically, we present a novel framework of Mode-switching Graph ODE (MS-GODE), which can continually learn varying dynamics and encode the system-specific dynamics into binary masks over the model parameters. During the inference stage, the model can select the most confident mask based on the observational data to identify the system and predict future trajectories accordingly. Empirically, we systematically investigate the task configurations and compare the proposed MS-GODE with state-of-the-art techniques. More importantly, we construct a novel benchmark of biological dynamic systems, featuring diverse systems with disparate dynamics and significantly enriching the research field of machine learning for dynamic systems.
2105.01697
Noel Csomay-Shanklin
Noel Csomay-Shanklin, Ryan K. Cosner, Min Dai, Andrew J. Taylor, Aaron D. Ames
Episodic Learning for Safe Bipedal Locomotion with Control Barrier Functions and Projection-to-State Safety
13 pages, 4 figures, to appear at the Conference on Learning for Dynamics and Control 2021
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper combines episodic learning and control barrier functions in the setting of bipedal locomotion. The safety guarantees that control barrier functions provide are only valid with perfect model knowledge; however, this assumption cannot be met on hardware platforms. To address this, we utilize the notion of projection-to-state safety paired with a machine learning framework in an attempt to learn the model uncertainty as it affects the barrier functions. The proposed approach is demonstrated both in simulation and on hardware for the AMBER-3M bipedal robot in the context of the stepping-stone problem, which requires precise foot placement while walking dynamically.
[ { "created": "Tue, 4 May 2021 18:33:28 GMT", "version": "v1" } ]
2021-05-06
[ [ "Csomay-Shanklin", "Noel", "" ], [ "Cosner", "Ryan K.", "" ], [ "Dai", "Min", "" ], [ "Taylor", "Andrew J.", "" ], [ "Ames", "Aaron D.", "" ] ]
This paper combines episodic learning and control barrier functions in the setting of bipedal locomotion. The safety guarantees that control barrier functions provide are only valid with perfect model knowledge; however, this assumption cannot be met on hardware platforms. To address this, we utilize the notion of projection-to-state safety paired with a machine learning framework in an attempt to learn the model uncertainty as it affects the barrier functions. The proposed approach is demonstrated both in simulation and on hardware for the AMBER-3M bipedal robot in the context of the stepping-stone problem, which requires precise foot placement while walking dynamically.
2302.08062
Chengbin Hou
Chengbin Hou, Xinyu Lin, Hanhui Huang, Sheng Xu, Junxuan Fan, Yukun Shi, Hairong Lv
Fossil Image Identification using Deep Learning Ensembles of Data Augmented Multiviews
published in Methods in Ecology and Evolution
Methods in Ecology and Evolution, 14, 3020-3034 (2023)
10.1111/2041-210X.14229
null
cs.CV cs.AI q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Identification of fossil species is crucial to evolutionary studies. Recent advances from deep learning have shown promising prospects in fossil image identification. However, the quantity and quality of labeled fossil images are often limited due to fossil preservation, conditioned sampling, and expensive and inconsistent label annotation by domain experts, which pose great challenges to training deep learning based image classification models. To address these challenges, we follow the idea of the wisdom of crowds and propose a multiview ensemble framework, which collects Original (O), Gray (G), and Skeleton (S) views of each fossil image reflecting its different characteristics to train multiple base models, and then makes the final decision via soft voting. Experiments on the largest fusulinid dataset with 2400 images show that the proposed OGS consistently outperforms baselines (using a single model for each view), and obtains superior or comparable performance compared to OOO (using three base models for three of the same Original views). Besides, as the training data decreases, the proposed framework achieves more gains. While considering the identification consistency estimation with respect to human experts, OGS receives the highest agreement with the original labels of the dataset and with the re-identifications of two human experts. The validation performance provides a quantitative estimation of consistency across different experts and genera. We conclude that the proposed framework can present state-of-the-art performance in the fusulinid fossil identification case study. This framework is designed for general fossil identification and it is expected to see applications to other fossil datasets in future work. The source code is publicly available at https://github.com/houchengbin/Fossil-Image-Identification to benefit future research in fossil image identification.
[ { "created": "Thu, 16 Feb 2023 03:57:21 GMT", "version": "v1" }, { "created": "Wed, 20 Sep 2023 08:53:59 GMT", "version": "v2" }, { "created": "Fri, 2 Feb 2024 02:04:32 GMT", "version": "v3" } ]
2024-02-05
[ [ "Hou", "Chengbin", "" ], [ "Lin", "Xinyu", "" ], [ "Huang", "Hanhui", "" ], [ "Xu", "Sheng", "" ], [ "Fan", "Junxuan", "" ], [ "Shi", "Yukun", "" ], [ "Lv", "Hairong", "" ] ]
Identification of fossil species is crucial to evolutionary studies. Recent advances from deep learning have shown promising prospects in fossil image identification. However, the quantity and quality of labeled fossil images are often limited due to fossil preservation, conditioned sampling, and expensive and inconsistent label annotation by domain experts, which pose great challenges to training deep learning based image classification models. To address these challenges, we follow the idea of the wisdom of crowds and propose a multiview ensemble framework, which collects Original (O), Gray (G), and Skeleton (S) views of each fossil image reflecting its different characteristics to train multiple base models, and then makes the final decision via soft voting. Experiments on the largest fusulinid dataset with 2400 images show that the proposed OGS consistently outperforms baselines (using a single model for each view), and obtains superior or comparable performance compared to OOO (using three base models for three of the same Original views). Besides, as the training data decreases, the proposed framework achieves more gains. While considering the identification consistency estimation with respect to human experts, OGS receives the highest agreement with the original labels of the dataset and with the re-identifications of two human experts. The validation performance provides a quantitative estimation of consistency across different experts and genera. We conclude that the proposed framework can present state-of-the-art performance in the fusulinid fossil identification case study. This framework is designed for general fossil identification and it is expected to see applications to other fossil datasets in future work. The source code is publicly available at https://github.com/houchengbin/Fossil-Image-Identification to benefit future research in fossil image identification.
2112.06068
Peter Plantinga
Peter Plantinga, Deblin Bagchi, Eric Fosler-Lussier
Perceptual Loss with Recognition Model for Single-Channel Enhancement and Robust ASR
null
null
null
null
cs.SD eess.AS
http://creativecommons.org/licenses/by/4.0/
Single-channel speech enhancement approaches do not always improve automatic recognition rates in the presence of noise, because they can introduce distortions unhelpful for recognition. Following a trend towards end-to-end training of sequential neural network models, several research groups have addressed this problem with joint training of a front-end enhancement module with a back-end recognition module. While this approach ensures enhancement outputs are helpful for recognition, the enhancement model can overfit to the training data, weakening the recognition model in the presence of unseen noise. To address this, we used a pre-trained acoustic model to generate a perceptual loss that makes speech enhancement more aware of the phonetic properties of the signal. This approach keeps some benefits of joint training, while alleviating the overfitting problem. Experiments on the Voicebank + DEMAND dataset for enhancement show that this approach achieves a new state of the art for some objective enhancement scores. In combination with distortion-independent training, our approach gets a WER of 2.80\% on the test set, which is a more than 20\% relative improvement in recognition performance over joint training, and a 14\% relative improvement over distortion-independent mask training.
[ { "created": "Sat, 11 Dec 2021 20:44:26 GMT", "version": "v1" } ]
2021-12-14
[ [ "Plantinga", "Peter", "" ], [ "Bagchi", "Deblin", "" ], [ "Fosler-Lussier", "Eric", "" ] ]
Single-channel speech enhancement approaches do not always improve automatic recognition rates in the presence of noise, because they can introduce distortions unhelpful for recognition. Following a trend towards end-to-end training of sequential neural network models, several research groups have addressed this problem with joint training of a front-end enhancement module with a back-end recognition module. While this approach ensures enhancement outputs are helpful for recognition, the enhancement model can overfit to the training data, weakening the recognition model in the presence of unseen noise. To address this, we used a pre-trained acoustic model to generate a perceptual loss that makes speech enhancement more aware of the phonetic properties of the signal. This approach keeps some benefits of joint training, while alleviating the overfitting problem. Experiments on the Voicebank + DEMAND dataset for enhancement show that this approach achieves a new state of the art for some objective enhancement scores. In combination with distortion-independent training, our approach gets a WER of 2.80\% on the test set, which is a more than 20\% relative improvement in recognition performance over joint training, and a 14\% relative improvement over distortion-independent mask training.
2205.14252
Aditya Vaidya
Aditya R. Vaidya, Shailee Jain, Alexander G. Huth
Self-supervised models of audio effectively explain human cortical responses to speech
Accepted to the International Conference on Machine Learning (ICML) 2022
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Self-supervised language models are very effective at predicting high-level cortical responses during language comprehension. However, the best current models of lower-level auditory processing in the human brain rely on either hand-constructed acoustic filters or representations from supervised audio neural networks. In this work, we capitalize on the progress of self-supervised speech representation learning (SSL) to create new state-of-the-art models of the human auditory system. Compared against acoustic baselines, phonemic features, and supervised models, representations from the middle layers of self-supervised models (APC, wav2vec, wav2vec 2.0, and HuBERT) consistently yield the best prediction performance for fMRI recordings within the auditory cortex (AC). Brain areas involved in low-level auditory processing exhibit a preference for earlier SSL model layers, whereas higher-level semantic areas prefer later layers. We show that these trends are due to the models' ability to encode information at multiple linguistic levels (acoustic, phonetic, and lexical) along their representation depth. Overall, these results show that self-supervised models effectively capture the hierarchy of information relevant to different stages of speech processing in human cortex.
[ { "created": "Fri, 27 May 2022 22:04:02 GMT", "version": "v1" } ]
2022-05-31
[ [ "Vaidya", "Aditya R.", "" ], [ "Jain", "Shailee", "" ], [ "Huth", "Alexander G.", "" ] ]
Self-supervised language models are very effective at predicting high-level cortical responses during language comprehension. However, the best current models of lower-level auditory processing in the human brain rely on either hand-constructed acoustic filters or representations from supervised audio neural networks. In this work, we capitalize on the progress of self-supervised speech representation learning (SSL) to create new state-of-the-art models of the human auditory system. Compared against acoustic baselines, phonemic features, and supervised models, representations from the middle layers of self-supervised models (APC, wav2vec, wav2vec 2.0, and HuBERT) consistently yield the best prediction performance for fMRI recordings within the auditory cortex (AC). Brain areas involved in low-level auditory processing exhibit a preference for earlier SSL model layers, whereas higher-level semantic areas prefer later layers. We show that these trends are due to the models' ability to encode information at multiple linguistic levels (acoustic, phonetic, and lexical) along their representation depth. Overall, these results show that self-supervised models effectively capture the hierarchy of information relevant to different stages of speech processing in human cortex.
2105.05557
Daniel Wiegreffe
Christopher Schr\"oder, Kim B\"urgl, Yves Annanias, Andreas Niekler, Lydia M\"uller, Daniel Wiegreffe, Christian Bender, Christoph Mengs, Gerik Scheuermann, Gerhard Heyer
Supporting Land Reuse of Former Open Pit Mining Sites using Text Classification and Active Learning
null
Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), 2021
10.18653/v1/2021.acl-long.320
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Open pit mines left many regions worldwide inhospitable or uninhabitable. To put these regions back into use, entire stretches of land must be renaturalized. For the sustainable subsequent use or transfer to a new primary use, many contaminated sites and much soil information have to be permanently managed. In most cases, this information is available in the form of expert reports in unstructured data collections or file folders, which in the best case are digitized. Due to the size and complexity of the data, it is difficult for a single person to have an overview of this data in order to be able to make reliable statements. This is one of the most important obstacles to the rapid transfer of these areas to after-use. An information-based approach to this issue supports fulfilling several Sustainable Development Goals regarding environmental issues, health and climate action. We use a stack of Optical Character Recognition, Text Classification, Active Learning and Geographic Information System Visualization to effectively mine and visualize this information. Subsequently, we link the extracted information to geographic coordinates and visualize them using a Geographic Information System. Active Learning plays a vital role because our dataset provides no training data. In total, we process nine categories and actively learn their representation in our dataset. We evaluate the OCR, Active Learning and Text Classification separately to report the performance of the system. Active Learning and text classification results are twofold: Whereas our categories about restrictions work sufficiently well ($>$.85 F1), the seven topic-oriented categories were complicated for human coders and hence the results achieved mediocre evaluation scores ($<$.70 F1).
[ { "created": "Wed, 12 May 2021 10:18:14 GMT", "version": "v1" }, { "created": "Thu, 13 May 2021 10:47:44 GMT", "version": "v2" }, { "created": "Thu, 2 Dec 2021 10:17:25 GMT", "version": "v3" }, { "created": "Tue, 22 Mar 2022 12:02:01 GMT", "version": "v4" } ]
2022-03-23
[ [ "Schröder", "Christopher", "" ], [ "Bürgl", "Kim", "" ], [ "Annanias", "Yves", "" ], [ "Niekler", "Andreas", "" ], [ "Müller", "Lydia", "" ], [ "Wiegreffe", "Daniel", "" ], [ "Bender", "Christian", "" ], [ "Mengs", "Christoph", "" ], [ "Scheuermann", "Gerik", "" ], [ "Heyer", "Gerhard", "" ] ]
Open pit mines left many regions worldwide inhospitable or uninhabitable. To put these regions back into use, entire stretches of land must be renaturalized. For the sustainable subsequent use or transfer to a new primary use, many contaminated sites and much soil information have to be permanently managed. In most cases, this information is available in the form of expert reports in unstructured data collections or file folders, which in the best case are digitized. Due to the size and complexity of the data, it is difficult for a single person to have an overview of this data in order to be able to make reliable statements. This is one of the most important obstacles to the rapid transfer of these areas to after-use. An information-based approach to this issue supports fulfilling several Sustainable Development Goals regarding environmental issues, health and climate action. We use a stack of Optical Character Recognition, Text Classification, Active Learning and Geographic Information System Visualization to effectively mine and visualize this information. Subsequently, we link the extracted information to geographic coordinates and visualize them using a Geographic Information System. Active Learning plays a vital role because our dataset provides no training data. In total, we process nine categories and actively learn their representation in our dataset. We evaluate the OCR, Active Learning and Text Classification separately to report the performance of the system. Active Learning and text classification results are twofold: Whereas our categories about restrictions work sufficiently well ($>$.85 F1), the seven topic-oriented categories were complicated for human coders and hence the results achieved mediocre evaluation scores ($<$.70 F1).
2210.08976
Tobias Wenzel
Tobias Wenzel
Global technology access in biolabs -- from DIY trend to an open source transformation
null
PLoS Biol 21(1): e3001931 (2023)
10.1371/journal.pbio.3001931
null
cs.CY cs.AR q-bio.OT
http://creativecommons.org/licenses/by/4.0/
This article illustrates how open hardware solutions are implemented by researchers as a strategy to access technology for cutting-edge research. Specifically, it is discussed what kind of open technologies are most enabling in scientific environments characterized by economic and infrastructural constraints. It is demonstrated that do-it-yourself (DIY) technologies are already widespread, in particular in countries with lower science funding, which in turn is the basis for the development of open technologies. Beyond financial accessibility, open hardware can be transformational to the technology access of laboratories through advantages in local production and direct knowledge transfer. Central drivers of the adoption of appropriate technologies in biolabs globally are open sharing, digital fabrication, local production, standard parts use, and detailed documentation.
[ { "created": "Fri, 30 Sep 2022 16:34:27 GMT", "version": "v1" } ]
2023-01-19
[ [ "Wenzel", "Tobias", "" ] ]
This article illustrates how open hardware solutions are implemented by researchers as a strategy to access technology for cutting-edge research. Specifically, it is discussed what kind of open technologies are most enabling in scientific environments characterized by economic and infrastructural constraints. It is demonstrated that do-it-yourself (DIY) technologies are already widespread, in particular in countries with lower science funding, which in turn is the basis for the development of open technologies. Beyond financial accessibility, open hardware can be transformational to the technology access of laboratories through advantages in local production and direct knowledge transfer. Central drivers of the adoption of appropriate technologies in biolabs globally are open sharing, digital fabrication, local production, standard parts use, and detailed documentation.
2302.13203
Shengbo Wang
Shengbo Wang, Nian Si, Jose Blanchet, and Zhengyuan Zhou
A Finite Sample Complexity Bound for Distributionally Robust Q-learning
Accepted by AISTATS 2023
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider a reinforcement learning setting in which the deployment environment is different from the training environment. Applying a robust Markov decision processes formulation, we extend the distributionally robust $Q$-learning framework studied in Liu et al. [2022]. Further, we improve the design and analysis of their multi-level Monte Carlo estimator. Assuming access to a simulator, we prove that the worst-case expected sample complexity of our algorithm to learn the optimal robust $Q$-function within an $\epsilon$ error in the sup norm is upper bounded by $\tilde O(|S||A|(1-\gamma)^{-5}\epsilon^{-2}p_{\wedge}^{-6}\delta^{-4})$, where $\gamma$ is the discount rate, $p_{\wedge}$ is the non-zero minimal support probability of the transition kernels and $\delta$ is the uncertainty size. This is the first sample complexity result for the model-free robust RL problem. Simulation studies further validate our theoretical results.
[ { "created": "Sun, 26 Feb 2023 01:15:32 GMT", "version": "v1" }, { "created": "Fri, 3 Mar 2023 00:52:20 GMT", "version": "v2" }, { "created": "Wed, 31 Jul 2024 20:59:45 GMT", "version": "v3" } ]
2024-08-02
[ [ "Wang", "Shengbo", "" ], [ "Si", "Nian", "" ], [ "Blanchet", "Jose", "" ], [ "Zhou", "Zhengyuan", "" ] ]
We consider a reinforcement learning setting in which the deployment environment is different from the training environment. Applying a robust Markov decision processes formulation, we extend the distributionally robust $Q$-learning framework studied in Liu et al. [2022]. Further, we improve the design and analysis of their multi-level Monte Carlo estimator. Assuming access to a simulator, we prove that the worst-case expected sample complexity of our algorithm to learn the optimal robust $Q$-function within an $\epsilon$ error in the sup norm is upper bounded by $\tilde O(|S||A|(1-\gamma)^{-5}\epsilon^{-2}p_{\wedge}^{-6}\delta^{-4})$, where $\gamma$ is the discount rate, $p_{\wedge}$ is the non-zero minimal support probability of the transition kernels and $\delta$ is the uncertainty size. This is the first sample complexity result for the model-free robust RL problem. Simulation studies further validate our theoretical results.
2011.14479
Huaxiong Li
Haoxing Chen and Huaxiong Li and Yaohui Li and Chunlin Chen
Multi-scale Adaptive Task Attention Network for Few-Shot Learning
null
null
null
null
cs.CV cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
The goal of few-shot learning is to classify unseen categories with few labeled samples. Recently, the low-level information metric-learning based methods have achieved satisfying performance, since local representations (LRs) are more consistent between seen and unseen classes. However, most of these methods deal with each category in the support set independently, which is not sufficient to measure the relation between features, especially in a certain task. Moreover, the low-level information-based metric learning method suffers when dominant objects of different scales exist in a complex background. To address these issues, this paper proposes a novel Multi-scale Adaptive Task Attention Network (MATANet) for few-shot learning. Specifically, we first use a multi-scale feature generator to generate multiple features at different scales. Then, an adaptive task attention module is proposed to select the most important LRs among the entire task. Afterwards, a similarity-to-class module and a fusion layer are utilized to calculate a joint multi-scale similarity between the query image and the support set. Extensive experiments on popular benchmarks clearly show the effectiveness of the proposed MATANet compared with state-of-the-art methods.
[ { "created": "Mon, 30 Nov 2020 00:36:01 GMT", "version": "v1" } ]
2020-12-01
[ [ "Chen", "Haoxing", "" ], [ "Li", "Huaxiong", "" ], [ "Li", "Yaohui", "" ], [ "Chen", "Chunlin", "" ] ]
The goal of few-shot learning is to classify unseen categories with few labeled samples. Recently, the low-level information metric-learning based methods have achieved satisfying performance, since local representations (LRs) are more consistent between seen and unseen classes. However, most of these methods deal with each category in the support set independently, which is not sufficient to measure the relation between features, especially in a certain task. Moreover, the low-level information-based metric learning method suffers when dominant objects of different scales exist in a complex background. To address these issues, this paper proposes a novel Multi-scale Adaptive Task Attention Network (MATANet) for few-shot learning. Specifically, we first use a multi-scale feature generator to generate multiple features at different scales. Then, an adaptive task attention module is proposed to select the most important LRs among the entire task. Afterwards, a similarity-to-class module and a fusion layer are utilized to calculate a joint multi-scale similarity between the query image and the support set. Extensive experiments on popular benchmarks clearly show the effectiveness of the proposed MATANet compared with state-of-the-art methods.
2408.06537
Mara Finkelstein
Mara Finkelstein, David Vilar, and Markus Freitag
Introducing the NewsPaLM MBR and QE Dataset: LLM-Generated High-Quality Parallel Data Outperforms Traditional Web-Crawled Data
null
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Recent research in neural machine translation (NMT) has shown that training on high-quality machine-generated data can outperform training on human-generated data. This work accompanies the first-ever release of an LLM-generated, MBR-decoded and QE-reranked dataset with both sentence-level and multi-sentence examples. We perform extensive experiments to demonstrate the quality of our dataset in terms of its downstream impact on NMT model performance. We find that training from scratch on our (machine-generated) dataset outperforms training on the (web-crawled) WMT'23 training dataset (which is 300 times larger), and also outperforms training on the top-quality subset of the WMT'23 training dataset. We also find that performing self-distillation by finetuning the LLM which generated this dataset outperforms the LLM's strong few-shot baseline. These findings corroborate the quality of our dataset, and demonstrate the value of high-quality machine-generated data in improving performance of NMT models.
[ { "created": "Tue, 13 Aug 2024 00:06:56 GMT", "version": "v1" }, { "created": "Wed, 14 Aug 2024 18:38:11 GMT", "version": "v2" } ]
2024-08-16
[ [ "Finkelstein", "Mara", "" ], [ "Vilar", "David", "" ], [ "Freitag", "Markus", "" ] ]
Recent research in neural machine translation (NMT) has shown that training on high-quality machine-generated data can outperform training on human-generated data. This work accompanies the first-ever release of an LLM-generated, MBR-decoded and QE-reranked dataset with both sentence-level and multi-sentence examples. We perform extensive experiments to demonstrate the quality of our dataset in terms of its downstream impact on NMT model performance. We find that training from scratch on our (machine-generated) dataset outperforms training on the (web-crawled) WMT'23 training dataset (which is 300 times larger), and also outperforms training on the top-quality subset of the WMT'23 training dataset. We also find that performing self-distillation by finetuning the LLM which generated this dataset outperforms the LLM's strong few-shot baseline. These findings corroborate the quality of our dataset, and demonstrate the value of high-quality machine-generated data in improving performance of NMT models.
2406.15126
Lin Long
Lin Long, Rui Wang, Ruixuan Xiao, Junbo Zhao, Xiao Ding, Gang Chen, Haobo Wang
On LLMs-Driven Synthetic Data Generation, Curation, and Evaluation: A Survey
A survey on LLMs-driven synthetic data generation, curation and evaluation
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Within the evolving landscape of deep learning, the dilemma of data quantity and quality has been a long-standing problem. The recent advent of Large Language Models (LLMs) offers a data-centric solution to alleviate the limitations of real-world data with synthetic data generation. However, current investigations into this field lack a unified framework and mostly stay on the surface. Therefore, this paper provides an organization of relevant studies based on a generic workflow of synthetic data generation. By doing so, we highlight the gaps within existing research and outline prospective avenues for future study. This work aims to shepherd the academic and industrial communities towards deeper, more methodical inquiries into the capabilities and applications of LLMs-driven synthetic data generation.
[ { "created": "Fri, 14 Jun 2024 07:47:09 GMT", "version": "v1" } ]
2024-06-24
[ [ "Long", "Lin", "" ], [ "Wang", "Rui", "" ], [ "Xiao", "Ruixuan", "" ], [ "Zhao", "Junbo", "" ], [ "Ding", "Xiao", "" ], [ "Chen", "Gang", "" ], [ "Wang", "Haobo", "" ] ]
Within the evolving landscape of deep learning, the dilemma of data quantity and quality has been a long-standing problem. The recent advent of Large Language Models (LLMs) offers a data-centric solution to alleviate the limitations of real-world data with synthetic data generation. However, current investigations into this field lack a unified framework and mostly stay on the surface. Therefore, this paper provides an organization of relevant studies based on a generic workflow of synthetic data generation. By doing so, we highlight the gaps within existing research and outline prospective avenues for future study. This work aims to shepherd the academic and industrial communities towards deeper, more methodical inquiries into the capabilities and applications of LLMs-driven synthetic data generation.
1806.02377
Ugur Kursuncu
Ugur Kursuncu, Manas Gaur, Usha Lokala, Krishnaprasad Thirunarayan, Amit Sheth and I. Budak Arpinar
Predictive Analysis on Twitter: Techniques and Applications
null
Emerging Research Challenges and Opportunities in Computational Social Network Analysis and Mining. (2019) 67-104
10.1007/978-3-319-94105-9_4
null
cs.SI
http://creativecommons.org/licenses/by-nc-sa/4.0/
Predictive analysis of social media data has attracted considerable attention from the research community as well as the business world because of the essential and actionable information it can provide. Over the years, extensive experimentation and analysis for insights have been carried out using Twitter data in various domains such as healthcare, public health, politics, social sciences, and demographics. In this chapter, we discuss techniques, approaches and state-of-the-art applications of predictive analysis of Twitter data. Specifically, we present fine-grained analysis involving aspects such as sentiment, emotion, and the use of domain knowledge in the coarse-grained analysis of Twitter data for making decisions and taking actions, and relate a few success stories.
[ { "created": "Wed, 6 Jun 2018 18:41:32 GMT", "version": "v1" } ]
2023-09-04
[ [ "Kursuncu", "Ugur", "" ], [ "Gaur", "Manas", "" ], [ "Lokala", "Usha", "" ], [ "Thirunarayan", "Krishnaprasad", "" ], [ "Sheth", "Amit", "" ], [ "Arpinar", "I. Budak", "" ] ]
Predictive analysis of social media data has attracted considerable attention from the research community as well as the business world because of the essential and actionable information it can provide. Over the years, extensive experimentation and analysis for insights have been carried out using Twitter data in various domains such as healthcare, public health, politics, social sciences, and demographics. In this chapter, we discuss techniques, approaches and state-of-the-art applications of predictive analysis of Twitter data. Specifically, we present fine-grained analysis involving aspects such as sentiment, emotion, and the use of domain knowledge in the coarse-grained analysis of Twitter data for making decisions and taking actions, and relate a few success stories.
2303.01595
Ellis Solaiman
Adrian Delchev and Ioannis Sfyrakis and Ellis Solaiman
Developing a Compiler for EROP -- A Language for the Specification of Smart Contracts, An Experience Report
null
null
null
null
cs.PL cs.DC cs.SE
http://creativecommons.org/licenses/by/4.0/
A smart contract is a translation of a standard paper-based contract that can be enforced and executed by a contract management system. At a high level of abstraction, a contract is only a document that describes how the signing parties are to behave in different scenarios; nevertheless, the translation of a typical paper-based contract to its electronic counterpart has proved to be both time-consuming and difficult. The requirement for a language capable of capturing the core of a contract in simple phrases and definitions has been a focus of study for many years. EROP (Events, Rights, Obligations, Prohibitions) is a contract specification language that breaks a contract down into sets of events, rights, obligations, and prohibitions.
[ { "created": "Thu, 2 Mar 2023 21:35:25 GMT", "version": "v1" } ]
2023-03-06
[ [ "Delchev", "Adrian", "" ], [ "Sfyrakis", "Ioannis", "" ], [ "Solaiman", "Ellis", "" ] ]
A smart contract is a translation of a standard paper-based contract that can be enforced and executed by a contract management system. At a high level of abstraction, a contract is only a document that describes how the signing parties are to behave in different scenarios; nevertheless, the translation of a typical paper-based contract to its electronic counterpart has proved to be both time-consuming and difficult. The requirement for a language capable of capturing the core of a contract in simple phrases and definitions has been a focus of study for many years. EROP (Events, Rights, Obligations, Prohibitions) is a contract specification language that breaks a contract down into sets of events, rights, obligations, and prohibitions.
1912.07800
Ethan Chi
Bowen Jing, Ethan A. Chi, Jillian Tang
SGVAE: Sequential Graph Variational Autoencoder
null
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Generative models of graphs are well-known, but many existing models are limited in scalability and expressivity. We present a novel sequential graphical variational autoencoder operating directly on graphical representations of data. In our model, the encoding and decoding of a graph is framed as a sequential deconstruction and construction process, respectively, enabling the learning of a latent space. Experiments on a cycle dataset show promise, but highlight the need for a relaxation of the distribution over node permutations.
[ { "created": "Tue, 17 Dec 2019 03:19:47 GMT", "version": "v1" } ]
2019-12-18
[ [ "Jing", "Bowen", "" ], [ "Chi", "Ethan A.", "" ], [ "Tang", "Jillian", "" ] ]
Generative models of graphs are well-known, but many existing models are limited in scalability and expressivity. We present a novel sequential graphical variational autoencoder operating directly on graphical representations of data. In our model, the encoding and decoding of a graph are framed as a sequential deconstruction and construction process, respectively, enabling the learning of a latent space. Experiments on a cycle dataset show promise, but highlight the need for a relaxation of the distribution over node permutations.
1807.01956
Markus M\"uller
Markus M\"uller, Sebastian St\"uker, and Alex Waibel
Neural Language Codes for Multilingual Acoustic Models
5 pages, 3 figures, accepted at Interspeech 2018
null
null
null
cs.CL cs.LG cs.SD eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Multilingual Speech Recognition is one of the most costly AI problems, because each language (7,000+) and even different accents require their own acoustic models to obtain best recognition performance. Even though they all use the same phoneme symbols, each language and accent imposes its own coloring or "twang". Many adaptive approaches have been proposed, but they require further training, additional data and generally are inferior to monolingually trained models. In this paper, we propose a different approach that uses a large multilingual model that is \emph{modulated} by the codes generated by an ancillary network that learns to code useful differences between the "twangs" of human languages. We use Meta-Pi networks to have one network (the language code net) gate the activity of neurons in another (the acoustic model nets). Our results show that during recognition multilingual Meta-Pi networks quickly adapt to the proper language coloring without retraining or new data, and perform better than monolingually trained networks. The model was evaluated by jointly training the acoustic modeling nets and the modulating language code nets, optimizing them for best recognition performance.
[ { "created": "Thu, 5 Jul 2018 12:15:34 GMT", "version": "v1" } ]
2018-07-06
[ [ "Müller", "Markus", "" ], [ "Stüker", "Sebastian", "" ], [ "Waibel", "Alex", "" ] ]
Multilingual Speech Recognition is one of the most costly AI problems, because each language (7,000+) and even different accents require their own acoustic models to obtain best recognition performance. Even though they all use the same phoneme symbols, each language and accent imposes its own coloring or "twang". Many adaptive approaches have been proposed, but they require further training, additional data and generally are inferior to monolingually trained models. In this paper, we propose a different approach that uses a large multilingual model that is \emph{modulated} by the codes generated by an ancillary network that learns to code useful differences between the "twangs" of human languages. We use Meta-Pi networks to have one network (the language code net) gate the activity of neurons in another (the acoustic model nets). Our results show that during recognition multilingual Meta-Pi networks quickly adapt to the proper language coloring without retraining or new data, and perform better than monolingually trained networks. The model was evaluated by jointly training the acoustic modeling nets and the modulating language code nets, optimizing them for best recognition performance.
2207.01920
Jos\'e Marcelo Fernandes
J. Fernandes, J. S\'a Silva, A. Rodrigues, F. Boavida, R. Gaspar, C. Godinho, R. Francisco
Social Sensing and Human in the Loop Profiling during Pandemics: the Vitoria application
23 pages, 12 figures and 4 tables
null
null
null
cs.HC cs.SY eess.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
As the number of smart devices that surround us increases, so do the opportunities to leverage them to create socially- and context-aware systems. Smart devices can be used for better understanding human behaviour and its societal implications. As an example of a scenario in which the role of socially aware systems is crucial, consider the SARS-CoV-2 pandemic. In this paper we present an innovative Human-in-the-Loop Cyber-Physical system that can collect passive data from people, such as physical activity, sleep information, and discrete location, as well as collect self-reported data, and provide individualised user feedback. We also present a three-and-a-half-month field trial implemented in Portugal. This trial was part of a larger-scope project, supported by the Portuguese National Health System, to evaluate the indicators and effects of the pandemic. Results concerning various application usage statistics are presented, comparing the most used applications, their objectives, and their usage patterns in work/non-work periods. Additionally, the time-lagged cross-correlations between some of the collected metrics, Covid events, and media news are explored. This type of application can be used not only in the context of Covid but also in future pandemics, to assist individuals in self-regulating their contagion risk based on personalized information, while also functioning as a means of raising self-awareness of risks related to psychological wellbeing.
[ { "created": "Tue, 5 Jul 2022 09:55:00 GMT", "version": "v1" } ]
2022-07-06
[ [ "Fernandes", "J.", "" ], [ "Silva", "J. Sá", "" ], [ "Rodrigues", "A.", "" ], [ "Boavida", "F.", "" ], [ "Gaspar", "R.", "" ], [ "Godinho", "C.", "" ], [ "Francisco", "R.", "" ] ]
As the number of smart devices that surround us increases, so do the opportunities to leverage them to create socially- and context-aware systems. Smart devices can be used for better understanding human behaviour and its societal implications. As an example of a scenario in which the role of socially aware systems is crucial, consider the SARS-CoV-2 pandemic. In this paper we present an innovative Human-in-the-Loop Cyber-Physical system that can collect passive data from people, such as physical activity, sleep information, and discrete location, as well as collect self-reported data, and provide individualised user feedback. We also present a three-and-a-half-month field trial implemented in Portugal. This trial was part of a larger-scope project, supported by the Portuguese National Health System, to evaluate the indicators and effects of the pandemic. Results concerning various application usage statistics are presented, comparing the most used applications, their objectives, and their usage patterns in work/non-work periods. Additionally, the time-lagged cross-correlations between some of the collected metrics, Covid events, and media news are explored. This type of application can be used not only in the context of Covid but also in future pandemics, to assist individuals in self-regulating their contagion risk based on personalized information, while also functioning as a means of raising self-awareness of risks related to psychological wellbeing.
1712.03950
Quanquan Gu
Yaodong Yu and Difan Zou and Quanquan Gu
Saving Gradient and Negative Curvature Computations: Finding Local Minima More Efficiently
31 pages, 1 table
null
null
null
cs.LG math.OC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a family of nonconvex optimization algorithms that are able to save gradient and negative curvature computations to a large extent, and are guaranteed to find an approximate local minimum with improved runtime complexity. At the core of our algorithms is the division of the entire domain of the objective function into small and large gradient regions: our algorithms perform a gradient-descent-based procedure only in the large gradient region, and perform negative curvature descent only in the small gradient region. Our novel analysis shows that the proposed algorithms can escape the small gradient region in only one negative curvature descent step whenever they enter it, and thus they only need to perform at most $N_{\epsilon}$ negative curvature direction computations, where $N_{\epsilon}$ is the number of times the algorithms enter small gradient regions. For both deterministic and stochastic settings, we show that the proposed algorithms can potentially beat the state-of-the-art local minima finding algorithms. For the finite-sum setting, our algorithm can also outperform the best algorithm in a certain regime.
[ { "created": "Mon, 11 Dec 2017 18:59:09 GMT", "version": "v1" } ]
2017-12-12
[ [ "Yu", "Yaodong", "" ], [ "Zou", "Difan", "" ], [ "Gu", "Quanquan", "" ] ]
We propose a family of nonconvex optimization algorithms that are able to save gradient and negative curvature computations to a large extent, and are guaranteed to find an approximate local minimum with improved runtime complexity. At the core of our algorithms is the division of the entire domain of the objective function into small and large gradient regions: our algorithms perform a gradient-descent-based procedure only in the large gradient region, and perform negative curvature descent only in the small gradient region. Our novel analysis shows that the proposed algorithms can escape the small gradient region in only one negative curvature descent step whenever they enter it, and thus they only need to perform at most $N_{\epsilon}$ negative curvature direction computations, where $N_{\epsilon}$ is the number of times the algorithms enter small gradient regions. For both deterministic and stochastic settings, we show that the proposed algorithms can potentially beat the state-of-the-art local minima finding algorithms. For the finite-sum setting, our algorithm can also outperform the best algorithm in a certain regime.
1609.06065
Yuan Cao
Yonglin Cao, Yuan Cao
Complete classification of $(\delta+\alpha u^2)$-constacyclic codes over $\mathbb{F}_{2^m}[u]/\langle u^4\rangle$ of oddly even length
arXiv admin note: text overlap with arXiv:1511.02369
null
null
FFA-16-175
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Let $\mathbb{F}_{2^m}$ be a finite field of cardinality $2^m$, let $R=\mathbb{F}_{2^m}[u]/\langle u^4\rangle$, and let $n$ be an odd positive integer. For any $\delta,\alpha\in \mathbb{F}_{2^m}^{\times}$, ideals of the ring $R[x]/\langle x^{2n}-(\delta+\alpha u^2)\rangle$ are identified as $(\delta+\alpha u^2)$-constacyclic codes of length $2n$ over $R$. In this paper, an explicit representation and enumeration of all distinct $(\delta+\alpha u^2)$-constacyclic codes of length $2n$ over $R$ are presented.
[ { "created": "Tue, 20 Sep 2016 09:28:04 GMT", "version": "v1" } ]
2016-09-21
[ [ "Cao", "Yonglin", "" ], [ "Cao", "Yuan", "" ] ]
Let $\mathbb{F}_{2^m}$ be a finite field of cardinality $2^m$, let $R=\mathbb{F}_{2^m}[u]/\langle u^4\rangle$, and let $n$ be an odd positive integer. For any $\delta,\alpha\in \mathbb{F}_{2^m}^{\times}$, ideals of the ring $R[x]/\langle x^{2n}-(\delta+\alpha u^2)\rangle$ are identified as $(\delta+\alpha u^2)$-constacyclic codes of length $2n$ over $R$. In this paper, an explicit representation and enumeration of all distinct $(\delta+\alpha u^2)$-constacyclic codes of length $2n$ over $R$ are presented.
2210.00949
Skander Karkar
Skander Karkar and Ibrahim Ayed and Emmanuel de B\'ezenac and Patrick Gallinari
Block-wise Training of Residual Networks via the Minimizing Movement Scheme
1st International Workshop on Practical Deep Learning in the Wild at AAAI 2022
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
End-to-end backpropagation has a few shortcomings: it requires loading the entire model during training, which can be impossible in constrained settings, and suffers from three locking problems (forward locking, update locking and backward locking), which prohibit training the layers in parallel. Solving layer-wise optimization problems can address these problems and has been used in on-device training of neural networks. We develop a layer-wise training method, particularly well-adapted to ResNets, inspired by the minimizing movement scheme for gradient flows in distribution space. The method amounts to a kinetic energy regularization of each block that makes the blocks optimal transport maps and endows them with regularity. It works by alleviating the stagnation problem observed in layer-wise training, whereby greedily-trained early layers overfit and deeper layers stop increasing test accuracy after a certain depth. We show on classification tasks that the test accuracy of block-wise trained ResNets is improved when using our method, whether the blocks are trained sequentially or in parallel.
[ { "created": "Mon, 3 Oct 2022 14:03:56 GMT", "version": "v1" }, { "created": "Tue, 6 Jun 2023 13:48:11 GMT", "version": "v2" } ]
2023-06-07
[ [ "Karkar", "Skander", "" ], [ "Ayed", "Ibrahim", "" ], [ "de Bézenac", "Emmanuel", "" ], [ "Gallinari", "Patrick", "" ] ]
End-to-end backpropagation has a few shortcomings: it requires loading the entire model during training, which can be impossible in constrained settings, and suffers from three locking problems (forward locking, update locking and backward locking), which prohibit training the layers in parallel. Solving layer-wise optimization problems can address these problems and has been used in on-device training of neural networks. We develop a layer-wise training method, particularly well-adapted to ResNets, inspired by the minimizing movement scheme for gradient flows in distribution space. The method amounts to a kinetic energy regularization of each block that makes the blocks optimal transport maps and endows them with regularity. It works by alleviating the stagnation problem observed in layer-wise training, whereby greedily-trained early layers overfit and deeper layers stop increasing test accuracy after a certain depth. We show on classification tasks that the test accuracy of block-wise trained ResNets is improved when using our method, whether the blocks are trained sequentially or in parallel.
2202.06935
Sebastian Gehrmann
Sebastian Gehrmann, Elizabeth Clark, Thibault Sellam
Repairing the Cracked Foundation: A Survey of Obstacles in Evaluation Practices for Generated Text
null
null
null
null
cs.CL cs.AI cs.LG
http://creativecommons.org/licenses/by-sa/4.0/
Evaluation practices in natural language generation (NLG) have many known flaws, but improved evaluation approaches are rarely widely adopted. This issue has become more urgent, since neural NLG models have improved to the point where they can often no longer be distinguished based on the surface-level features that older metrics rely on. This paper surveys the issues with human and automatic model evaluations and with commonly used datasets in NLG that have been pointed out over the past 20 years. We summarize, categorize, and discuss how researchers have been addressing these issues and what their findings mean for the current state of model evaluations. Building on those insights, we lay out a long-term vision for NLG evaluation and propose concrete steps for researchers to improve their evaluation processes. Finally, we analyze 66 NLG papers from recent NLP conferences in how well they already follow these suggestions and identify which areas require more drastic changes to the status quo.
[ { "created": "Mon, 14 Feb 2022 18:51:07 GMT", "version": "v1" } ]
2022-02-15
[ [ "Gehrmann", "Sebastian", "" ], [ "Clark", "Elizabeth", "" ], [ "Sellam", "Thibault", "" ] ]
Evaluation practices in natural language generation (NLG) have many known flaws, but improved evaluation approaches are rarely widely adopted. This issue has become more urgent, since neural NLG models have improved to the point where they can often no longer be distinguished based on the surface-level features that older metrics rely on. This paper surveys the issues with human and automatic model evaluations and with commonly used datasets in NLG that have been pointed out over the past 20 years. We summarize, categorize, and discuss how researchers have been addressing these issues and what their findings mean for the current state of model evaluations. Building on those insights, we lay out a long-term vision for NLG evaluation and propose concrete steps for researchers to improve their evaluation processes. Finally, we analyze 66 NLG papers from recent NLP conferences in how well they already follow these suggestions and identify which areas require more drastic changes to the status quo.
2009.03136
Matthew Ciolino
Josh Kalin, Matthew Ciolino, David Noever, Gerry Dozier
Black Box to White Box: Discover Model Characteristics Based on Strategic Probing
4 Pages, 3 Figure, IEEE Format, Ai4i 2020
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In Machine Learning, White Box Adversarial Attacks rely on knowledge of the underlying model attributes. This work focuses on discovering two distinct pieces of model information: the underlying architecture and the primary training dataset. With the process in this paper, a structured set of input probes and the output of the model become the training data for a deep classifier. Two subdomains in Machine Learning are explored: image-based classifiers and text transformers with GPT-2. With image classification, the focus is on exploring commonly deployed architectures and datasets available in popular public libraries. Using a single transformer architecture with multiple levels of parameters, text generation is explored by fine-tuning on different datasets. Each dataset explored in image and text is distinguishable from the others. Diversity in text transformer outputs implies further research is needed to successfully classify architecture attribution in the text domain.
[ { "created": "Mon, 7 Sep 2020 14:44:28 GMT", "version": "v1" } ]
2020-09-08
[ [ "Kalin", "Josh", "" ], [ "Ciolino", "Matthew", "" ], [ "Noever", "David", "" ], [ "Dozier", "Gerry", "" ] ]
In Machine Learning, White Box Adversarial Attacks rely on knowledge of the underlying model attributes. This work focuses on discovering two distinct pieces of model information: the underlying architecture and the primary training dataset. With the process in this paper, a structured set of input probes and the output of the model become the training data for a deep classifier. Two subdomains in Machine Learning are explored: image-based classifiers and text transformers with GPT-2. With image classification, the focus is on exploring commonly deployed architectures and datasets available in popular public libraries. Using a single transformer architecture with multiple levels of parameters, text generation is explored by fine-tuning on different datasets. Each dataset explored in image and text is distinguishable from the others. Diversity in text transformer outputs implies further research is needed to successfully classify architecture attribution in the text domain.
2309.07473
Chuanruo Ning
Chuanruo Ning, Ruihai Wu, Haoran Lu, Kaichun Mo, Hao Dong
Where2Explore: Few-shot Affordance Learning for Unseen Novel Categories of Articulated Objects
NeurIPS 2023
null
null
null
cs.RO cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Articulated object manipulation is a fundamental yet challenging task in robotics. Due to significant geometric and semantic variations across object categories, previous manipulation models struggle to generalize to novel categories. Few-shot learning is a promising solution for alleviating this issue by allowing robots to perform a few interactions with unseen objects. However, extant approaches often necessitate costly and inefficient test-time interactions with each unseen instance. Recognizing this limitation, we observe that despite their distinct shapes, different categories often share similar local geometries essential for manipulation, such as pullable handles and graspable edges - a factor typically underutilized in previous few-shot learning works. To harness this commonality, we introduce 'Where2Explore', an affordance learning framework that effectively explores novel categories with minimal interactions on a limited number of instances. Our framework explicitly estimates the geometric similarity across different categories, identifying local areas that differ from shapes in the training categories for efficient exploration while concurrently transferring affordance knowledge to similar parts of the objects. Extensive experiments in simulated and real-world environments demonstrate our framework's capacity for efficient few-shot exploration and generalization.
[ { "created": "Thu, 14 Sep 2023 07:11:58 GMT", "version": "v1" }, { "created": "Fri, 15 Dec 2023 13:36:46 GMT", "version": "v2" } ]
2023-12-18
[ [ "Ning", "Chuanruo", "" ], [ "Wu", "Ruihai", "" ], [ "Lu", "Haoran", "" ], [ "Mo", "Kaichun", "" ], [ "Dong", "Hao", "" ] ]
Articulated object manipulation is a fundamental yet challenging task in robotics. Due to significant geometric and semantic variations across object categories, previous manipulation models struggle to generalize to novel categories. Few-shot learning is a promising solution for alleviating this issue by allowing robots to perform a few interactions with unseen objects. However, extant approaches often necessitate costly and inefficient test-time interactions with each unseen instance. Recognizing this limitation, we observe that despite their distinct shapes, different categories often share similar local geometries essential for manipulation, such as pullable handles and graspable edges - a factor typically underutilized in previous few-shot learning works. To harness this commonality, we introduce 'Where2Explore', an affordance learning framework that effectively explores novel categories with minimal interactions on a limited number of instances. Our framework explicitly estimates the geometric similarity across different categories, identifying local areas that differ from shapes in the training categories for efficient exploration while concurrently transferring affordance knowledge to similar parts of the objects. Extensive experiments in simulated and real-world environments demonstrate our framework's capacity for efficient few-shot exploration and generalization.
2407.12202
Kento Kawaharazuka
Kento Kawaharazuka and Toru Ogawa and Cota Nabeshima
Tool Shape Optimization through Backpropagation of Neural Network
Accepted at IROS2020
null
10.1109/IROS45743.2020.9341583
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
When executing a certain task, human beings can choose or make an appropriate tool to achieve the task. This research especially addresses the optimization of tool shape for robotic tool-use. We propose a method in which a robot obtains an optimized tool shape, tool trajectory, or both, depending on a given task. The feature of our method is that a transition of the task state when the robot moves a certain tool along a certain trajectory is represented by a deep neural network. We applied this method to object manipulation tasks on a 2D plane, and verified that appropriate tool shapes are generated by using this novel method.
[ { "created": "Tue, 16 Jul 2024 22:01:59 GMT", "version": "v1" } ]
2024-07-18
[ [ "Kawaharazuka", "Kento", "" ], [ "Ogawa", "Toru", "" ], [ "Nabeshima", "Cota", "" ] ]
When executing a certain task, human beings can choose or make an appropriate tool to achieve the task. This research especially addresses the optimization of tool shape for robotic tool-use. We propose a method in which a robot obtains an optimized tool shape, tool trajectory, or both, depending on a given task. The feature of our method is that a transition of the task state when the robot moves a certain tool along a certain trajectory is represented by a deep neural network. We applied this method to object manipulation tasks on a 2D plane, and verified that appropriate tool shapes are generated by using this novel method.
2004.02546
Erik H\"ark\"onen
Erik H\"ark\"onen, Aaron Hertzmann, Jaakko Lehtinen, Sylvain Paris
GANSpace: Discovering Interpretable GAN Controls
Accepted to NeurIPS 2020
Advances in Neural Information Processing Systems 33 (NeurIPS 2020), 9841-9850
null
null
cs.CV cs.GR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper describes a simple technique to analyze Generative Adversarial Networks (GANs) and create interpretable controls for image synthesis, such as change of viewpoint, aging, lighting, and time of day. We identify important latent directions based on Principal Components Analysis (PCA) applied either in latent space or feature space. Then, we show that a large number of interpretable controls can be defined by layer-wise perturbation along the principal directions. Moreover, we show that BigGAN can be controlled with layer-wise inputs in a StyleGAN-like manner. We show results on different GANs trained on various datasets, and demonstrate good qualitative matches to edit directions found through earlier supervised approaches.
[ { "created": "Mon, 6 Apr 2020 10:41:44 GMT", "version": "v1" }, { "created": "Fri, 17 Jul 2020 11:10:27 GMT", "version": "v2" }, { "created": "Mon, 14 Dec 2020 10:13:42 GMT", "version": "v3" } ]
2022-07-05
[ [ "Härkönen", "Erik", "" ], [ "Hertzmann", "Aaron", "" ], [ "Lehtinen", "Jaakko", "" ], [ "Paris", "Sylvain", "" ] ]
This paper describes a simple technique to analyze Generative Adversarial Networks (GANs) and create interpretable controls for image synthesis, such as change of viewpoint, aging, lighting, and time of day. We identify important latent directions based on Principal Components Analysis (PCA) applied either in latent space or feature space. Then, we show that a large number of interpretable controls can be defined by layer-wise perturbation along the principal directions. Moreover, we show that BigGAN can be controlled with layer-wise inputs in a StyleGAN-like manner. We show results on different GANs trained on various datasets, and demonstrate good qualitative matches to edit directions found through earlier supervised approaches.
2203.12601
Suraj Nair
Suraj Nair, Aravind Rajeswaran, Vikash Kumar, Chelsea Finn, Abhinav Gupta
R3M: A Universal Visual Representation for Robot Manipulation
Conference on Robot Learning (CoRL) 2022
null
null
null
cs.RO cs.AI cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study how visual representations pre-trained on diverse human video data can enable data-efficient learning of downstream robotic manipulation tasks. Concretely, we pre-train a visual representation using the Ego4D human video dataset using a combination of time-contrastive learning, video-language alignment, and an L1 penalty to encourage sparse and compact representations. The resulting representation, R3M, can be used as a frozen perception module for downstream policy learning. Across a suite of 12 simulated robot manipulation tasks, we find that R3M improves task success by over 20% compared to training from scratch and by over 10% compared to state-of-the-art visual representations like CLIP and MoCo. Furthermore, R3M enables a Franka Emika Panda arm to learn a range of manipulation tasks in a real, cluttered apartment given just 20 demonstrations. Code and pre-trained models are available at https://tinyurl.com/robotr3m.
[ { "created": "Wed, 23 Mar 2022 17:55:09 GMT", "version": "v1" }, { "created": "Mon, 18 Apr 2022 22:39:13 GMT", "version": "v2" }, { "created": "Fri, 18 Nov 2022 05:57:09 GMT", "version": "v3" } ]
2022-11-21
[ [ "Nair", "Suraj", "" ], [ "Rajeswaran", "Aravind", "" ], [ "Kumar", "Vikash", "" ], [ "Finn", "Chelsea", "" ], [ "Gupta", "Abhinav", "" ] ]
We study how visual representations pre-trained on diverse human video data can enable data-efficient learning of downstream robotic manipulation tasks. Concretely, we pre-train a visual representation using the Ego4D human video dataset using a combination of time-contrastive learning, video-language alignment, and an L1 penalty to encourage sparse and compact representations. The resulting representation, R3M, can be used as a frozen perception module for downstream policy learning. Across a suite of 12 simulated robot manipulation tasks, we find that R3M improves task success by over 20% compared to training from scratch and by over 10% compared to state-of-the-art visual representations like CLIP and MoCo. Furthermore, R3M enables a Franka Emika Panda arm to learn a range of manipulation tasks in a real, cluttered apartment given just 20 demonstrations. Code and pre-trained models are available at https://tinyurl.com/robotr3m.
2106.04963
Yang Tao
Tao Yang, Feifan Yang, Haolan Ouyang, Xiaojun Quan
Psycholinguistic Tripartite Graph Network for Personality Detection
Accepted by ACL 2021
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Most of the recent work on personality detection from online posts adopts multifarious deep neural networks to represent the posts and builds predictive models in a data-driven manner, without the exploitation of psycholinguistic knowledge that may unveil the connections between one's language usage and one's psychological traits. In this paper, we propose a psycholinguistic knowledge-based tripartite graph network, TrigNet, which consists of a tripartite graph network and a BERT-based graph initializer. The graph network injects structural psycholinguistic knowledge from LIWC, a computerized instrument for psycholinguistic analysis, by constructing a heterogeneous tripartite graph. The graph initializer is employed to provide initial embeddings for the graph nodes. To reduce the computational cost in graph learning, we further propose a novel flow graph attention network (GAT) that only transmits messages between neighboring parties in the tripartite graph. Benefiting from the tripartite graph, TrigNet can aggregate post information from a psychological perspective, which is a novel way of exploiting domain knowledge. Extensive experiments on two datasets show that TrigNet outperforms the existing state-of-the-art model by 3.47 and 2.10 points in average F1. Moreover, the flow GAT reduces the FLOPS and Memory measures by 38% and 32%, respectively, in comparison to the original GAT in our setting.
[ { "created": "Wed, 9 Jun 2021 10:18:50 GMT", "version": "v1" } ]
2021-06-10
[ [ "Yang", "Tao", "" ], [ "Yang", "Feifan", "" ], [ "Ouyang", "Haolan", "" ], [ "Quan", "Xiaojun", "" ] ]
Most of the recent work on personality detection from online posts adopts multifarious deep neural networks to represent the posts and builds predictive models in a data-driven manner, without the exploitation of psycholinguistic knowledge that may unveil the connections between one's language usage and one's psychological traits. In this paper, we propose a psycholinguistic knowledge-based tripartite graph network, TrigNet, which consists of a tripartite graph network and a BERT-based graph initializer. The graph network injects structural psycholinguistic knowledge from LIWC, a computerized instrument for psycholinguistic analysis, by constructing a heterogeneous tripartite graph. The graph initializer is employed to provide initial embeddings for the graph nodes. To reduce the computational cost in graph learning, we further propose a novel flow graph attention network (GAT) that only transmits messages between neighboring parties in the tripartite graph. Benefiting from the tripartite graph, TrigNet can aggregate post information from a psychological perspective, which is a novel way of exploiting domain knowledge. Extensive experiments on two datasets show that TrigNet outperforms the existing state-of-the-art model by 3.47 and 2.10 points in average F1. Moreover, the flow GAT reduces the FLOPS and Memory measures by 38% and 32%, respectively, in comparison to the original GAT in our setting.
1711.03488
Oscar Carrasco
Shahid Mumtaz, Kazi Saidul Huq, Jonathan Rodriguez, Paulo Marques, Ayman Radwan, Keith Briggs, Michael Fitch, Andreas Georgakopoulos, Ioannis-Prodromos Belikaidis, Panagiotis Vlacheas, Dimitrios Kelaidonis, Evangelos Kosmatos, Serafim Kotrotsos, Stavroula Vassaki, Yiouli Kritikou, Panagiotis Demestichas, Kostas Tsagkaris, Evangelia Tzifa, Aikaterini Demesticha, Vera Stavroulaki, Athina Ropodi, Evangelos Argoudelis, Marinos Galiatsatos, Aristotelis Margaris, George Paitaris, Dimitrios Kardaris, Ioannis Kaffes, Haeyoung Lee, Klaus Moessner, Valerio Frascolla, Bismark Okyere, Salva D\'iaz, Oscar Carrasco, Federico Miatton, Antonio De Domenico, Benoit Miscopein, Thanasis Oikonomou, Dimitrios Kritharidis, Harald Weigold
D3.2: SPEED-5G enhanced functional and system architecture, scenarios and performance evaluation metrics
null
null
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This deliverable contains a detailed description of the use cases considered in SPEED-5G, which will be used as a basis for demonstration in the project. These use cases are dynamic channel selection, load balancing, and carrier aggregation. This deliverable also explains the SPEED-5G architecture design principles, which are based on software-defined networking and network function virtualisation. The degree of virtualisation is further illustrated by a number of novel contributions from the involved partners. Finally, KPIs for each use case are presented, along with a description of how these KPIs can support the 5G-PPP KPIs.
[ { "created": "Thu, 9 Nov 2017 17:38:07 GMT", "version": "v1" }, { "created": "Tue, 14 Nov 2017 08:23:40 GMT", "version": "v2" } ]
2017-11-16
[ [ "Mumtaz", "Shahid", "" ], [ "Saidul", "Kazi", "" ], [ "Rodriguez", "Huq Jonathan", "" ], [ "Marques", "Paulo", "" ], [ "Radwan", "Ayman", "" ], [ "BT", "Keith Briggs Michael Fitch", "" ], [ "Georgakopoulos", "Andreas", "" ], [ "Belikaidis", "Ioannis-Prodromos", "" ], [ "Vlacheas", "Panagiotis", "" ], [ "Kelaidonis", "Dimitrios", "" ], [ "Kosmatos", "Evangelos", "" ], [ "Kotrotsos", "Serafim", "" ], [ "Vassaki", "Stavroula", "" ], [ "Kritikou", "Yiouli", "" ], [ "Demestichas", "Panagiotis", "" ], [ "Tsagkaris", "Kostas", "" ], [ "Tzifa", "Evangelia", "" ], [ "Demesticha", "Aikaterini", "" ], [ "Stavroulaki", "Vera", "" ], [ "Ropodi", "Athina", "" ], [ "Argoudelis", "Evangelos", "" ], [ "Galiatsatos", "Marinos", "" ], [ "Margaris", "Aristotelis", "" ], [ "Paitaris", "George", "" ], [ "Kardaris", "Dimitrios", "" ], [ "Kaffes", "Ioannis", "" ], [ "Klaus", "Haeyoung Lee", "" ], [ "Valerio", "Moessner Unis", "" ], [ "Bismark", "Frascolla", "" ], [ "Intel", "Okyere", "" ], [ "Díaz", "Salva", "" ], [ "Carrasco", "Oscar", "" ], [ "Miatton", "Federico", "" ], [ "Antonio", "Sistel", "" ], [ "Benoit", "Dedomenico", "" ], [ "Cea", "Miscopein", "" ], [ "Oikonomou", "Thanasis", "" ], [ "Kritharidis", "Dimitrios", "" ], [ "Weigold", "Harald", "" ] ]
This deliverable contains a detailed description of the use cases considered in SPEED-5G, which will be used as a basis for demonstration in the project. These use cases are dynamic channel selection, load balancing, and carrier aggregation. This deliverable also explains the SPEED-5G architecture design principles, which are based on software-defined networking and network function virtualisation. The degree of virtualisation is further illustrated by a number of novel contributions from the involved partners. Finally, KPIs for each use case are presented, along with a description of how these KPIs can support the 5G-PPP KPIs.
1208.1326
Brian Butler
Brian K. Butler and Paul H. Siegel
Numerical Issues Affecting LDPC Error Floors
7 pages, 5 figures. Submitted to IEEE Globecom (Selected Area of Communications Data Storage Track)
null
null
null
cs.IT cs.NA math.IT math.NA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Numerical issues related to the occurrence of error floors in floating-point simulations of belief propagation (BP) decoders are examined. Careful processing of messages corresponding to highly-certain bit values can sometimes reduce error floors by several orders of magnitude. Computational solutions for properly handling such messages are provided for the sum-product algorithm (SPA) and several variants.
[ { "created": "Tue, 7 Aug 2012 02:41:54 GMT", "version": "v1" } ]
2012-08-08
[ [ "Butler", "Brian K.", "" ], [ "Siegel", "Paul H.", "" ] ]
Numerical issues related to the occurrence of error floors in floating-point simulations of belief propagation (BP) decoders are examined. Careful processing of messages corresponding to highly-certain bit values can sometimes reduce error floors by several orders of magnitude. Computational solutions for properly handling such messages are provided for the sum-product algorithm (SPA) and several variants.
2203.11447
Jasper Brown
Jasper Brown, Cameron Clark, Sabrina Lomax, Khalid Rafique, Salah Sukkarieh
Manipulating UAV Imagery for Satellite Model Training, Calibration and Testing
16 pages, 7 figures, 2 tables
null
null
null
cs.CV eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Modern livestock farming is increasingly data driven and frequently relies on efficient remote sensing to gather data over wide areas. High resolution satellite imagery is one such data source, which is becoming more accessible for farmers as coverage increases and cost falls. Such images can be used to detect and track animals, monitor pasture changes, and understand land use. Many of the data driven models being applied to these tasks require ground truthing at resolutions higher than satellites can provide. Simultaneously, there is a lack of available aerial imagery focused on farmland changes that occur over days or weeks, such as herd movement. With this goal in mind, we present a new multi-temporal dataset of high resolution UAV imagery which is artificially degraded to match satellite data quality. An empirical blurring metric is used to calibrate the degradation process against actual satellite imagery of the area. UAV surveys were flown repeatedly over several weeks, for specific farm locations. This 5cm/pixel data is sufficiently high resolution to accurately ground truth cattle locations, and other factors such as grass cover. From 33 wide area UAV surveys, 1869 patches were extracted and artificially degraded using an accurate satellite optical model to simulate satellite data. Geographic patches from multiple time periods are aligned and presented as sets, providing a multi-temporal dataset that can be used for detecting changes on farms. The geo-referenced images and 27,853 manually annotated cattle labels are made publicly available.
[ { "created": "Tue, 22 Mar 2022 03:57:02 GMT", "version": "v1" } ]
2022-04-12
[ [ "Brown", "Jasper", "" ], [ "Clark", "Cameron", "" ], [ "Lomax", "Sabrina", "" ], [ "Rafique", "Khalid", "" ], [ "Sukkarieh", "Salah", "" ] ]
Modern livestock farming is increasingly data driven and frequently relies on efficient remote sensing to gather data over wide areas. High resolution satellite imagery is one such data source, which is becoming more accessible for farmers as coverage increases and cost falls. Such images can be used to detect and track animals, monitor pasture changes, and understand land use. Many of the data driven models being applied to these tasks require ground truthing at resolutions higher than satellites can provide. Simultaneously, there is a lack of available aerial imagery focused on farmland changes that occur over days or weeks, such as herd movement. With this goal in mind, we present a new multi-temporal dataset of high resolution UAV imagery which is artificially degraded to match satellite data quality. An empirical blurring metric is used to calibrate the degradation process against actual satellite imagery of the area. UAV surveys were flown repeatedly over several weeks, for specific farm locations. This 5cm/pixel data is sufficiently high resolution to accurately ground truth cattle locations, and other factors such as grass cover. From 33 wide area UAV surveys, 1869 patches were extracted and artificially degraded using an accurate satellite optical model to simulate satellite data. Geographic patches from multiple time periods are aligned and presented as sets, providing a multi-temporal dataset that can be used for detecting changes on farms. The geo-referenced images and 27,853 manually annotated cattle labels are made publicly available.
2301.05316
Pedro Enrique Iturria Rivera Mr.
Md Arafat Habib, Hao Zhou, Pedro Enrique Iturria Rivera, Medhat Elsayed, Majid Bavand, Raimundas Gaigalas, Steve Furr, Melike Erol-Kantarci
Traffic Steering for 5G Multi-RAT Deployments using Deep Reinforcement Learning
6 pages, 6 figures and 1 table. Accepted in CCNC'23
null
null
null
cs.NI
http://creativecommons.org/licenses/by/4.0/
In 5G non-standalone mode, traffic steering is a critical technique to take full advantage of 5G new radio while optimizing dual connectivity of 5G and LTE networks in multiple radio access technology (RAT) deployments. An intelligent traffic steering mechanism can play an important role in maintaining a seamless user experience by dynamically choosing the appropriate RAT (5G or LTE) for a specific user traffic flow with certain QoS requirements. In this paper, we propose a novel traffic steering mechanism based on Deep Q-learning that can automate traffic steering decisions in a dynamic environment having multiple RATs, and maintain diverse QoS requirements for different traffic classes. The proposed method is compared with two baseline algorithms: a heuristic-based algorithm and Q-learning-based traffic steering. Compared to the Q-learning and heuristic baselines, our results show that the proposed algorithm achieves better performance in terms of 6% and 10% higher average system throughput, and 23% and 33% lower network delay, respectively.
[ { "created": "Thu, 12 Jan 2023 22:02:25 GMT", "version": "v1" } ]
2023-01-16
[ [ "Habib", "Md Arafat", "" ], [ "Zhou", "Hao", "" ], [ "Rivera", "Pedro Enrique Iturria", "" ], [ "Elsayed", "Medhat", "" ], [ "Bavand", "Majid", "" ], [ "Gaigalas", "Raimundas", "" ], [ "Furr", "Steve", "" ], [ "Erol-Kantarci", "Melike", "" ] ]
In 5G non-standalone mode, traffic steering is a critical technique to take full advantage of 5G new radio while optimizing dual connectivity of 5G and LTE networks in multiple radio access technology (RAT) deployments. An intelligent traffic steering mechanism can play an important role in maintaining a seamless user experience by dynamically choosing the appropriate RAT (5G or LTE) for a specific user traffic flow with certain QoS requirements. In this paper, we propose a novel traffic steering mechanism based on Deep Q-learning that can automate traffic steering decisions in a dynamic environment having multiple RATs, and maintain diverse QoS requirements for different traffic classes. The proposed method is compared with two baseline algorithms: a heuristic-based algorithm and Q-learning-based traffic steering. Compared to the Q-learning and heuristic baselines, our results show that the proposed algorithm achieves better performance in terms of 6% and 10% higher average system throughput, and 23% and 33% lower network delay, respectively.
2112.13340
Baofeng Wu
Baofeng Wu
Proof of a conjecture on a special class of matrices over commutative rings of characteristic 2
null
null
null
null
cs.CR cs.IT math.CO math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this note, we prove the conjecture posed by Keller and Rosemarin at Eurocrypt 2021 on the nullity of a matrix polynomial of a block matrix with Hadamard-type blocks over commutative rings of characteristic 2. This therefore confirms the conjectured optimal bound on the dimension of the invariant subspace of the Starkad cipher using the HADES design strategy. We also give characterizations of the algebraic structure formed by Hadamard matrices over commutative rings.
[ { "created": "Sun, 26 Dec 2021 09:39:32 GMT", "version": "v1" } ]
2021-12-28
[ [ "Wu", "Baofeng", "" ] ]
In this note, we prove the conjecture posed by Keller and Rosemarin at Eurocrypt 2021 on the nullity of a matrix polynomial of a block matrix with Hadamard-type blocks over commutative rings of characteristic 2. This therefore confirms the conjectured optimal bound on the dimension of the invariant subspace of the Starkad cipher using the HADES design strategy. We also give characterizations of the algebraic structure formed by Hadamard matrices over commutative rings.
1610.08309
Edita Pelantova
Christiane Frougny, Marta Pavelka, Edita Pelantova, Milena Svobodova
On-line algorithms for multiplication and division in real and complex numeration systems
Extended version of contribution on 23rd IEEE Symposium on Computer Arithmetic ARITH23
Discrete Mathematics & Theoretical Computer Science, Vol. 21 no. 3 , Discrete Algorithms (June 20, 2019) dmtcs:4313
10.23638/DMTCS-21-3-14
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A positional numeration system is given by a base and by a set of digits. The base is a real or complex number $\beta$ such that $|\beta|>1$, and the digit set $A$ is a finite set of digits including $0$. Thus a number can be seen as a finite or infinite string of digits. An on-line algorithm processes the input piece-by-piece in a serial fashion. On-line arithmetic, introduced by Trivedi and Ercegovac, is a mode of computation where operands and results flow through arithmetic units in a digit serial manner, starting with the most significant digit. In this paper, we first formulate a generalized version of the on-line algorithms for multiplication and division of Trivedi and Ercegovac for the cases that $\beta$ is any real or complex number, and digits are real or complex. We then define the so-called OL Property, and show that if $(\beta, A)$ has the OL Property, then on-line multiplication and division are feasible by the Trivedi-Ercegovac algorithms. For a real base $\beta$ and a digit set $A$ of contiguous integers, the system $(\beta, A)$ has the OL Property if $\# A > |\beta|$. For a complex base $\beta$ and symmetric digit set $A$ of contiguous integers, the system $(\beta, A)$ has the OL Property if $\# A > \beta\overline{\beta} + |\beta + \overline{\beta}|$. Provided that addition and subtraction are realizable in parallel in the system $(\beta, A)$ and that preprocessing of the denominator is possible, our on-line algorithms for multiplication and division have linear time complexity. Three examples are presented in detail: base $\beta=\frac{3+\sqrt{5}}{2}$ with digits $A=\{-1,0,1\}$; base $\beta=2i$ with digits $A = \{-2,-1, 0,1,2\}$; and base $\beta = -\frac{3}{2} + i \frac{\sqrt{3}}{2} = -1 + \omega$, where $\omega = \exp{\frac{2i\pi}{3}}$, with digits $A = \{0, \pm 1, \pm \omega, \pm \omega^2 \}$.
[ { "created": "Wed, 26 Oct 2016 13:05:12 GMT", "version": "v1" }, { "created": "Sun, 18 Feb 2018 11:04:16 GMT", "version": "v2" }, { "created": "Fri, 25 Jan 2019 10:07:38 GMT", "version": "v3" }, { "created": "Mon, 20 May 2019 09:12:15 GMT", "version": "v4" }, { "created": "Tue, 11 Jun 2019 16:16:23 GMT", "version": "v5" } ]
2023-06-22
[ [ "Frougny", "Christiane", "" ], [ "Pavelka", "Marta", "" ], [ "Pelantova", "Edita", "" ], [ "Svobodova", "Milena", "" ] ]
A positional numeration system is given by a base and by a set of digits. The base is a real or complex number $\beta$ such that $|\beta|>1$, and the digit set $A$ is a finite set of digits including $0$. Thus a number can be seen as a finite or infinite string of digits. An on-line algorithm processes the input piece-by-piece in a serial fashion. On-line arithmetic, introduced by Trivedi and Ercegovac, is a mode of computation where operands and results flow through arithmetic units in a digit serial manner, starting with the most significant digit. In this paper, we first formulate a generalized version of the on-line algorithms for multiplication and division of Trivedi and Ercegovac for the cases that $\beta$ is any real or complex number, and digits are real or complex. We then define the so-called OL Property, and show that if $(\beta, A)$ has the OL Property, then on-line multiplication and division are feasible by the Trivedi-Ercegovac algorithms. For a real base $\beta$ and a digit set $A$ of contiguous integers, the system $(\beta, A)$ has the OL Property if $\# A > |\beta|$. For a complex base $\beta$ and symmetric digit set $A$ of contiguous integers, the system $(\beta, A)$ has the OL Property if $\# A > \beta\overline{\beta} + |\beta + \overline{\beta}|$. Provided that addition and subtraction are realizable in parallel in the system $(\beta, A)$ and that preprocessing of the denominator is possible, our on-line algorithms for multiplication and division have linear time complexity. Three examples are presented in detail: base $\beta=\frac{3+\sqrt{5}}{2}$ with digits $A=\{-1,0,1\}$; base $\beta=2i$ with digits $A = \{-2,-1, 0,1,2\}$; and base $\beta = -\frac{3}{2} + i \frac{\sqrt{3}}{2} = -1 + \omega$, where $\omega = \exp{\frac{2i\pi}{3}}$, with digits $A = \{0, \pm 1, \pm \omega, \pm \omega^2 \}$.
0805.3237
Sebastien Collette
S. Collette and L. Cucu and J. Goossens
Integrating Job Parallelism in Real-Time Scheduling Theory
null
null
null
null
cs.OS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We investigate the global scheduling of sporadic, implicit-deadline, real-time task systems on multiprocessor platforms. We provide a task model which integrates job parallelism. We prove that the time complexity of the feasibility problem for these systems is linear in the number of (sporadic) tasks for a fixed number of processors. We propose a theoretically optimal scheduling algorithm (i.e., with preemptions and migrations neglected). Moreover, we provide an exact feasibility utilization bound. Lastly, we propose a technique to limit the number of migrations and preemptions.
[ { "created": "Wed, 21 May 2008 09:38:15 GMT", "version": "v1" } ]
2008-05-22
[ [ "Collette", "S.", "" ], [ "Cucu", "L.", "" ], [ "Goossens", "J.", "" ] ]
We investigate the global scheduling of sporadic, implicit-deadline, real-time task systems on multiprocessor platforms. We provide a task model which integrates job parallelism. We prove that the time complexity of the feasibility problem for these systems is linear in the number of (sporadic) tasks for a fixed number of processors. We propose a theoretically optimal scheduling algorithm (i.e., with preemptions and migrations neglected). Moreover, we provide an exact feasibility utilization bound. Lastly, we propose a technique to limit the number of migrations and preemptions.
1706.03952
Charalambos Themistocleous
Jean-Philippe Bernardy and Charalambos Themistocleous
Modelling prosodic structure using Artificial Neural Networks
4 pages, 3 figures, Experimental linguistics 2017
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The ability to accurately perceive whether a speaker is asking a question or is making a statement is crucial for any successful interaction. However, learning and classifying tonal patterns has been a challenging task for automatic speech recognition and for models of tonal representation, as tonal contours are characterized by significant variation. This paper provides a classification model of Cypriot Greek questions and statements. We evaluate two state-of-the-art network architectures: a Long Short-Term Memory (LSTM) network and a convolutional network (ConvNet). The ConvNet outperforms the LSTM in the classification task and exhibits excellent performance with 95% classification accuracy.
[ { "created": "Tue, 13 Jun 2017 08:28:39 GMT", "version": "v1" }, { "created": "Thu, 15 Jun 2017 12:49:57 GMT", "version": "v2" } ]
2017-06-16
[ [ "Bernardy", "Jean-Philippe", "" ], [ "Themistocleous", "Charalambos", "" ] ]
The ability to accurately perceive whether a speaker is asking a question or is making a statement is crucial for any successful interaction. However, learning and classifying tonal patterns has been a challenging task for automatic speech recognition and for models of tonal representation, as tonal contours are characterized by significant variation. This paper provides a classification model of Cypriot Greek questions and statements. We evaluate two state-of-the-art network architectures: a Long Short-Term Memory (LSTM) network and a convolutional network (ConvNet). The ConvNet outperforms the LSTM in the classification task and exhibits excellent performance with 95% classification accuracy.
2112.12142
Priyam Shah Mr
Priyam Shah, Jie Ye, Xian-He Sun
Survey the storage systems used in HPC and BDA ecosystems
13 pages, 10 figures, 7 tables
null
null
null
cs.DC
http://creativecommons.org/licenses/by-nc-sa/4.0/
Advancements in the HPC and BDA ecosystems demand a better understanding of storage systems in order to plan effective solutions. To make applications access data more efficiently for computation, HPC and BDA ecosystems adopt different storage systems. Each storage system has its pros and cons. Therefore, it is worthwhile to explore the storage systems used in HPC and BDA respectively, and to understand how such storage systems handle data consistency and fault tolerance at massive scale. In this paper, we survey four storage systems: Lustre, Ceph, HDFS, and CockroachDB. Lustre and HDFS are among the most prominent file systems in the HPC and BDA ecosystems. Ceph is an upcoming file system that is being used by supercomputers. CockroachDB is based on NewSQL, a technique being used in industry for BDA applications. The study helps us to understand the underlying architecture of these storage systems and the building blocks used to create them. The protocols and mechanisms used for data storage, data access, data consistency, fault tolerance, and recovery from failures are also overviewed. This comparative study will help system designers understand the key features and architectural goals of these storage systems in order to select better storage solutions.
[ { "created": "Wed, 22 Dec 2021 18:57:18 GMT", "version": "v1" }, { "created": "Thu, 23 Dec 2021 18:25:38 GMT", "version": "v2" } ]
2021-12-24
[ [ "Shah", "Priyam", "" ], [ "Ye", "Jie", "" ], [ "Sun", "Xian-He", "" ] ]
Advancements in the HPC and BDA ecosystems demand a better understanding of storage systems in order to plan effective solutions. To make applications access data more efficiently for computation, HPC and BDA ecosystems adopt different storage systems. Each storage system has its pros and cons. Therefore, it is worthwhile to explore the storage systems used in HPC and BDA respectively, and to understand how such storage systems handle data consistency and fault tolerance at massive scale. In this paper, we survey four storage systems: Lustre, Ceph, HDFS, and CockroachDB. Lustre and HDFS are among the most prominent file systems in the HPC and BDA ecosystems. Ceph is an upcoming file system that is being used by supercomputers. CockroachDB is based on NewSQL, a technique being used in industry for BDA applications. The study helps us to understand the underlying architecture of these storage systems and the building blocks used to create them. The protocols and mechanisms used for data storage, data access, data consistency, fault tolerance, and recovery from failures are also overviewed. This comparative study will help system designers understand the key features and architectural goals of these storage systems in order to select better storage solutions.
2302.04288
Jiaqi Ma
Satyapriya Krishna, Jiaqi Ma, Himabindu Lakkaraju
Towards Bridging the Gaps between the Right to Explanation and the Right to be Forgotten
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Right to Explanation and the Right to be Forgotten are two important principles outlined to regulate algorithmic decision making and data usage in real-world applications. While the right to explanation allows individuals to request an actionable explanation for an algorithmic decision, the right to be forgotten grants them the right to ask for their data to be deleted from all the databases and models of an organization. Intuitively, enforcing the right to be forgotten may trigger model updates which in turn invalidate previously provided explanations, thus violating the right to explanation. In this work, we investigate the technical implications arising due to the interference between the two aforementioned regulatory principles, and propose the first algorithmic framework to resolve the tension between them. To this end, we formulate a novel optimization problem to generate explanations that are robust to model updates due to the removal of training data instances by data deletion requests. We then derive an efficient approximation algorithm to handle the combinatorial complexity of this optimization problem. We theoretically demonstrate that our method generates explanations that are provably robust to worst-case data deletion requests with bounded costs in case of linear models and certain classes of non-linear models. Extensive experimentation with real-world datasets demonstrates the efficacy of the proposed framework.
[ { "created": "Wed, 8 Feb 2023 19:03:00 GMT", "version": "v1" }, { "created": "Fri, 10 Feb 2023 03:24:50 GMT", "version": "v2" } ]
2023-02-13
[ [ "Krishna", "Satyapriya", "" ], [ "Ma", "Jiaqi", "" ], [ "Lakkaraju", "Himabindu", "" ] ]
The Right to Explanation and the Right to be Forgotten are two important principles outlined to regulate algorithmic decision making and data usage in real-world applications. While the right to explanation allows individuals to request an actionable explanation for an algorithmic decision, the right to be forgotten grants them the right to ask for their data to be deleted from all the databases and models of an organization. Intuitively, enforcing the right to be forgotten may trigger model updates which in turn invalidate previously provided explanations, thus violating the right to explanation. In this work, we investigate the technical implications arising due to the interference between the two aforementioned regulatory principles, and propose the first algorithmic framework to resolve the tension between them. To this end, we formulate a novel optimization problem to generate explanations that are robust to model updates due to the removal of training data instances by data deletion requests. We then derive an efficient approximation algorithm to handle the combinatorial complexity of this optimization problem. We theoretically demonstrate that our method generates explanations that are provably robust to worst-case data deletion requests with bounded costs in case of linear models and certain classes of non-linear models. Extensive experimentation with real-world datasets demonstrates the efficacy of the proposed framework.
2202.09097
Mangal Kothari
Aryan Sharma, Nitik Jain, and Mangal Kothari
Lightweight Multi-Drone Detection and 3D-Localization via YOLO
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
In this work, we present and evaluate a method to perform real-time multiple drone detection and three-dimensional localization using the state-of-the-art tiny-YOLOv4 object detection algorithm and stereo triangulation. Our computer vision approach eliminates the need for computationally expensive stereo matching algorithms, thereby significantly reducing the memory footprint and making it deployable on embedded systems. Our drone detection system is highly modular (with support for various detection algorithms) and capable of identifying multiple drones in a scene, with real-time detection accuracy of up to 77\% at an average of 332 FPS (on an Nvidia Titan Xp). We also test the complete pipeline in the AirSim environment, detecting drones at a maximum distance of 8 meters, with a mean error of $23\%$ of the distance. We also release the source code for the project, with pre-trained models and the curated synthetic stereo dataset.
[ { "created": "Fri, 18 Feb 2022 09:41:23 GMT", "version": "v1" } ]
2022-02-21
[ [ "Sharma", "Aryan", "" ], [ "Jain", "Nitik", "" ], [ "Kothari", "Mangal", "" ] ]
In this work, we present and evaluate a method to perform real-time multiple drone detection and three-dimensional localization using the state-of-the-art tiny-YOLOv4 object detection algorithm and stereo triangulation. Our computer vision approach eliminates the need for computationally expensive stereo matching algorithms, thereby significantly reducing the memory footprint and making it deployable on embedded systems. Our drone detection system is highly modular (with support for various detection algorithms) and capable of identifying multiple drones in a scene, with real-time detection accuracy of up to 77\% at an average of 332 FPS (on an Nvidia Titan Xp). We also test the complete pipeline in the AirSim environment, detecting drones at a maximum distance of 8 meters, with a mean error of $23\%$ of the distance. We also release the source code for the project, with pre-trained models and the curated synthetic stereo dataset.
1801.01769
Suichan Li
Suichan Li
3D-DETNet: a Single Stage Video-Based Vehicle Detector
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Video-based vehicle detection has received considerable attention over the last ten years, and there are many deep-learning-based detection methods which can be applied to it. However, these methods are devised for still images, and applying them directly to video vehicle detection yields poor performance. In this work, we propose a new single-stage video-based vehicle detector integrated with a 3D convolutional network (3DConvNet) and focal loss, called 3D-DETNet. Drawing support from the 3D convolutional network and focal loss, our method has the ability to capture motion information and is more suitable for detecting vehicles in video than other single-stage methods devised for static images. Multiple video frames are initially fed to 3D-DETNet to generate multiple spatial feature maps; the sub-model 3DConvNet then takes the spatial feature maps as input to capture temporal information, which is fed to the final fully convolutional model for predicting the locations of vehicles in video frames. We evaluate our method on the UA-DETRAC vehicle detection dataset, and our 3D-DETNet yields the best performance while keeping a higher detection speed of 26 fps compared with other competing methods.
[ { "created": "Fri, 5 Jan 2018 14:38:14 GMT", "version": "v1" }, { "created": "Mon, 15 Jan 2018 09:06:07 GMT", "version": "v2" } ]
2018-01-16
[ [ "Li", "Suichan", "" ] ]
Video-based vehicle detection has received considerable attention over the last ten years, and there are many deep-learning-based detection methods which can be applied to it. However, these methods are devised for still images, and applying them directly to video vehicle detection yields poor performance. In this work, we propose a new single-stage video-based vehicle detector integrated with a 3D convolutional network (3DConvNet) and focal loss, called 3D-DETNet. Drawing support from the 3D convolutional network and focal loss, our method has the ability to capture motion information and is more suitable for detecting vehicles in video than other single-stage methods devised for static images. Multiple video frames are initially fed to 3D-DETNet to generate multiple spatial feature maps; the sub-model 3DConvNet then takes the spatial feature maps as input to capture temporal information, which is fed to the final fully convolutional model for predicting the locations of vehicles in video frames. We evaluate our method on the UA-DETRAC vehicle detection dataset, and our 3D-DETNet yields the best performance while keeping a higher detection speed of 26 fps compared with other competing methods.
1404.4465
Peter Sanders
Florian Merz and Peter Sanders
PReaCH: A Fast Lightweight Reachability Index using Pruning and Contraction Hierarchies
null
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We develop the data structure PReaCH (for Pruned Reachability Contraction Hierarchies) which supports reachability queries in a directed graph, i.e., it supports queries that ask whether two nodes in the graph are connected by a directed path. PReaCH adapts the contraction hierarchy speedup techniques for shortest path queries to the reachability setting. The resulting approach is surprisingly simple and guarantees linear space and near linear preprocessing time. Orthogonally to that, we improve existing pruning techniques for the search by gathering more information from a single DFS-traversal of the graph. PReaCH-indices significantly outperform previous data structures with comparable preprocessing cost. Methods with faster queries need significantly more preprocessing time in particular for the most difficult instances.
[ { "created": "Thu, 17 Apr 2014 09:55:59 GMT", "version": "v1" } ]
2014-04-18
[ [ "Merz", "Florian", "" ], [ "Sanders", "Peter", "" ] ]
We develop the data structure PReaCH (for Pruned Reachability Contraction Hierarchies) which supports reachability queries in a directed graph, i.e., it supports queries that ask whether two nodes in the graph are connected by a directed path. PReaCH adapts the contraction hierarchy speedup techniques for shortest path queries to the reachability setting. The resulting approach is surprisingly simple and guarantees linear space and near linear preprocessing time. Orthogonally to that, we improve existing pruning techniques for the search by gathering more information from a single DFS-traversal of the graph. PReaCH-indices significantly outperform previous data structures with comparable preprocessing cost. Methods with faster queries need significantly more preprocessing time in particular for the most difficult instances.
1712.07242
Dan Kushnir
Dan Kushnir, Shirin Jalali, Iraj Saniee
Linear Time Clustering for High Dimensional Mixtures of Gaussian Clouds
null
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Clustering mixtures of Gaussian distributions is a fundamental and challenging problem that is ubiquitous in various high-dimensional data processing tasks. While state-of-the-art work on learning Gaussian mixture models has focused primarily on improving separation bounds and their generalization to arbitrary classes of mixture models, less emphasis has been paid to the practical computational efficiency of the proposed solutions. In this paper, we propose a novel and highly efficient clustering algorithm for $n$ points drawn from a mixture of two arbitrary Gaussian distributions in $\mathbb{R}^p$. The algorithm involves performing random 1-dimensional projections until a direction is found that yields a user-specified clustering error $e$. For a 1-dimensional separation parameter $\gamma$ satisfying $\gamma=Q^{-1}(e)$, the expected number of such projections is shown to be bounded by $o(\ln p)$, when $\gamma$ satisfies $\gamma\leq c\sqrt{\ln{\ln{p}}}$, with $c$ as the separability parameter of the two Gaussians in $\mathbb{R}^p$. Consequently, the expected overall running time of the algorithm is linear in $n$ and quasi-linear in $p$ at $o(\ln{p})O(np)$, and the sample complexity is independent of $p$. This result stands in contrast to prior works, which provide polynomial, at best quadratic, running time in $p$ and $n$. We show that our bound on the expected number of 1-dimensional projections extends to the case of three or more Gaussian components, and we present a generalization of our results to mixture distributions beyond the Gaussian model.
[ { "created": "Tue, 19 Dec 2017 22:23:53 GMT", "version": "v1" }, { "created": "Fri, 22 Dec 2017 17:35:38 GMT", "version": "v2" }, { "created": "Thu, 1 Mar 2018 22:46:05 GMT", "version": "v3" } ]
2018-03-05
[ [ "Kushnir", "Dan", "" ], [ "Jalali", "Shirin", "" ], [ "Saniee", "Iraj", "" ] ]
Clustering mixtures of Gaussian distributions is a fundamental and challenging problem that is ubiquitous in various high-dimensional data processing tasks. While state-of-the-art work on learning Gaussian mixture models has focused primarily on improving separation bounds and their generalization to arbitrary classes of mixture models, less emphasis has been paid to the practical computational efficiency of the proposed solutions. In this paper, we propose a novel and highly efficient clustering algorithm for $n$ points drawn from a mixture of two arbitrary Gaussian distributions in $\mathbb{R}^p$. The algorithm involves performing random 1-dimensional projections until a direction is found that yields a user-specified clustering error $e$. For a 1-dimensional separation parameter $\gamma$ satisfying $\gamma=Q^{-1}(e)$, the expected number of such projections is shown to be bounded by $o(\ln p)$, when $\gamma$ satisfies $\gamma\leq c\sqrt{\ln{\ln{p}}}$, with $c$ as the separability parameter of the two Gaussians in $\mathbb{R}^p$. Consequently, the expected overall running time of the algorithm is linear in $n$ and quasi-linear in $p$ at $o(\ln{p})O(np)$, and the sample complexity is independent of $p$. This result stands in contrast to prior works, which provide polynomial, at best quadratic, running time in $p$ and $n$. We show that our bound on the expected number of 1-dimensional projections extends to the case of three or more Gaussian components, and we present a generalization of our results to mixture distributions beyond the Gaussian model.
2308.06241
Mohammad Maksood Akhter
Mohammad Maksood Akhter, Devpriya Kanojia
Covid-19 Public Sentiment Analysis for Indian Tweets Classification
null
null
null
null
cs.CL cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
When any extraordinary event takes place anywhere in the world, it is social media that acts as the fastest carrier of the news, along with the consequences of that event. One can gather much information through social networks regarding the sentiments, behavior, and opinions of people. In this paper, we focus mainly on sentiment analysis of Twitter data from India comprising COVID-19 tweets. We show how the Twitter data has been extracted and how sentiment analysis queries are then run on it. This is helpful for analyzing the information in the tweets, where opinions are highly unstructured, heterogeneous, and either positive, negative, or in some cases neutral.
[ { "created": "Tue, 1 Aug 2023 09:29:55 GMT", "version": "v1" } ]
2023-08-14
[ [ "Akhter", "Mohammad Maksood", "" ], [ "Kanojia", "Devpriya", "" ] ]
When any extraordinary event takes place anywhere in the world, it is social media that acts as the fastest carrier of the news, along with the consequences of that event. One can gather much information through social networks regarding the sentiments, behavior, and opinions of people. In this paper, we focus mainly on sentiment analysis of Twitter data from India comprising COVID-19 tweets. We show how the Twitter data has been extracted and how sentiment analysis queries are then run on it. This is helpful for analyzing the information in the tweets, where opinions are highly unstructured, heterogeneous, and either positive, negative, or in some cases neutral.
2304.14289
Yuzhou Gu
Zongchen Chen, Yuzhou Gu
Fast Sampling of $b$-Matchings and $b$-Edge Covers
Added new results
null
null
null
cs.DS cs.DM math.CO math.PR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
For an integer $b \ge 1$, a $b$-matching (resp. $b$-edge cover) of a graph $G=(V,E)$ is a subset $S\subseteq E$ of edges such that every vertex is incident with at most (resp. at least) $b$ edges from $S$. We prove that for any $b \ge 1$ the simple Glauber dynamics for sampling (weighted) $b$-matchings and $b$-edge covers mixes in $O(n\log n)$ time on all $n$-vertex bounded-degree graphs. This significantly improves upon previous results which have worse running time and only work for $b$-matchings with $b \le 7$ and for $b$-edge covers with $b \le 2$. More generally, we prove spectral independence for a broad class of binary symmetric Holant problems with log-concave signatures, including $b$-matchings, $b$-edge covers, and antiferromagnetic $2$-spin edge models. We hence deduce optimal mixing time of the Glauber dynamics from spectral independence. The core of our proof is a recursive coupling inspired by (Chen and Zhang '23) which upper bounds the Wasserstein $W_1$ distance between distributions under different pinnings. Using a similar method, we also obtain the optimal $O(n\log n)$ mixing time of the Glauber dynamics for the hardcore model on $n$-vertex bounded-degree claw-free graphs, for any fugacity $\lambda$. This improves over previous works which have at least cubic dependence on $n$.
[ { "created": "Thu, 27 Apr 2023 15:48:22 GMT", "version": "v1" }, { "created": "Mon, 31 Jul 2023 18:48:30 GMT", "version": "v2" } ]
2023-08-02
[ [ "Chen", "Zongchen", "" ], [ "Gu", "Yuzhou", "" ] ]
For an integer $b \ge 1$, a $b$-matching (resp. $b$-edge cover) of a graph $G=(V,E)$ is a subset $S\subseteq E$ of edges such that every vertex is incident with at most (resp. at least) $b$ edges from $S$. We prove that for any $b \ge 1$ the simple Glauber dynamics for sampling (weighted) $b$-matchings and $b$-edge covers mixes in $O(n\log n)$ time on all $n$-vertex bounded-degree graphs. This significantly improves upon previous results which have worse running time and only work for $b$-matchings with $b \le 7$ and for $b$-edge covers with $b \le 2$. More generally, we prove spectral independence for a broad class of binary symmetric Holant problems with log-concave signatures, including $b$-matchings, $b$-edge covers, and antiferromagnetic $2$-spin edge models. We hence deduce optimal mixing time of the Glauber dynamics from spectral independence. The core of our proof is a recursive coupling inspired by (Chen and Zhang '23) which upper bounds the Wasserstein $W_1$ distance between distributions under different pinnings. Using a similar method, we also obtain the optimal $O(n\log n)$ mixing time of the Glauber dynamics for the hardcore model on $n$-vertex bounded-degree claw-free graphs, for any fugacity $\lambda$. This improves over previous works which have at least cubic dependence on $n$.
2406.06379
Siyu An
Siyu An, Qin Li, Junru Lu, Di Yin and Xing Sun
FinVerse: An Autonomous Agent System for Versatile Financial Analysis
null
null
null
null
cs.CE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With the significant advancements in cognitive intelligence driven by LLMs, autonomous agent systems have attracted extensive attention. Despite this growing interest, the development of stable and efficient agent systems poses substantial practical challenges. In this paper, we introduce FinVerse, a meticulously crafted agent system designed for a broad range of financial topics. FinVerse integrates over 600 financial APIs, enabling access to more accurate and extensive financial information compared to generalist agents. To enhance financial information processing capabilities, FinVerse is equipped with an embedded code interpreter, enabling the execution of complex data analysis tasks with precision and efficiency. Our work includes an empirical comparison of several LLMs in driving FinVerse. Specifically, we propose our own scheme for training LLMs using SFT to optimize LLM performance within FinVerse. Recognizing the scarcity of specialized datasets to build LLMs for agents, we have constructed a dataset and plan to make it open-source, providing a valuable resource for peer application developers. The demo video has been released on YouTube at https://www.youtube.com/watch?v=sk8L9_Wv7J4
[ { "created": "Mon, 10 Jun 2024 15:40:23 GMT", "version": "v1" } ]
2024-06-11
[ [ "An", "Siyu", "" ], [ "Li", "Qin", "" ], [ "Lu", "Junru", "" ], [ "Yin", "Di", "" ], [ "Sun", "Xing", "" ] ]
With the significant advancements in cognitive intelligence driven by LLMs, autonomous agent systems have attracted extensive attention. Despite this growing interest, the development of stable and efficient agent systems poses substantial practical challenges. In this paper, we introduce FinVerse, a meticulously crafted agent system designed for a broad range of financial topics. FinVerse integrates over 600 financial APIs, enabling access to more accurate and extensive financial information compared to generalist agents. To enhance financial information processing capabilities, FinVerse is equipped with an embedded code interpreter, enabling the execution of complex data analysis tasks with precision and efficiency. Our work includes an empirical comparison of several LLMs in driving FinVerse. Specifically, we propose our own scheme for training LLMs using SFT to optimize LLM performance within FinVerse. Recognizing the scarcity of specialized datasets to build LLMs for agents, we have constructed a dataset and plan to make it open-source, providing a valuable resource for peer application developers. The demo video has been released on YouTube at https://www.youtube.com/watch?v=sk8L9_Wv7J4
1808.10326
Shen Li
Shen Li, Hengru Xu, Zhengdong Lu
Generalize Symbolic Knowledge With Neural Rule Engine
null
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
As neural networks have come to dominate the state-of-the-art results in a wide range of NLP tasks, considerable attention has been devoted to improving the performance of neural models by integrating symbolic knowledge. Different from existing works, this paper investigates the combination of these two powerful paradigms from the knowledge-driven side. We propose the Neural Rule Engine (NRE), which can learn knowledge explicitly from logic rules and then generalize it implicitly with neural networks. NRE is implemented with neural module networks in which each module represents an action of a logic rule. The experiments show that NRE can greatly improve the generalization abilities of logic rules with a significant increase in recall. Meanwhile, precision is still maintained at a high level.
[ { "created": "Thu, 30 Aug 2018 14:51:43 GMT", "version": "v1" }, { "created": "Tue, 4 Sep 2018 06:07:15 GMT", "version": "v2" }, { "created": "Wed, 14 Aug 2019 07:15:49 GMT", "version": "v3" } ]
2019-08-15
[ [ "Li", "Shen", "" ], [ "Xu", "Hengru", "" ], [ "Lu", "Zhengdong", "" ] ]
As neural networks have come to dominate the state-of-the-art results in a wide range of NLP tasks, considerable attention has been devoted to improving the performance of neural models by integrating symbolic knowledge. Different from existing works, this paper investigates the combination of these two powerful paradigms from the knowledge-driven side. We propose the Neural Rule Engine (NRE), which can learn knowledge explicitly from logic rules and then generalize it implicitly with neural networks. NRE is implemented with neural module networks in which each module represents an action of a logic rule. The experiments show that NRE can greatly improve the generalization abilities of logic rules with a significant increase in recall. Meanwhile, precision is still maintained at a high level.
1610.07719
Chong Shangguan
Chong Shangguan, Jingxue Ma, Gennian Ge
New results for traitor tracing schemes
9 pages, submitted
null
null
null
cs.IT math.CO math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the last two decades, several classes of codes have been introduced to protect copyrighted digital data. They have important applications in scenarios such as digital fingerprinting and broadcast encryption schemes. In this paper we discuss three important classes of such codes, namely, frameproof codes, parent-identifying codes and traceability codes. Firstly, letting $N(t)$ be the minimal integer such that there exists a binary $t$-frameproof code of length $N$ with cardinality larger than $N$, we prove that $N(t)\ge\frac{15+\sqrt{33}}{24} (t-2)^2$, which is a great improvement on the previously known bound $N(t)\ge\binom{t+1}{2}$. Moreover, we find that the determination of $N(t)$ is closely related to a conjecture of Erd\H{o}s, Frankl and F\"uredi posed in the 1980s, which implies the conjectured value $N(t)=t^2+o(t^2)$. Secondly, we derive a new upper bound for parent-identifying codes, which is superior to all previously known bounds. Thirdly, we present an upper bound for 3-traceability codes, which shows that a $q$-ary 3-traceability code of length $N$ can have at most $cq^{\lceil N/9\rceil}$ codewords, where $c$ is a constant depending only on the code length $N$. This is the first meaningful upper bound for 3-traceability codes, and our result supports a conjecture of Blackburn et al. posed in 2010.
[ { "created": "Tue, 25 Oct 2016 03:52:05 GMT", "version": "v1" } ]
2016-10-26
[ [ "Shangguan", "Chong", "" ], [ "Ma", "Jingxue", "" ], [ "Ge", "Gennian", "" ] ]
In the last two decades, several classes of codes have been introduced to protect copyrighted digital data. They have important applications in scenarios such as digital fingerprinting and broadcast encryption schemes. In this paper we discuss three important classes of such codes, namely, frameproof codes, parent-identifying codes and traceability codes. Firstly, letting $N(t)$ be the minimal integer such that there exists a binary $t$-frameproof code of length $N$ with cardinality larger than $N$, we prove that $N(t)\ge\frac{15+\sqrt{33}}{24} (t-2)^2$, which is a great improvement on the previously known bound $N(t)\ge\binom{t+1}{2}$. Moreover, we find that the determination of $N(t)$ is closely related to a conjecture of Erd\H{o}s, Frankl and F\"uredi posed in the 1980s, which implies the conjectured value $N(t)=t^2+o(t^2)$. Secondly, we derive a new upper bound for parent-identifying codes, which is superior to all previously known bounds. Thirdly, we present an upper bound for 3-traceability codes, which shows that a $q$-ary 3-traceability code of length $N$ can have at most $cq^{\lceil N/9\rceil}$ codewords, where $c$ is a constant depending only on the code length $N$. This is the first meaningful upper bound for 3-traceability codes, and our result supports a conjecture of Blackburn et al. posed in 2010.
2312.10888
Fangming Zhao
Fangming Zhao, Nikolaos Pappas, Chuan Ma, Xinghua Sun, Tony Q. S. Quek, Howard H. Yang
Age-Threshold Slotted ALOHA for Optimizing Information Freshness in Mobile Networks
21 pages. Update version after peer review
null
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We optimize the Age of Information (AoI) in mobile networks using the age-threshold slotted ALOHA (TSA) protocol. The network comprises multiple source-destination pairs, where each source sends a sequence of status update packets to its destination over a shared spectrum. The TSA protocol stipulates that a source node must remain silent until its AoI reaches a predefined threshold, after which the node accesses the radio channel with a certain probability. Using stochastic geometry tools, we derive analytical expressions for the transmission success probability, mean peak AoI, and time-average AoI. Subsequently, we obtain closed-form expressions for the optimal update rate and age threshold that minimize the mean peak and time-average AoI, respectively. In addition, we establish a scaling law for the mean peak AoI and time-average AoI in mobile networks, revealing that the optimal mean peak AoI and time-average AoI increase linearly with the deployment density. Notably, the growth rate of time-average AoI under TSA is half of that under conventional slotted ALOHA. When considering the optimal mean peak AoI, the TSA protocol exhibits comparable performance to the traditional slotted ALOHA protocol. These findings conclusively affirm the advantage of TSA in reducing higher-order AoI, particularly in densely deployed networks.
[ { "created": "Mon, 18 Dec 2023 02:28:13 GMT", "version": "v1" }, { "created": "Wed, 5 Jun 2024 14:16:09 GMT", "version": "v2" } ]
2024-06-06
[ [ "Zhao", "Fangming", "" ], [ "Pappas", "Nikolaos", "" ], [ "Ma", "Chuan", "" ], [ "Sun", "Xinghua", "" ], [ "Quek", "Tony Q. S.", "" ], [ "Yang", "Howard H.", "" ] ]
We optimize the Age of Information (AoI) in mobile networks using the age-threshold slotted ALOHA (TSA) protocol. The network comprises multiple source-destination pairs, where each source sends a sequence of status update packets to its destination over a shared spectrum. The TSA protocol stipulates that a source node must remain silent until its AoI reaches a predefined threshold, after which the node accesses the radio channel with a certain probability. Using stochastic geometry tools, we derive analytical expressions for the transmission success probability, mean peak AoI, and time-average AoI. Subsequently, we obtain closed-form expressions for the optimal update rate and age threshold that minimize the mean peak and time-average AoI, respectively. In addition, we establish a scaling law for the mean peak AoI and time-average AoI in mobile networks, revealing that the optimal mean peak AoI and time-average AoI increase linearly with the deployment density. Notably, the growth rate of time-average AoI under TSA is half of that under conventional slotted ALOHA. When considering the optimal mean peak AoI, the TSA protocol exhibits comparable performance to the traditional slotted ALOHA protocol. These findings conclusively affirm the advantage of TSA in reducing higher-order AoI, particularly in densely deployed networks.