column          type            min        max
id              stringlengths   9          10
submitter       stringlengths   1          64
authors         stringlengths   4          20.7k
title           stringlengths   4          246
comments        stringlengths   1          523
journal-ref     stringlengths   4          404
doi             stringlengths   11         153
report-no       stringlengths   2          254
categories      stringlengths   5          98
license         stringclasses   9 values
orig_abstract   stringlengths   14         3.35k
versions        listlengths     1          60
update_date     stringlengths   10         10
authors_parsed  listlengths     1          1.35k
abstract        stringlengths   11         3.34k
1410.0956
Federico Rossi
Federico Rossi and Marco Pavone
Distributed consensus with mixed time/communication bandwidth performance metrics
Draft, submitted to Allerton 2014
null
null
null
cs.SY cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we study the inherent trade-off between time and communication complexity for the distributed consensus problem. In our model, communication complexity is measured as the maximum data throughput (in bits per second) sent through the network at a given instant. Such a notion of communication complexity, referred to as bandwidth complexity, is related to the frequency bandwidth a designer should collectively allocate to the agents if they were to communicate via a wireless channel, which represents an important constraint for dense robotic networks. We prove a lower bound on the bandwidth complexity of the consensus problem and provide a consensus algorithm that is bandwidth-optimal for a wide class of consensus functions. We then propose a distributed algorithm that can trade communication complexity versus time complexity as a function of a tunable parameter, which can be adjusted by a system designer as a function of the properties of the wireless communication channel. We rigorously characterize the tunable algorithm's worst-case bandwidth complexity and show that it compares favorably with the bandwidth complexity of well-known consensus algorithm.
[ { "created": "Fri, 3 Oct 2014 18:52:00 GMT", "version": "v1" } ]
2014-10-07
[ [ "Rossi", "Federico", "" ], [ "Pavone", "Marco", "" ] ]
In this paper we study the inherent trade-off between time and communication complexity for the distributed consensus problem. In our model, communication complexity is measured as the maximum data throughput (in bits per second) sent through the network at a given instant. Such a notion of communication complexity, referred to as bandwidth complexity, is related to the frequency bandwidth a designer should collectively allocate to the agents if they were to communicate via a wireless channel, which represents an important constraint for dense robotic networks. We prove a lower bound on the bandwidth complexity of the consensus problem and provide a consensus algorithm that is bandwidth-optimal for a wide class of consensus functions. We then propose a distributed algorithm that can trade communication complexity versus time complexity as a function of a tunable parameter, which can be adjusted by a system designer as a function of the properties of the wireless communication channel. We rigorously characterize the tunable algorithm's worst-case bandwidth complexity and show that it compares favorably with the bandwidth complexity of well-known consensus algorithms.
2105.05674
Ismo Horppu
Ismo Horppu, Antti Nikander, Elif Buyukcan, Jere M\"akiniemi, Amin Sorkhei, Frederick Ayala-G\'omez
Automatic Classification of Games using Support Vector Machine
7 pages, 7 figures, updated contact information of one author
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
Game developers benefit from availability of custom game genres when doing game market analysis. This information can help them to spot opportunities in market and make them more successful in planning a new game. In this paper we find good classifier for predicting category of a game. Prediction is based on description and title of a game. We use 2443 iOS App Store games as data set to generate a document-term matrix. To reduce the curse of dimensionality we use Latent Semantic Indexing, which, reduces the term dimension to approximately 1/9. Support Vector Machine supervised learning model is fit to pre-processed data. Model parameters are optimized using grid search and 20-fold cross validation. Best model yields to 77% mean accuracy or roughly 70% accuracy with 95% confidence. Developed classifier has been used in-house to assist games market research.
[ { "created": "Wed, 12 May 2021 14:13:21 GMT", "version": "v1" }, { "created": "Mon, 17 May 2021 09:40:29 GMT", "version": "v2" } ]
2021-05-18
[ [ "Horppu", "Ismo", "" ], [ "Nikander", "Antti", "" ], [ "Buyukcan", "Elif", "" ], [ "Mäkiniemi", "Jere", "" ], [ "Sorkhei", "Amin", "" ], [ "Ayala-Gómez", "Frederick", "" ] ]
Game developers benefit from the availability of custom game genres when doing game market analysis. This information can help them spot opportunities in the market and plan new games more successfully. In this paper we find a good classifier for predicting the category of a game, based on its description and title. We use 2443 iOS App Store games as a data set to generate a document-term matrix. To reduce the curse of dimensionality we use Latent Semantic Indexing, which reduces the term dimension to approximately 1/9. A Support Vector Machine supervised learning model is fit to the pre-processed data. Model parameters are optimized using grid search and 20-fold cross-validation. The best model yields 77% mean accuracy, or roughly 70% accuracy with 95% confidence. The developed classifier has been used in-house to assist games market research.
1607.03055
Derek Greene
Derek Greene and James P. Cross
Exploring the Political Agenda of the European Parliament Using a Dynamic Topic Modeling Approach
Long version including appendix. arXiv admin note: substantial text overlap with arXiv:1505.07302
null
null
null
cs.CL cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This study analyzes the political agenda of the European Parliament (EP) plenary, how it has evolved over time, and the manner in which Members of the European Parliament (MEPs) have reacted to external and internal stimuli when making plenary speeches. To unveil the plenary agenda and detect latent themes in legislative speeches over time, MEP speech content is analyzed using a new dynamic topic modeling method based on two layers of Non-negative Matrix Factorization (NMF). This method is applied to a new corpus of all English language legislative speeches in the EP plenary from the period 1999-2014. Our findings suggest that two-layer NMF is a valuable alternative to existing dynamic topic modeling approaches found in the literature, and can unveil niche topics and associated vocabularies not captured by existing methods. Substantively, our findings suggest that the political agenda of the EP evolves significantly over time and reacts to exogenous events such as EU Treaty referenda and the emergence of the Euro-crisis. MEP contributions to the plenary agenda are also found to be impacted upon by voting behaviour and the committee structure of the Parliament.
[ { "created": "Mon, 11 Jul 2016 17:48:53 GMT", "version": "v1" } ]
2016-07-12
[ [ "Greene", "Derek", "" ], [ "Cross", "James P.", "" ] ]
This study analyzes the political agenda of the European Parliament (EP) plenary, how it has evolved over time, and the manner in which Members of the European Parliament (MEPs) have reacted to external and internal stimuli when making plenary speeches. To unveil the plenary agenda and detect latent themes in legislative speeches over time, MEP speech content is analyzed using a new dynamic topic modeling method based on two layers of Non-negative Matrix Factorization (NMF). This method is applied to a new corpus of all English language legislative speeches in the EP plenary from the period 1999-2014. Our findings suggest that two-layer NMF is a valuable alternative to existing dynamic topic modeling approaches found in the literature, and can unveil niche topics and associated vocabularies not captured by existing methods. Substantively, our findings suggest that the political agenda of the EP evolves significantly over time and reacts to exogenous events such as EU Treaty referenda and the emergence of the Euro-crisis. MEP contributions to the plenary agenda are also found to be impacted upon by voting behaviour and the committee structure of the Parliament.
1908.05838
Antonios Anastasopoulos
Antonios Anastasopoulos and Graham Neubig
Pushing the Limits of Low-Resource Morphological Inflection
to appear at EMNLP 2019
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent years have seen exceptional strides in the task of automatic morphological inflection generation. However, for a long tail of languages the necessary resources are hard to come by, and state-of-the-art neural methods that work well under higher resource settings perform poorly in the face of a paucity of data. In response, we propose a battery of improvements that greatly improve performance under such low-resource conditions. First, we present a novel two-step attention architecture for the inflection decoder. In addition, we investigate the effects of cross-lingual transfer from single and multiple languages, as well as monolingual data hallucination. The macro-averaged accuracy of our models outperforms the state-of-the-art by 15 percentage points. Also, we identify the crucial factors for success with cross-lingual transfer for morphological inflection: typological similarity and a common representation across languages.
[ { "created": "Fri, 16 Aug 2019 04:15:32 GMT", "version": "v1" }, { "created": "Tue, 20 Aug 2019 14:15:58 GMT", "version": "v2" } ]
2019-08-21
[ [ "Anastasopoulos", "Antonios", "" ], [ "Neubig", "Graham", "" ] ]
Recent years have seen exceptional strides in the task of automatic morphological inflection generation. However, for a long tail of languages the necessary resources are hard to come by, and state-of-the-art neural methods that work well under higher resource settings perform poorly in the face of a paucity of data. In response, we propose a battery of improvements that greatly improve performance under such low-resource conditions. First, we present a novel two-step attention architecture for the inflection decoder. In addition, we investigate the effects of cross-lingual transfer from single and multiple languages, as well as monolingual data hallucination. The macro-averaged accuracy of our models outperforms the state-of-the-art by 15 percentage points. Also, we identify the crucial factors for success with cross-lingual transfer for morphological inflection: typological similarity and a common representation across languages.
2405.13381
Chang Zhou
Chang Zhou, Yang Zhao, Jin Cao, Yi Shen, Xiaoling Cui, Chiyu Cheng
Optimizing Search Advertising Strategies: Integrating Reinforcement Learning with Generalized Second-Price Auctions for Enhanced Ad Ranking and Bidding
Accepted by 2024 5th International Conference on Electronic communication and Artificial Intelligence (ICECAI 2024)
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper explores the integration of strategic optimization methods in search advertising, focusing on ad ranking and bidding mechanisms within E-commerce platforms. By employing a combination of reinforcement learning and evolutionary strategies, we propose a dynamic model that adjusts to varying user interactions and optimizes the balance between advertiser cost, user relevance, and platform revenue. Our results suggest significant improvements in ad placement accuracy and cost efficiency, demonstrating the model's applicability in real-world scenarios.
[ { "created": "Wed, 22 May 2024 06:30:55 GMT", "version": "v1" }, { "created": "Wed, 29 May 2024 05:25:49 GMT", "version": "v2" } ]
2024-05-30
[ [ "Zhou", "Chang", "" ], [ "Zhao", "Yang", "" ], [ "Cao", "Jin", "" ], [ "Shen", "Yi", "" ], [ "Cui", "Xiaoling", "" ], [ "Cheng", "Chiyu", "" ] ]
This paper explores the integration of strategic optimization methods in search advertising, focusing on ad ranking and bidding mechanisms within E-commerce platforms. By employing a combination of reinforcement learning and evolutionary strategies, we propose a dynamic model that adjusts to varying user interactions and optimizes the balance between advertiser cost, user relevance, and platform revenue. Our results suggest significant improvements in ad placement accuracy and cost efficiency, demonstrating the model's applicability in real-world scenarios.
2210.00380
Ahmed Aloui
Ahmed Aloui, Juncheng Dong, Cat P. Le, Vahid Tarokh
Transfer Learning for Individual Treatment Effect Estimation
null
null
null
null
cs.LG stat.ME stat.ML
http://creativecommons.org/licenses/by/4.0/
This work considers the problem of transferring causal knowledge between tasks for Individual Treatment Effect (ITE) estimation. To this end, we theoretically assess the feasibility of transferring ITE knowledge and present a practical framework for efficient transfer. A lower bound is introduced on the ITE error of the target task to demonstrate that ITE knowledge transfer is challenging due to the absence of counterfactual information. Nevertheless, we establish generalization upper bounds on the counterfactual loss and ITE error of the target task, demonstrating the feasibility of ITE knowledge transfer. Subsequently, we introduce a framework with a new Causal Inference Task Affinity (CITA) measure for ITE knowledge transfer. Specifically, we use CITA to find the closest source task to the target task and utilize it for ITE knowledge transfer. Empirical studies are provided, demonstrating the efficacy of the proposed method. We observe that ITE knowledge transfer can significantly (up to 95%) reduce the amount of data required for ITE estimation.
[ { "created": "Sat, 1 Oct 2022 21:36:27 GMT", "version": "v1" }, { "created": "Fri, 7 Oct 2022 15:04:29 GMT", "version": "v2" }, { "created": "Mon, 5 Jun 2023 22:57:20 GMT", "version": "v3" } ]
2023-06-07
[ [ "Aloui", "Ahmed", "" ], [ "Dong", "Juncheng", "" ], [ "Le", "Cat P.", "" ], [ "Tarokh", "Vahid", "" ] ]
This work considers the problem of transferring causal knowledge between tasks for Individual Treatment Effect (ITE) estimation. To this end, we theoretically assess the feasibility of transferring ITE knowledge and present a practical framework for efficient transfer. A lower bound is introduced on the ITE error of the target task to demonstrate that ITE knowledge transfer is challenging due to the absence of counterfactual information. Nevertheless, we establish generalization upper bounds on the counterfactual loss and ITE error of the target task, demonstrating the feasibility of ITE knowledge transfer. Subsequently, we introduce a framework with a new Causal Inference Task Affinity (CITA) measure for ITE knowledge transfer. Specifically, we use CITA to find the closest source task to the target task and utilize it for ITE knowledge transfer. Empirical studies are provided, demonstrating the efficacy of the proposed method. We observe that ITE knowledge transfer can significantly (up to 95%) reduce the amount of data required for ITE estimation.
1507.00095
Sanghun Im
Sanghun Im, Hyoungsuk Jeon, Jinho Choi, and Jeongseok Ha
Secret Key Agreement with Large Antenna Arrays under the Pilot Contamination Attack
15 pages, 5 figures, and the paper is under minor revision for the publication in IEEE transactions on wireless communications
null
10.1109/TWC.2015.2456894
null
cs.CR cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a secret key agreement (SKA) protocol for a multi-user time-division duplex system where a base-station (BS) with a large antenna array (LAA) shares secret keys with users in the presence of non-colluding eavesdroppers. In the system, when the BS transmits random sequences to legitimate users for sharing common randomness, the eavesdroppers can attempt the pilot contamination attack (PCA) in which each of eavesdroppers transmits its target user's training sequence in hopes of acquiring possible information leak by steering beam towards the eavesdropper. We show that there exists a crucial complementary relation between the received signal strengths at the eavesdropper and its target user. This relation tells us that the eavesdropper inevitably leaves a trace that enables us to devise a way of measuring the amount of information leakage to the eavesdropper even if PCA parameters are unknown. To this end, we derive an estimator for the channel gain from the BS to the eavesdropper and propose a rate-adaptation scheme for adjusting the length of secret key under the PCA. Extensive analysis and evaluations are carried out under various setups, which show that the proposed scheme adequately takes advantage of the LAA to establish the secret keys under the PCA.
[ { "created": "Wed, 1 Jul 2015 02:43:16 GMT", "version": "v1" } ]
2016-11-17
[ [ "Im", "Sanghun", "" ], [ "Jeon", "Hyoungsuk", "" ], [ "Choi", "Jinho", "" ], [ "Ha", "Jeongseok", "" ] ]
We present a secret key agreement (SKA) protocol for a multi-user time-division duplex system where a base-station (BS) with a large antenna array (LAA) shares secret keys with users in the presence of non-colluding eavesdroppers. In the system, when the BS transmits random sequences to legitimate users for sharing common randomness, the eavesdroppers can attempt the pilot contamination attack (PCA) in which each of eavesdroppers transmits its target user's training sequence in hopes of acquiring possible information leak by steering beam towards the eavesdropper. We show that there exists a crucial complementary relation between the received signal strengths at the eavesdropper and its target user. This relation tells us that the eavesdropper inevitably leaves a trace that enables us to devise a way of measuring the amount of information leakage to the eavesdropper even if PCA parameters are unknown. To this end, we derive an estimator for the channel gain from the BS to the eavesdropper and propose a rate-adaptation scheme for adjusting the length of secret key under the PCA. Extensive analysis and evaluations are carried out under various setups, which show that the proposed scheme adequately takes advantage of the LAA to establish the secret keys under the PCA.
2203.08600
Benjamin Horne
Benjamin D. Horne, Maur\'icio Gruppi, Kenneth Joseph, Jon Green, John P. Wihbey, and Sibel Adal{\i}
NELA-Local: A Dataset of U.S. Local News Articles for the Study of County-level News Ecosystems
Published at ICWSM 2022
null
null
null
cs.CY cs.MM cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we present a dataset of over 1.4M online news articles from 313 local U.S. news outlets published over 20 months (between April 4th, 2020 and December 31st, 2021). These outlets cover a geographically diverse set of communities across the United States. In order to estimate characteristics of the local audience, included with this news article data is a wide range of county-level metadata, including demographics, 2020 Presidential Election vote shares, and community resilience estimates from the U.S. Census Bureau. The NELA-Local dataset can be found at: https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/GFE66K.
[ { "created": "Wed, 16 Mar 2022 13:19:21 GMT", "version": "v1" } ]
2022-03-17
[ [ "Horne", "Benjamin D.", "" ], [ "Gruppi", "Maurício", "" ], [ "Joseph", "Kenneth", "" ], [ "Green", "Jon", "" ], [ "Wihbey", "John P.", "" ], [ "Adalı", "Sibel", "" ] ]
In this paper, we present a dataset of over 1.4M online news articles from 313 local U.S. news outlets published over 20 months (between April 4th, 2020 and December 31st, 2021). These outlets cover a geographically diverse set of communities across the United States. In order to estimate characteristics of the local audience, included with this news article data is a wide range of county-level metadata, including demographics, 2020 Presidential Election vote shares, and community resilience estimates from the U.S. Census Bureau. The NELA-Local dataset can be found at: https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/GFE66K.
2205.12452
Daniel Campos
Daniel Campos, Alexandre Marques, Tuan Nguyen, Mark Kurtz, and ChengXiang Zhai
Sparse*BERT: Sparse Models Generalize To New tasks and Domains
Presented at Sparsity in Neural Networks Workshop at ICML 2022, 6 pages, 2 figures, 4 tables
null
null
null
cs.CL cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Large Language Models have become the core architecture upon which most modern natural language processing (NLP) systems build. These models can consistently deliver impressive accuracy and robustness across tasks and domains, but their high computational overhead can make inference difficult and expensive. To make using these models less costly, recent work has explored leveraging structured and unstructured pruning, quantization, and distillation to improve inference speed and decrease size. This paper studies how models pruned using Gradual Unstructured Magnitude Pruning can transfer between domains and tasks. Our experimentation shows that models that are pruned during pretraining using general domain masked language models can transfer to novel domains and tasks without extensive hyperparameter exploration or specialized approaches. We demonstrate that our general sparse model Sparse*BERT can become SparseBioBERT simply by pretraining the compressed architecture on unstructured biomedical text. Moreover, we show that SparseBioBERT can match the quality of BioBERT with only 10\% of the parameters.
[ { "created": "Wed, 25 May 2022 02:51:12 GMT", "version": "v1" }, { "created": "Fri, 31 Mar 2023 22:01:44 GMT", "version": "v2" }, { "created": "Wed, 5 Apr 2023 19:54:59 GMT", "version": "v3" } ]
2023-04-07
[ [ "Campos", "Daniel", "" ], [ "Marques", "Alexandre", "" ], [ "Nguyen", "Tuan", "" ], [ "Kurtz", "Mark", "" ], [ "Zhai", "ChengXiang", "" ] ]
Large Language Models have become the core architecture upon which most modern natural language processing (NLP) systems build. These models can consistently deliver impressive accuracy and robustness across tasks and domains, but their high computational overhead can make inference difficult and expensive. To make using these models less costly, recent work has explored leveraging structured and unstructured pruning, quantization, and distillation to improve inference speed and decrease size. This paper studies how models pruned using Gradual Unstructured Magnitude Pruning can transfer between domains and tasks. Our experimentation shows that models that are pruned during pretraining using general domain masked language models can transfer to novel domains and tasks without extensive hyperparameter exploration or specialized approaches. We demonstrate that our general sparse model Sparse*BERT can become SparseBioBERT simply by pretraining the compressed architecture on unstructured biomedical text. Moreover, we show that SparseBioBERT can match the quality of BioBERT with only 10\% of the parameters.
2002.05193
Momin M. Malik
Momin M. Malik
A Hierarchy of Limitations in Machine Learning
68 pages, 7 figures
null
null
null
cs.CY cs.LG econ.EM math.ST stat.ML stat.TH
http://creativecommons.org/licenses/by-nc-sa/4.0/
"All models are wrong, but some are useful", wrote George E. P. Box (1979). Machine learning has focused on the usefulness of probability models for prediction in social systems, but is only now coming to grips with the ways in which these models are wrong---and the consequences of those shortcomings. This paper attempts a comprehensive, structured overview of the specific conceptual, procedural, and statistical limitations of models in machine learning when applied to society. Machine learning modelers themselves can use the described hierarchy to identify possible failure points and think through how to address them, and consumers of machine learning models can know what to question when confronted with the decision about if, where, and how to apply machine learning. The limitations go from commitments inherent in quantification itself, through to showing how unmodeled dependencies can lead to cross-validation being overly optimistic as a way of assessing model performance.
[ { "created": "Wed, 12 Feb 2020 19:39:29 GMT", "version": "v1" }, { "created": "Sat, 29 Feb 2020 21:04:27 GMT", "version": "v2" } ]
2020-03-03
[ [ "Malik", "Momin M.", "" ] ]
"All models are wrong, but some are useful", wrote George E. P. Box (1979). Machine learning has focused on the usefulness of probability models for prediction in social systems, but is only now coming to grips with the ways in which these models are wrong---and the consequences of those shortcomings. This paper attempts a comprehensive, structured overview of the specific conceptual, procedural, and statistical limitations of models in machine learning when applied to society. Machine learning modelers themselves can use the described hierarchy to identify possible failure points and think through how to address them, and consumers of machine learning models can know what to question when confronted with the decision about if, where, and how to apply machine learning. The limitations go from commitments inherent in quantification itself, through to showing how unmodeled dependencies can lead to cross-validation being overly optimistic as a way of assessing model performance.
1310.4822
Hugo Jair Escalante
Hugo Jair Escalante, Isabelle Guyon, Vassilis Athitsos, Pat Jangyodsuk, Jun Wan
Principal motion components for gesture recognition using a single-example
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper introduces principal motion components (PMC), a new method for one-shot gesture recognition. In the considered scenario a single training-video is available for each gesture to be recognized, which limits the application of traditional techniques (e.g., HMMs). In PMC, a 2D map of motion energy is obtained per each pair of consecutive frames in a video. Motion maps associated to a video are processed to obtain a PCA model, which is used for recognition under a reconstruction-error approach. The main benefits of the proposed approach are its simplicity, easiness of implementation, competitive performance and efficiency. We report experimental results in one-shot gesture recognition using the ChaLearn Gesture Dataset; a benchmark comprising more than 50,000 gestures, recorded as both RGB and depth video with a Kinect camera. Results obtained with PMC are competitive with alternative methods proposed for the same data set.
[ { "created": "Thu, 17 Oct 2013 19:52:50 GMT", "version": "v1" }, { "created": "Fri, 31 Jan 2014 12:04:41 GMT", "version": "v2" } ]
2014-02-03
[ [ "Escalante", "Hugo Jair", "" ], [ "Guyon", "Isabelle", "" ], [ "Athitsos", "Vassilis", "" ], [ "Jangyodsuk", "Pat", "" ], [ "Wan", "Jun", "" ] ]
This paper introduces principal motion components (PMC), a new method for one-shot gesture recognition. In the considered scenario a single training-video is available for each gesture to be recognized, which limits the application of traditional techniques (e.g., HMMs). In PMC, a 2D map of motion energy is obtained per each pair of consecutive frames in a video. Motion maps associated to a video are processed to obtain a PCA model, which is used for recognition under a reconstruction-error approach. The main benefits of the proposed approach are its simplicity, easiness of implementation, competitive performance and efficiency. We report experimental results in one-shot gesture recognition using the ChaLearn Gesture Dataset; a benchmark comprising more than 50,000 gestures, recorded as both RGB and depth video with a Kinect camera. Results obtained with PMC are competitive with alternative methods proposed for the same data set.
2003.10886
Maryam Alimardani
Maryam Alimardani (1), Annabella Hermans (1), Angelica M. Tinga (1 and 2) ((1) Department of Cognitive Science and Artificial Intelligence, Tilburg University, The Netherlands, (2) Department of Human Factors in Vehicle Automation, Institute for Road Safety Research, The Netherlands)
Assessment of Empathy in an Affective VR Environment using EEG Signals
13 pages, 3 figures, 3 tables
null
null
null
cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With the advancements in social robotics and virtual avatars, it becomes increasingly important that these agents adapt their behavior to the mood, feelings and personality of their users. One such aspect of the user is empathy. Whereas many studies measure empathy through offline measures that are collected after empathic stimulation (e.g. post-hoc questionnaires), the current study aimed to measure empathy online, using brain activity collected during the experience. Participants watched an affective 360 video of a child experiencing domestic violence in a virtual reality headset while their EEG signals were recorded. Results showed a significant attenuation of alpha, theta and delta asymmetry in the frontal and central areas of the brain. Moreover, a significant relationship between participants' empathy scores and their frontal alpha asymmetry at baseline was found. These results demonstrate specific brain activity alterations when participants are exposed to an affective virtual reality environment, with the level of empathy as a personality trait being visible in brain activity during a baseline measurement. These findings suggest the potential of EEG measurements for development of passive brain-computer interfaces that assess the user's affective responses in real-time and consequently adapt the behavior of socially intelligent agents for a personalized interaction.
[ { "created": "Tue, 24 Mar 2020 14:35:27 GMT", "version": "v1" } ]
2020-03-25
[ [ "Alimardani", "Maryam", "", "1 and\n 2" ], [ "Hermans", "Annabella", "", "1 and\n 2" ], [ "Tinga", "Angelica M.", "", "1 and\n 2" ] ]
With the advancements in social robotics and virtual avatars, it becomes increasingly important that these agents adapt their behavior to the mood, feelings and personality of their users. One such aspect of the user is empathy. Whereas many studies measure empathy through offline measures that are collected after empathic stimulation (e.g. post-hoc questionnaires), the current study aimed to measure empathy online, using brain activity collected during the experience. Participants watched an affective 360 video of a child experiencing domestic violence in a virtual reality headset while their EEG signals were recorded. Results showed a significant attenuation of alpha, theta and delta asymmetry in the frontal and central areas of the brain. Moreover, a significant relationship between participants' empathy scores and their frontal alpha asymmetry at baseline was found. These results demonstrate specific brain activity alterations when participants are exposed to an affective virtual reality environment, with the level of empathy as a personality trait being visible in brain activity during a baseline measurement. These findings suggest the potential of EEG measurements for development of passive brain-computer interfaces that assess the user's affective responses in real-time and consequently adapt the behavior of socially intelligent agents for a personalized interaction.
2209.04309
Jiawei Zheng
Jiawei Zheng and Petros Papapanagiotou and Jacques D. Fleuriot
Alignment-based conformance checking over probabilistic events
Extended version
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Conformance checking techniques allow us to evaluate how well some exhibited behaviour, represented by a trace of monitored events, conforms to a specified process model. Modern monitoring and activity recognition technologies, such as those relying on sensors, the IoT, statistics and AI, can produce a wealth of relevant event data. However, this data is typically characterised by noise and uncertainty, in contrast to the assumption of a deterministic event log required by conformance checking algorithms. In this paper, we extend alignment-based conformance checking to function under a probabilistic event log. We introduce a weighted trace model and weighted alignment cost function, and a custom threshold parameter that controls the level of confidence on the event data vs. the process model. The resulting algorithm considers activities of lower but sufficiently high probability that better align with the process model. We explain the algorithm and its motivation both from formal and intuitive perspectives, and demonstrate its functionality in comparison with deterministic alignment using real-life datasets.
[ { "created": "Fri, 9 Sep 2022 14:07:37 GMT", "version": "v1" }, { "created": "Thu, 30 Mar 2023 14:16:27 GMT", "version": "v2" } ]
2023-03-31
[ [ "Zheng", "Jiawei", "" ], [ "Papapanagiotou", "Petros", "" ], [ "Fleuriot", "Jacques D.", "" ] ]
Conformance checking techniques allow us to evaluate how well some exhibited behaviour, represented by a trace of monitored events, conforms to a specified process model. Modern monitoring and activity recognition technologies, such as those relying on sensors, the IoT, statistics and AI, can produce a wealth of relevant event data. However, this data is typically characterised by noise and uncertainty, in contrast to the assumption of a deterministic event log required by conformance checking algorithms. In this paper, we extend alignment-based conformance checking to function under a probabilistic event log. We introduce a weighted trace model and weighted alignment cost function, and a custom threshold parameter that controls the level of confidence on the event data vs. the process model. The resulting algorithm considers activities of lower but sufficiently high probability that better align with the process model. We explain the algorithm and its motivation both from formal and intuitive perspectives, and demonstrate its functionality in comparison with deterministic alignment using real-life datasets.
2104.15015
Yang Dongming
Dongming Yang, Yuexian Zou, Can Zhang, Meng Cao, Jie Chen
RR-Net: Injecting Interactive Semantics in Human-Object Interaction Detection
7 pages, 6 figures
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Human-Object Interaction (HOI) detection aims to learn how humans interact with surrounding objects. The latest end-to-end HOI detectors lack relation reasoning, which leaves them unable to learn HOI-specific interactive semantics for prediction. In this paper, we therefore propose novel relation reasoning for HOI detection. We first present a progressive Relation-aware Frame, which brings a new structure and parameter sharing pattern for interaction inference. Upon the frame, an Interaction Intensifier Module and a Correlation Parsing Module are carefully designed, where: a) interactive semantics from humans can be exploited and passed to objects to intensify interactions, and b) interactive correlations among humans, objects and interactions are integrated to promote predictions. Based on the modules above, we construct an end-to-end trainable framework named Relation Reasoning Network (abbr. RR-Net). Extensive experiments show that our proposed RR-Net sets a new state-of-the-art on both the V-COCO and HICO-DET benchmarks, improving over the baseline by about 5.5% and 9.8% relatively, validating that this first effort to explore relation reasoning and integrate interactive semantics brings a clear improvement for end-to-end HOI detection.
[ { "created": "Fri, 30 Apr 2021 14:03:10 GMT", "version": "v1" } ]
2021-05-03
[ [ "Yang", "Dongming", "" ], [ "Zou", "Yuexian", "" ], [ "Zhang", "Can", "" ], [ "Cao", "Meng", "" ], [ "Chen", "Jie", "" ] ]
Human-Object Interaction (HOI) detection aims to learn how humans interact with surrounding objects. The latest end-to-end HOI detectors lack relation reasoning, which leaves them unable to learn HOI-specific interactive semantics for prediction. In this paper, we therefore propose novel relation reasoning for HOI detection. We first present a progressive Relation-aware Frame, which brings a new structure and parameter sharing pattern for interaction inference. Upon the frame, an Interaction Intensifier Module and a Correlation Parsing Module are carefully designed, where: a) interactive semantics from humans can be exploited and passed to objects to intensify interactions, and b) interactive correlations among humans, objects and interactions are integrated to promote predictions. Based on the modules above, we construct an end-to-end trainable framework named Relation Reasoning Network (abbr. RR-Net). Extensive experiments show that our proposed RR-Net sets a new state-of-the-art on both the V-COCO and HICO-DET benchmarks, improving over the baseline by about 5.5% and 9.8% relatively, validating that this first effort to explore relation reasoning and integrate interactive semantics brings a clear improvement for end-to-end HOI detection.
2306.07483
Pietro Astolfi
Enrico Fini and Pietro Astolfi and Karteek Alahari and Xavier Alameda-Pineda and Julien Mairal and Moin Nabi and Elisa Ricci
Semi-supervised learning made simple with self-supervised clustering
CVPR 2023 - Code available at https://github.com/pietroastolfi/suave-daino
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (2023) 3187-3197
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Self-supervised learning models have been shown to learn rich visual representations without requiring human annotations. However, in many real-world scenarios, labels are partially available, motivating a recent line of work on semi-supervised methods inspired by self-supervised principles. In this paper, we propose a conceptually simple yet empirically powerful approach to turn clustering-based self-supervised methods such as SwAV or DINO into semi-supervised learners. More precisely, we introduce a multi-task framework merging a supervised objective using ground-truth labels and a self-supervised objective relying on clustering assignments with a single cross-entropy loss. This approach may be interpreted as imposing the cluster centroids to be class prototypes. Despite its simplicity, we provide empirical evidence that our approach is highly effective and achieves state-of-the-art performance on CIFAR100 and ImageNet.
[ { "created": "Tue, 13 Jun 2023 01:09:18 GMT", "version": "v1" } ]
2023-06-14
[ [ "Fini", "Enrico", "" ], [ "Astolfi", "Pietro", "" ], [ "Alahari", "Karteek", "" ], [ "Alameda-Pineda", "Xavier", "" ], [ "Mairal", "Julien", "" ], [ "Nabi", "Moin", "" ], [ "Ricci", "Elisa", "" ] ]
Self-supervised learning models have been shown to learn rich visual representations without requiring human annotations. However, in many real-world scenarios, labels are partially available, motivating a recent line of work on semi-supervised methods inspired by self-supervised principles. In this paper, we propose a conceptually simple yet empirically powerful approach to turn clustering-based self-supervised methods such as SwAV or DINO into semi-supervised learners. More precisely, we introduce a multi-task framework merging a supervised objective using ground-truth labels and a self-supervised objective relying on clustering assignments with a single cross-entropy loss. This approach may be interpreted as imposing the cluster centroids to be class prototypes. Despite its simplicity, we provide empirical evidence that our approach is highly effective and achieves state-of-the-art performance on CIFAR100 and ImageNet.
2402.18086
Chao Wu
Chao Wu, Xiaobin Chang, Ruixuan Wang
Generalizable Two-Branch Framework for Image Class-Incremental Learning
5 pages,3 figures,accepted by ICASSP 2024
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Deep neural networks often severely forget previously learned knowledge when learning new knowledge. Various continual learning (CL) methods have been proposed to handle such a catastrophic forgetting issue from different perspectives and achieved substantial improvements. In this paper, a novel two-branch continual learning framework is proposed to further enhance most existing CL methods. Specifically, the main branch can be any existing CL model and the newly introduced side branch is a lightweight convolutional network. The output of each main branch block is modulated by the output of the corresponding side branch block. Such a simple two-branch model can then be easily implemented and learned with the vanilla optimization setting without bells and whistles. Extensive experiments with various settings on multiple image datasets show that the proposed framework yields consistent improvements over state-of-the-art methods.
[ { "created": "Wed, 28 Feb 2024 06:18:33 GMT", "version": "v1" }, { "created": "Sun, 3 Mar 2024 14:58:50 GMT", "version": "v2" }, { "created": "Sat, 9 Mar 2024 06:38:30 GMT", "version": "v3" }, { "created": "Wed, 13 Mar 2024 11:11:18 GMT", "version": "v4" } ]
2024-03-14
[ [ "Wu", "Chao", "" ], [ "Chang", "Xiaobin", "" ], [ "Wang", "Ruixuan", "" ] ]
Deep neural networks often severely forget previously learned knowledge when learning new knowledge. Various continual learning (CL) methods have been proposed to handle such a catastrophic forgetting issue from different perspectives and achieved substantial improvements. In this paper, a novel two-branch continual learning framework is proposed to further enhance most existing CL methods. Specifically, the main branch can be any existing CL model and the newly introduced side branch is a lightweight convolutional network. The output of each main branch block is modulated by the output of the corresponding side branch block. Such a simple two-branch model can then be easily implemented and learned with the vanilla optimization setting without bells and whistles. Extensive experiments with various settings on multiple image datasets show that the proposed framework yields consistent improvements over state-of-the-art methods.
2405.10313
Jiaxuan You
Tao Feng, Chuanyang Jin, Jingyu Liu, Kunlun Zhu, Haoqin Tu, Zirui Cheng, Guanyu Lin, Jiaxuan You
How Far Are We From AGI
null
null
null
null
cs.AI cs.CL cs.CY cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The evolution of artificial intelligence (AI) has profoundly impacted human society, driving significant advancements in multiple sectors. Yet, the escalating demands on AI have highlighted the limitations of AI's current offerings, catalyzing a movement towards Artificial General Intelligence (AGI). AGI, distinguished by its ability to execute diverse real-world tasks with efficiency and effectiveness comparable to human intelligence, reflects a paramount milestone in AI evolution. While existing works have summarized specific recent advancements of AI, they lack a comprehensive discussion of AGI's definitions, goals, and developmental trajectories. Different from existing survey papers, this paper delves into the pivotal questions of our proximity to AGI and the strategies necessary for its realization through extensive surveys, discussions, and original perspectives. We start by articulating the requisite capability frameworks for AGI, integrating the internal, interface, and system dimensions. As the realization of AGI requires more advanced capabilities and adherence to stringent constraints, we further discuss necessary AGI alignment technologies to harmonize these factors. Notably, we emphasize the importance of approaching AGI responsibly by first defining the key levels of AGI progression, followed by the evaluation framework that situates the status quo, and finally giving our roadmap of how to reach the pinnacle of AGI. Moreover, to give tangible insights into the ubiquitous impact of the integration of AI, we outline existing challenges and potential pathways toward AGI in multiple domains. In sum, serving as a pioneering exploration into the current state and future trajectory of AGI, this paper aims to foster a collective comprehension and catalyze broader public discussions among researchers and practitioners on AGI.
[ { "created": "Thu, 16 May 2024 17:59:02 GMT", "version": "v1" } ]
2024-05-17
[ [ "Feng", "Tao", "" ], [ "Jin", "Chuanyang", "" ], [ "Liu", "Jingyu", "" ], [ "Zhu", "Kunlun", "" ], [ "Tu", "Haoqin", "" ], [ "Cheng", "Zirui", "" ], [ "Lin", "Guanyu", "" ], [ "You", "Jiaxuan", "" ] ]
The evolution of artificial intelligence (AI) has profoundly impacted human society, driving significant advancements in multiple sectors. Yet, the escalating demands on AI have highlighted the limitations of AI's current offerings, catalyzing a movement towards Artificial General Intelligence (AGI). AGI, distinguished by its ability to execute diverse real-world tasks with efficiency and effectiveness comparable to human intelligence, reflects a paramount milestone in AI evolution. While existing works have summarized specific recent advancements of AI, they lack a comprehensive discussion of AGI's definitions, goals, and developmental trajectories. Different from existing survey papers, this paper delves into the pivotal questions of our proximity to AGI and the strategies necessary for its realization through extensive surveys, discussions, and original perspectives. We start by articulating the requisite capability frameworks for AGI, integrating the internal, interface, and system dimensions. As the realization of AGI requires more advanced capabilities and adherence to stringent constraints, we further discuss necessary AGI alignment technologies to harmonize these factors. Notably, we emphasize the importance of approaching AGI responsibly by first defining the key levels of AGI progression, followed by the evaluation framework that situates the status quo, and finally giving our roadmap of how to reach the pinnacle of AGI. Moreover, to give tangible insights into the ubiquitous impact of the integration of AI, we outline existing challenges and potential pathways toward AGI in multiple domains. In sum, serving as a pioneering exploration into the current state and future trajectory of AGI, this paper aims to foster a collective comprehension and catalyze broader public discussions among researchers and practitioners on AGI.
2202.13072
Wentao Zhu
Wentao Zhu, Hang Shang, Tingxun Lv, Chao Liao, Sen Yang, Ji Liu
Adversarial Contrastive Self-Supervised Learning
8 pages, 2 figures
null
null
null
cs.CV cs.AI cs.LG cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recently, learning from vast unlabeled data, especially self-supervised learning, has been emerging and has attracted widespread attention. Self-supervised learning followed by supervised fine-tuning on a few labeled examples can significantly improve label efficiency and outperform standard supervised training using fully annotated data. In this work, we present a novel self-supervised deep learning paradigm based on online hard negative pair mining. Specifically, we design a student-teacher network to generate multiple views of the data for self-supervised learning and integrate hard negative pair mining into the training. We then derive a new triplet-like loss considering both positive sample pairs and mined hard negative sample pairs. Extensive experiments demonstrate the effectiveness of the proposed method and its components on ILSVRC-2012.
[ { "created": "Sat, 26 Feb 2022 05:57:45 GMT", "version": "v1" } ]
2022-03-01
[ [ "Zhu", "Wentao", "" ], [ "Shang", "Hang", "" ], [ "Lv", "Tingxun", "" ], [ "Liao", "Chao", "" ], [ "Yang", "Sen", "" ], [ "Liu", "Ji", "" ] ]
Recently, learning from vast unlabeled data, especially self-supervised learning, has been emerging and has attracted widespread attention. Self-supervised learning followed by supervised fine-tuning on a few labeled examples can significantly improve label efficiency and outperform standard supervised training using fully annotated data. In this work, we present a novel self-supervised deep learning paradigm based on online hard negative pair mining. Specifically, we design a student-teacher network to generate multiple views of the data for self-supervised learning and integrate hard negative pair mining into the training. We then derive a new triplet-like loss considering both positive sample pairs and mined hard negative sample pairs. Extensive experiments demonstrate the effectiveness of the proposed method and its components on ILSVRC-2012.
2402.01732
Kate Glazko
Kate Glazko, Yusuf Mohammed, Ben Kosa, Venkatesh Potluri, Jennifer Mankoff
Identifying and Improving Disability Bias in GPT-Based Resume Screening
null
null
10.1145/3630106.3658933
null
cs.CY cs.AI
http://creativecommons.org/licenses/by/4.0/
As Generative AI rises in adoption, its use has expanded to include domains such as hiring and recruiting. However, without examining the potential for bias, this may negatively impact marginalized populations, including people with disabilities. To address this important concern, we present a resume audit study, in which we ask ChatGPT (specifically, GPT-4) to rank a resume against the same resume enhanced with an additional leadership award, scholarship, panel presentation, and membership that are disability related. We find that GPT-4 exhibits prejudice towards these enhanced CVs. Further, we show that this prejudice can be quantifiably reduced by training a custom GPT on principles of DEI and disability justice. Our study also includes a unique qualitative analysis of the types of direct and indirect ableism GPT-4 uses to justify its biased decisions, and suggests directions for additional bias mitigation work. Additionally, since these justifications are presumably drawn from training data containing real-world biased statements made by humans, our analysis suggests additional avenues for understanding and addressing human bias.
[ { "created": "Sun, 28 Jan 2024 17:04:59 GMT", "version": "v1" }, { "created": "Wed, 22 May 2024 19:15:18 GMT", "version": "v2" } ]
2024-05-24
[ [ "Glazko", "Kate", "" ], [ "Mohammed", "Yusuf", "" ], [ "Kosa", "Ben", "" ], [ "Potluri", "Venkatesh", "" ], [ "Mankoff", "Jennifer", "" ] ]
As Generative AI rises in adoption, its use has expanded to include domains such as hiring and recruiting. However, without examining the potential for bias, this may negatively impact marginalized populations, including people with disabilities. To address this important concern, we present a resume audit study, in which we ask ChatGPT (specifically, GPT-4) to rank a resume against the same resume enhanced with an additional leadership award, scholarship, panel presentation, and membership that are disability related. We find that GPT-4 exhibits prejudice towards these enhanced CVs. Further, we show that this prejudice can be quantifiably reduced by training a custom GPT on principles of DEI and disability justice. Our study also includes a unique qualitative analysis of the types of direct and indirect ableism GPT-4 uses to justify its biased decisions, and suggests directions for additional bias mitigation work. Additionally, since these justifications are presumably drawn from training data containing real-world biased statements made by humans, our analysis suggests additional avenues for understanding and addressing human bias.
1201.3981
Souvik Sengupta
Souvik Sengupta, Saurabh Pal, Nilanjan Banerjee
A comparison algorithm to check LTSA Layer 1 and SCORM compliance in e-Learning sites
null
null
null
null
cs.OH
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The success of e-Learning largely depends on the impact of its multimedia-aided learning content on the learner over hypermedia. e-Learning portals with different proportions of multimedia elements have different impacts on the learner, as there is a lack of standardization. Learning Technology System Architecture (LTSA) Layer 1 deals with the effect of the environment on the learner. From an information technology perspective, it specifies learner interaction from the environment to the learner via multimedia content. The Sharable Content Object Reference Model (SCORM) is a collection of standards and specifications for the content of web-based e-learning, and specifies how a JavaScript API can be used to integrate content development. In this paper, we examine the design features of the interactive multimedia components of learning packages by creating an algorithm that gives a comparative study of the multimedia components used by different learning packages. The resulting graph output helps us analyze to what extent any LMS complies with LTSA Layer 1 and the SCORM specification.
[ { "created": "Thu, 19 Jan 2012 07:10:52 GMT", "version": "v1" } ]
2012-01-20
[ [ "Sengupta", "Souvik", "" ], [ "Pal", "Saurabh", "" ], [ "Banerjee", "Nilanjan", "" ] ]
The success of e-Learning largely depends on the impact of its multimedia-aided learning content on the learner over hypermedia. e-Learning portals with different proportions of multimedia elements have different impacts on the learner, as there is a lack of standardization. Learning Technology System Architecture (LTSA) Layer 1 deals with the effect of the environment on the learner. From an information technology perspective, it specifies learner interaction from the environment to the learner via multimedia content. The Sharable Content Object Reference Model (SCORM) is a collection of standards and specifications for the content of web-based e-learning, and specifies how a JavaScript API can be used to integrate content development. In this paper, we examine the design features of the interactive multimedia components of learning packages by creating an algorithm that gives a comparative study of the multimedia components used by different learning packages. The resulting graph output helps us analyze to what extent any LMS complies with LTSA Layer 1 and the SCORM specification.
2405.16098
Mohammad Rostami
Zizhao Hu and Mohammad Rostami
Lateralization MLP: A Simple Brain-inspired Architecture for Diffusion
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
The Transformer architecture has dominated machine learning in a wide range of tasks. The specific characteristic of this architecture is an expensive scaled dot-product attention mechanism that models the inter-token interactions, which is known to be the reason behind its success. However, such a mechanism does not have a direct parallel in the human brain, which raises the question of whether the scaled dot-product is necessary for intelligence with strong expressive power. Inspired by the lateralization of the human brain, we propose a simple but effective new architecture called the Lateralization MLP (L-MLP). Stacking L-MLP blocks can generate complex architectures. Each L-MLP block is based on a multi-layer perceptron (MLP) that permutes data dimensions, processes each dimension in parallel, merges them, and finally passes through a joint MLP. We discover that this specific design outperforms other MLP variants and performs comparably to a transformer-based architecture in the challenging diffusion task while being highly efficient. We conduct experiments using text-to-image generation tasks to demonstrate the effectiveness and efficiency of L-MLP. Further, we look into the model behavior and discover a connection to the function of the human brain. Our code is publicly available: \url{https://github.com/zizhao-hu/L-MLP}
[ { "created": "Sat, 25 May 2024 07:10:02 GMT", "version": "v1" } ]
2024-05-28
[ [ "Hu", "Zizhao", "" ], [ "Rostami", "Mohammad", "" ] ]
The Transformer architecture has dominated machine learning in a wide range of tasks. The specific characteristic of this architecture is an expensive scaled dot-product attention mechanism that models the inter-token interactions, which is known to be the reason behind its success. However, such a mechanism does not have a direct parallel in the human brain, which raises the question of whether the scaled dot-product is necessary for intelligence with strong expressive power. Inspired by the lateralization of the human brain, we propose a simple but effective new architecture called the Lateralization MLP (L-MLP). Stacking L-MLP blocks can generate complex architectures. Each L-MLP block is based on a multi-layer perceptron (MLP) that permutes data dimensions, processes each dimension in parallel, merges them, and finally passes through a joint MLP. We discover that this specific design outperforms other MLP variants and performs comparably to a transformer-based architecture in the challenging diffusion task while being highly efficient. We conduct experiments using text-to-image generation tasks to demonstrate the effectiveness and efficiency of L-MLP. Further, we look into the model behavior and discover a connection to the function of the human brain. Our code is publicly available: \url{https://github.com/zizhao-hu/L-MLP}
2006.03800
Hanke Chen
Hanke Chen
Extracting Cellular Location of Human Proteins Using Deep Learning
5 page, 10 figures
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Understanding and extracting the patterns of microscopy images has been a major challenge in the biomedical field. Although trained scientists can locate the proteins of interest within a human cell, this procedure is not efficient and accurate enough to process a large amount of data, and it often leads to bias. To resolve this problem, we attempted to create an automatic image classifier using Machine Learning to locate human proteins with higher speed and accuracy than human beings. We implemented a Convolutional Neural Network classifier with residual and squeeze-and-excitation layers to locate given proteins of any type in a subcellular structure. After training the model using a series of techniques, it can locate thousands of proteins in 27 different human cell types across 28 subcellular locations, far exceeding historical approaches. The model can classify 4,500 images per minute with an accuracy of 63.07%, surpassing human performance in both accuracy (by 35%) and speed. Because our system can be applied to different cell types, it opens a new avenue of understanding in the biomedical field. From the locational information of human proteins, doctors can more easily detect cells' abnormal behaviors, including viral infection, pathogen invasion, and malignant tumor development. Given that the amount of data generated by experiments is greater than humans can analyze, the model cuts down the human resources and time needed to analyze data. Moreover, this locational information can be used in different scenarios such as subcellular engineering, medical care, and etiology inspection.
[ { "created": "Sat, 6 Jun 2020 07:15:11 GMT", "version": "v1" } ]
2020-06-09
[ [ "Chen", "Hanke", "" ] ]
Understanding and extracting the patterns of microscopy images has been a major challenge in the biomedical field. Although trained scientists can locate the proteins of interest within a human cell, this procedure is not efficient and accurate enough to process a large amount of data, and it often leads to bias. To resolve this problem, we attempted to create an automatic image classifier using Machine Learning to locate human proteins with higher speed and accuracy than human beings. We implemented a Convolutional Neural Network classifier with residual and squeeze-and-excitation layers to locate given proteins of any type in a subcellular structure. After training the model using a series of techniques, it can locate thousands of proteins in 27 different human cell types across 28 subcellular locations, far exceeding historical approaches. The model can classify 4,500 images per minute with an accuracy of 63.07%, surpassing human performance in both accuracy (by 35%) and speed. Because our system can be applied to different cell types, it opens a new avenue of understanding in the biomedical field. From the locational information of human proteins, doctors can more easily detect cells' abnormal behaviors, including viral infection, pathogen invasion, and malignant tumor development. Given that the amount of data generated by experiments is greater than humans can analyze, the model cuts down the human resources and time needed to analyze data. Moreover, this locational information can be used in different scenarios such as subcellular engineering, medical care, and etiology inspection.
2202.12450
Wenrui Zhang
Wenrui Zhang, Shijia Geng, Zhaoji Fu, Linlin Zheng, Chenyang Jiang, Shenda Hong
MetaVA: Curriculum Meta-learning and Pre-fine-tuning of Deep Neural Networks for Detecting Ventricular Arrhythmias based on ECGs
null
null
null
null
cs.LG cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Ventricular arrhythmias (VA) are the main causes of sudden cardiac death. Developing machine learning methods for detecting VA based on electrocardiograms (ECGs) can help save people's lives. However, developing such machine learning models for ECGs is challenging because of the following: 1) group-level diversity from different subjects and 2) individual-level diversity from different moments of a single subject. In this study, we aim to solve these problems in the pre-training and fine-tuning stages. For the pre-training stage, we propose a novel model-agnostic meta-learning (MAML) with curriculum learning (CL) method to solve group-level diversity. MAML is expected to better transfer the knowledge from a large dataset and use only a few recordings to quickly adapt the model to a new person. CL is supposed to further improve MAML by meta-learning from easy to difficult tasks. For the fine-tuning stage, we propose improved pre-fine-tuning to solve individual-level diversity. We conduct experiments using a combination of three publicly available ECG datasets. The results show that our method outperforms the compared methods in terms of all evaluation metrics. Ablation studies show that MAML and CL could help the model perform more evenly, and pre-fine-tuning could better fit the model to the training data.
[ { "created": "Fri, 25 Feb 2022 01:26:19 GMT", "version": "v1" }, { "created": "Tue, 1 Mar 2022 02:05:59 GMT", "version": "v2" } ]
2022-03-02
[ [ "Zhang", "Wenrui", "" ], [ "Geng", "Shijia", "" ], [ "Fu", "Zhaoji", "" ], [ "Zheng", "Linlin", "" ], [ "Jiang", "Chenyang", "" ], [ "Hong", "Shenda", "" ] ]
Ventricular arrhythmias (VA) are the main causes of sudden cardiac death. Developing machine learning methods for detecting VA based on electrocardiograms (ECGs) can help save people's lives. However, developing such machine learning models for ECGs is challenging because of the following: 1) group-level diversity from different subjects and 2) individual-level diversity from different moments of a single subject. In this study, we aim to solve these problems in the pre-training and fine-tuning stages. For the pre-training stage, we propose a novel model-agnostic meta-learning (MAML) with curriculum learning (CL) method to solve group-level diversity. MAML is expected to better transfer the knowledge from a large dataset and use only a few recordings to quickly adapt the model to a new person. CL is supposed to further improve MAML by meta-learning from easy to difficult tasks. For the fine-tuning stage, we propose improved pre-fine-tuning to solve individual-level diversity. We conduct experiments using a combination of three publicly available ECG datasets. The results show that our method outperforms the compared methods in terms of all evaluation metrics. Ablation studies show that MAML and CL could help the model perform more evenly, and pre-fine-tuning could better fit the model to the training data.
1703.06813
Yakup Kutlu
Kadir Tohma, \.Ipek Abas{\i}kele\c{s} Turgut, Cuma Celal Korkmaz, Yakup Kutlu
Performance Evaluation of Mobile Base Station under different Network Sizes on Cluster-Based Wireless Sensor Networks
Natural and Engineering Sciences (NESciences)
null
null
null
cs.NI
http://creativecommons.org/licenses/by-nc-sa/4.0/
The position of the base station (BS) in wireless sensor networks (WSNs) has a significant impact on network lifetime. This paper suggests a mobile BS positioning algorithm for cluster-based WSNs, which considers both the location and the remaining energy level of the cluster heads in the network, and evaluates the performance of the algorithm under different network sizes, including 100m x 100m, 200m x 200m and 300m x 300m. Simulations are conducted using OMNeT++, and the proposed method is compared with two static BS positions, central and external, on the HEED protocol. The results show that the mobile BS performs better than both the central and external BS positions under all network sizes. Besides, the performance difference between the proposed method and the others increases as the size of the network increases, which demonstrates that the proposed mobile BS positioning also provides scalability.
[ { "created": "Mon, 20 Mar 2017 15:58:04 GMT", "version": "v1" } ]
2017-03-21
[ [ "Tohma", "Kadir", "" ], [ "Turgut", "İpek Abasıkeleş", "" ], [ "Korkmaz", "Cuma Celal", "" ], [ "Kutlu", "Yakup", "" ] ]
The position of the base station (BS) in wireless sensor networks (WSNs) has a significant impact on network lifetime. This paper suggests a mobile BS positioning algorithm for cluster-based WSNs, which considers both the location and the remaining energy level of the cluster heads in the network, and evaluates the performance of the algorithm under different network sizes, including 100m x 100m, 200m x 200m and 300m x 300m. Simulations are conducted using OMNeT++, and the proposed method is compared with two different static BS positions, central and external, on the HEED protocol. The results show that the mobile BS performs better than both the central and external BS positions under all network sizes. Besides, the performance difference between the proposed method and the others increases as the size of the network increases, which demonstrates that the proposed mobile BS positioning also provides scalability.
2207.03574
Chawin Sitawarin
Chawin Sitawarin, Zachary Golan-Strieb, David Wagner
Demystifying the Adversarial Robustness of Random Transformation Defenses
ICML 2022 (short presentation), AAAI 2022 AdvML Workshop (best paper, oral presentation)
null
null
null
cs.CR cs.AI cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Neural networks' lack of robustness against attacks raises concerns in security-sensitive settings such as autonomous vehicles. While many countermeasures may look promising, only a few withstand rigorous evaluation. Defenses using random transformations (RT) have shown impressive results, particularly BaRT (Raff et al., 2019) on ImageNet. However, this type of defense has not been rigorously evaluated, leaving its robustness properties poorly understood. Their stochastic properties make evaluation more challenging and render many proposed attacks on deterministic models inapplicable. First, we show that the BPDA attack (Athalye et al., 2018a) used in BaRT's evaluation is ineffective and likely overestimates its robustness. We then attempt to construct the strongest possible RT defense through the informed selection of transformations and Bayesian optimization for tuning their parameters. Furthermore, we create the strongest possible attack to evaluate our RT defense. Our new attack vastly outperforms the baseline, reducing the accuracy by 83% compared to the 19% reduction by the commonly used EoT attack ($4.3\times$ improvement). Our result indicates that the RT defense on the Imagenette dataset (a ten-class subset of ImageNet) is not robust against adversarial examples. Extending the study further, we use our new attack to adversarially train RT defense (called AdvRT), resulting in a large robustness gain. Code is available at https://github.com/wagner-group/demystify-random-transform.
[ { "created": "Sat, 18 Jun 2022 04:14:38 GMT", "version": "v1" }, { "created": "Fri, 15 Jul 2022 10:19:22 GMT", "version": "v2" } ]
2022-07-18
[ [ "Sitawarin", "Chawin", "" ], [ "Golan-Strieb", "Zachary", "" ], [ "Wagner", "David", "" ] ]
Neural networks' lack of robustness against attacks raises concerns in security-sensitive settings such as autonomous vehicles. While many countermeasures may look promising, only a few withstand rigorous evaluation. Defenses using random transformations (RT) have shown impressive results, particularly BaRT (Raff et al., 2019) on ImageNet. However, this type of defense has not been rigorously evaluated, leaving its robustness properties poorly understood. Their stochastic properties make evaluation more challenging and render many proposed attacks on deterministic models inapplicable. First, we show that the BPDA attack (Athalye et al., 2018a) used in BaRT's evaluation is ineffective and likely overestimates its robustness. We then attempt to construct the strongest possible RT defense through the informed selection of transformations and Bayesian optimization for tuning their parameters. Furthermore, we create the strongest possible attack to evaluate our RT defense. Our new attack vastly outperforms the baseline, reducing the accuracy by 83% compared to the 19% reduction by the commonly used EoT attack ($4.3\times$ improvement). Our result indicates that the RT defense on the Imagenette dataset (a ten-class subset of ImageNet) is not robust against adversarial examples. Extending the study further, we use our new attack to adversarially train RT defense (called AdvRT), resulting in a large robustness gain. Code is available at https://github.com/wagner-group/demystify-random-transform.
2208.02973
Junde Wu
Junde Wu, Yu Zhang, Rao Fu, Yuanpei Liu, Jing Gao
An Efficient Person Clustering Algorithm for Open Checkout-free Groceries
European Conference on Computer Vision (ECCV) 2022
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
An open checkout-free grocery is a grocery store where customers never have to wait in line to check out. Developing such a system is not trivial, since it faces the challenge of recognizing a dynamic and massive flow of people. In particular, a clustering method that can efficiently assign each snapshot to the corresponding customer is essential for the system. In order to address the unique challenges of the open checkout-free grocery, we propose an efficient and effective person clustering method. Specifically, we first propose a Crowded Sub-Graph (CSG) to localize the relationship among massive and continuous data streams. CSG is constructed by the proposed Pick-Link-Weight (PLW) strategy, which \textbf{picks} the nodes based on time-space information, \textbf{links} the nodes via trajectory information, and \textbf{weighs} the links by the proposed von Mises-Fisher (vMF) similarity metric. Then, to ensure that the method adapts to the dynamic and unseen person flow, we propose a Graph Convolutional Network (GCN) with a simple Nearest Neighbor (NN) strategy to accurately cluster the instances of CSG. GCN is adopted to project the features into a low-dimensional separable space, and NN is able to quickly produce a result in this space under dynamic person flow. The experimental results show that the proposed method outperforms alternative algorithms in this scenario. In practice, the whole system has been implemented and deployed in several real-world open checkout-free groceries.
[ { "created": "Fri, 5 Aug 2022 03:48:07 GMT", "version": "v1" }, { "created": "Sun, 4 Sep 2022 09:05:42 GMT", "version": "v2" }, { "created": "Sun, 18 Sep 2022 04:39:21 GMT", "version": "v3" } ]
2022-09-20
[ [ "Wu", "Junde", "" ], [ "Zhang", "Yu", "" ], [ "Fu", "Rao", "" ], [ "Liu", "Yuanpei", "" ], [ "Gao", "Jing", "" ] ]
An open checkout-free grocery is a grocery store where customers never have to wait in line to check out. Developing such a system is not trivial, since it faces the challenge of recognizing a dynamic and massive flow of people. In particular, a clustering method that can efficiently assign each snapshot to the corresponding customer is essential for the system. In order to address the unique challenges of the open checkout-free grocery, we propose an efficient and effective person clustering method. Specifically, we first propose a Crowded Sub-Graph (CSG) to localize the relationship among massive and continuous data streams. CSG is constructed by the proposed Pick-Link-Weight (PLW) strategy, which \textbf{picks} the nodes based on time-space information, \textbf{links} the nodes via trajectory information, and \textbf{weighs} the links by the proposed von Mises-Fisher (vMF) similarity metric. Then, to ensure that the method adapts to the dynamic and unseen person flow, we propose a Graph Convolutional Network (GCN) with a simple Nearest Neighbor (NN) strategy to accurately cluster the instances of CSG. GCN is adopted to project the features into a low-dimensional separable space, and NN is able to quickly produce a result in this space under dynamic person flow. The experimental results show that the proposed method outperforms alternative algorithms in this scenario. In practice, the whole system has been implemented and deployed in several real-world open checkout-free groceries.
2003.03394
Miguel D. Bustamante
Ikram Ullah, Umar Hayat and Miguel D. Bustamante
Image Encryption Using Elliptic Curves and Rossby/Drift Wave Triads
Accepted and published version (Entropy 2020, 22, 454)
Entropy 2020, 22, 454
10.3390/e22040454
null
cs.CR math.AG physics.ao-ph
http://creativecommons.org/licenses/by/4.0/
We propose an image encryption scheme based on quasi-resonant Rossby/drift wave triads (related to elliptic surfaces) and Mordell elliptic curves (MECs). By defining a total order on quasi-resonant triads, at a first stage we construct quasi-resonant triads using auxiliary parameters of elliptic surfaces in order to generate pseudo-random numbers. At a second stage, we employ an MEC to construct a dynamic substitution box (S-box) for the plain image. The generated pseudo-random numbers and S-box are used to provide diffusion and confusion, respectively, in the tested image. We test the proposed scheme against well-known attacks by encrypting all gray images taken from the USC-SIPI image database. Our experimental results indicate the high security of the newly developed scheme. Finally, via extensive comparisons we show that the new scheme outperforms other popular schemes.
[ { "created": "Fri, 6 Mar 2020 19:02:55 GMT", "version": "v1" }, { "created": "Sun, 10 May 2020 12:45:38 GMT", "version": "v2" } ]
2020-05-12
[ [ "Ullah", "Ikram", "" ], [ "Hayat", "Umar", "" ], [ "Bustamante", "Miguel D.", "" ] ]
We propose an image encryption scheme based on quasi-resonant Rossby/drift wave triads (related to elliptic surfaces) and Mordell elliptic curves (MECs). By defining a total order on quasi-resonant triads, at a first stage we construct quasi-resonant triads using auxiliary parameters of elliptic surfaces in order to generate pseudo-random numbers. At a second stage, we employ an MEC to construct a dynamic substitution box (S-box) for the plain image. The generated pseudo-random numbers and S-box are used to provide diffusion and confusion, respectively, in the tested image. We test the proposed scheme against well-known attacks by encrypting all gray images taken from the USC-SIPI image database. Our experimental results indicate the high security of the newly developed scheme. Finally, via extensive comparisons we show that the new scheme outperforms other popular schemes.
2012.00993
Haonan Huang
Haonan Huang, Naiyao Liang, Wei Yan, Zuyuan Yang, Weijun Sun
Partially Shared Semi-supervised Deep Matrix Factorization with Multi-view Data
null
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
Since many real-world data can be described from multiple views, multi-view learning has attracted considerable attention. Various methods have been proposed and successfully applied to multi-view learning, typically based on matrix factorization models. Recently, matrix factorization has been extended to deep structures to exploit the hierarchical information of multi-view data, but view-specific features and label information are seldom considered. To address these concerns, we present a partially shared semi-supervised deep matrix factorization model (PSDMF). By integrating the partially shared deep decomposition structure, graph regularization and the semi-supervised regression model, PSDMF can learn a compact and discriminative representation by eliminating the effects of uncorrelated information. In addition, we develop an efficient iterative updating algorithm for PSDMF. Extensive experiments on five benchmark datasets demonstrate that PSDMF achieves better performance than state-of-the-art multi-view learning approaches. The MATLAB source code is available at https://github.com/libertyhhn/PartiallySharedDMF.
[ { "created": "Wed, 2 Dec 2020 06:59:41 GMT", "version": "v1" } ]
2020-12-03
[ [ "Huang", "Haonan", "" ], [ "Liang", "Naiyao", "" ], [ "Yan", "Wei", "" ], [ "Yang", "Zuyuan", "" ], [ "Sun", "Weijun", "" ] ]
Since many real-world data can be described from multiple views, multi-view learning has attracted considerable attention. Various methods have been proposed and successfully applied to multi-view learning, typically based on matrix factorization models. Recently, matrix factorization has been extended to deep structures to exploit the hierarchical information of multi-view data, but view-specific features and label information are seldom considered. To address these concerns, we present a partially shared semi-supervised deep matrix factorization model (PSDMF). By integrating the partially shared deep decomposition structure, graph regularization and the semi-supervised regression model, PSDMF can learn a compact and discriminative representation by eliminating the effects of uncorrelated information. In addition, we develop an efficient iterative updating algorithm for PSDMF. Extensive experiments on five benchmark datasets demonstrate that PSDMF achieves better performance than state-of-the-art multi-view learning approaches. The MATLAB source code is available at https://github.com/libertyhhn/PartiallySharedDMF.
1908.04964
Anbang Yao
Jiahui Zhang, Dawei Sun, Zixin Luo, Anbang Yao, Lei Zhou, Tianwei Shen, Yurong Chen, Long Quan, Hongen Liao
Learning Two-View Correspondences and Geometry Using Order-Aware Network
Accepted to ICCV 2019, and Winner solution to both tracks of CVPR IMW 2019 Challenge. Code will be available soon at https://github.com/zjhthu/OANet.git
null
null
null
cs.CV cs.CG cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Establishing correspondences between two images requires both local and global spatial context. Given putative correspondences of feature points in two views, in this paper, we propose Order-Aware Network, which infers the probabilities of correspondences being inliers and regresses the relative pose encoded by the essential matrix. Specifically, this proposed network is built hierarchically and comprises three novel operations. First, to capture the local context of sparse correspondences, the network clusters unordered input correspondences by learning a soft assignment matrix. These clusters are in a canonical order and invariant to input permutations. Next, the clusters are spatially correlated to form the global context of correspondences. After that, the context-encoded clusters are recovered back to the original size through a proposed upsampling operator. We experiment intensively on both outdoor and indoor datasets. The accuracy of the two-view geometry and correspondences is significantly improved over the state of the art. Code will be available at https://github.com/zjhthu/OANet.git.
[ { "created": "Wed, 14 Aug 2019 05:42:18 GMT", "version": "v1" } ]
2019-08-15
[ [ "Zhang", "Jiahui", "" ], [ "Sun", "Dawei", "" ], [ "Luo", "Zixin", "" ], [ "Yao", "Anbang", "" ], [ "Zhou", "Lei", "" ], [ "Shen", "Tianwei", "" ], [ "Chen", "Yurong", "" ], [ "Quan", "Long", "" ], [ "Liao", "Hongen", "" ] ]
Establishing correspondences between two images requires both local and global spatial context. Given putative correspondences of feature points in two views, in this paper, we propose Order-Aware Network, which infers the probabilities of correspondences being inliers and regresses the relative pose encoded by the essential matrix. Specifically, this proposed network is built hierarchically and comprises three novel operations. First, to capture the local context of sparse correspondences, the network clusters unordered input correspondences by learning a soft assignment matrix. These clusters are in a canonical order and invariant to input permutations. Next, the clusters are spatially correlated to form the global context of correspondences. After that, the context-encoded clusters are recovered back to the original size through a proposed upsampling operator. We experiment intensively on both outdoor and indoor datasets. The accuracy of the two-view geometry and correspondences is significantly improved over the state of the art. Code will be available at https://github.com/zjhthu/OANet.git.
1101.0640
Chandra Nair
Chandra Nair
A note on outer bounds for broadcast channel
This was presented in the International Zurich Seminar 2010. This is just for a documented proof of the result
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this note we establish two facts concerning the so-called {\em New-Jersey} outer bound. First, we show that this outer bound is equivalent to a much simpler {\em computable} region; secondly, we show that in the absence of private information this bound is exactly the same as the $UV$-outer bound.
[ { "created": "Tue, 4 Jan 2011 02:28:37 GMT", "version": "v1" } ]
2011-01-05
[ [ "Nair", "Chandra", "" ] ]
In this note we establish two facts concerning the so-called {\em New-Jersey} outer bound. First, we show that this outer bound is equivalent to a much simpler {\em computable} region; secondly, we show that in the absence of private information this bound is exactly the same as the $UV$-outer bound.
2108.04607
Liping Wang
Liping Wang, Fenyu Hu, Shu Wu, Liang Wang
Fully Hyperbolic Graph Convolution Network for Recommendation
Accepted by CIKM 2021 short paper track
null
null
null
cs.IR
http://creativecommons.org/licenses/by/4.0/
Recently, Graph Convolution Network (GCN) based methods have achieved outstanding performance for recommendation. These methods embed users and items in Euclidean space, and perform graph convolution on user-item interaction graphs. However, real-world datasets usually exhibit tree-like hierarchical structures, which make Euclidean space less effective in capturing user-item relationships. In contrast, hyperbolic space, as a continuous analogue of a tree graph, provides a promising alternative. In this paper, we propose a fully hyperbolic GCN model for recommendation, where all operations are performed in hyperbolic space. Utilizing the advantage of hyperbolic space, our method is able to embed users/items with less distortion and capture user-item interactions more accurately. Extensive experiments on public benchmark datasets show that our method outperforms both Euclidean and hyperbolic counterparts and requires far lower embedding dimensionality to achieve comparable performance.
[ { "created": "Tue, 10 Aug 2021 11:26:42 GMT", "version": "v1" } ]
2021-08-11
[ [ "Wang", "Liping", "" ], [ "Hu", "Fenyu", "" ], [ "Wu", "Shu", "" ], [ "Wang", "Liang", "" ] ]
Recently, Graph Convolution Network (GCN) based methods have achieved outstanding performance for recommendation. These methods embed users and items in Euclidean space, and perform graph convolution on user-item interaction graphs. However, real-world datasets usually exhibit tree-like hierarchical structures, which make Euclidean space less effective in capturing user-item relationships. In contrast, hyperbolic space, as a continuous analogue of a tree graph, provides a promising alternative. In this paper, we propose a fully hyperbolic GCN model for recommendation, where all operations are performed in hyperbolic space. Utilizing the advantage of hyperbolic space, our method is able to embed users/items with less distortion and capture user-item interactions more accurately. Extensive experiments on public benchmark datasets show that our method outperforms both Euclidean and hyperbolic counterparts and requires far lower embedding dimensionality to achieve comparable performance.
2407.15614
Ferran Maura
Ferran Maura, Miguel Casasnovas, Boris Bellalta
Experimenting with Adaptive Bitrate Algorithms for Virtual Reality Streaming over Wi-Fi
null
null
null
null
cs.NI cs.SY eess.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Interactive Virtual Reality (VR) streaming over Wi-Fi networks encounters significant challenges due to bandwidth fluctuations caused by channel contention and user mobility. Adaptive BitRate (ABR) algorithms dynamically adjust the video encoding bitrate based on the available network capacity, aiming to maximize image quality while mitigating congestion and preserving the user's Quality of Experience (QoE). In this paper, we experiment with ABR algorithms for VR streaming using Air Light VR (ALVR), an open-source VR streaming solution. We extend ALVR with a comprehensive set of metrics that provide a robust characterization of the network's state, enabling more informed bitrate adjustments. To demonstrate the utility of these performance indicators, we develop and test the Network-aware Step-wise ABR algorithm for VR streaming (NeSt-VR). Results validate the accuracy of the newly implemented network performance metrics and demonstrate NeSt-VR's video bitrate adaptation capabilities.
[ { "created": "Mon, 22 Jul 2024 13:20:47 GMT", "version": "v1" } ]
2024-07-23
[ [ "Maura", "Ferran", "" ], [ "Casasnovas", "Miguel", "" ], [ "Bellalta", "Boris", "" ] ]
Interactive Virtual Reality (VR) streaming over Wi-Fi networks encounters significant challenges due to bandwidth fluctuations caused by channel contention and user mobility. Adaptive BitRate (ABR) algorithms dynamically adjust the video encoding bitrate based on the available network capacity, aiming to maximize image quality while mitigating congestion and preserving the user's Quality of Experience (QoE). In this paper, we experiment with ABR algorithms for VR streaming using Air Light VR (ALVR), an open-source VR streaming solution. We extend ALVR with a comprehensive set of metrics that provide a robust characterization of the network's state, enabling more informed bitrate adjustments. To demonstrate the utility of these performance indicators, we develop and test the Network-aware Step-wise ABR algorithm for VR streaming (NeSt-VR). Results validate the accuracy of the newly implemented network performance metrics and demonstrate NeSt-VR's video bitrate adaptation capabilities.
2103.06018
Yi Liu
Yan Zheng, Yi Liu, Xiaofei Xie, Yepang Liu, Lei Ma, Jianye Hao, Yang Liu
Automatic Web Testing using Curiosity-Driven Reinforcement Learning
null
null
null
null
cs.SE
http://creativecommons.org/licenses/by/4.0/
Web testing has long been recognized as a notoriously difficult task. Even nowadays, web testing still heavily relies on manual efforts while automated web testing is far from achieving human-level performance. Key challenges in web testing include dynamic content update and deep bugs hiding under complicated user interactions and specific input values, which can only be triggered by certain action sequences in the huge search space. In this paper, we propose WebExplor, an automatic end-to-end web testing framework, to achieve an adaptive exploration of web applications. WebExplor adopts curiosity-driven reinforcement learning to generate high-quality action sequences (test cases) satisfying temporal logical relations. Besides, WebExplor incrementally builds an automaton during the online testing process, which provides high-level guidance to further improve the testing efficiency. We have conducted comprehensive evaluations of WebExplor on six real-world projects, a commercial SaaS web application, and performed an in-the-wild study of the top 50 web applications in the world. The results demonstrate that in most cases WebExplor can achieve a significantly higher failure detection rate, code coverage, and efficiency than existing state-of-the-art web testing techniques. WebExplor also detected 12 previously unknown failures in the commercial web application, which have been confirmed and fixed by the developers. Furthermore, our in-the-wild study further uncovered 3,466 exceptions and errors.
[ { "created": "Wed, 10 Mar 2021 12:34:36 GMT", "version": "v1" } ]
2021-03-11
[ [ "Zheng", "Yan", "" ], [ "Liu", "Yi", "" ], [ "Xie", "Xiaofei", "" ], [ "Liu", "Yepang", "" ], [ "Ma", "Lei", "" ], [ "Hao", "Jianye", "" ], [ "Liu", "Yang", "" ] ]
Web testing has long been recognized as a notoriously difficult task. Even nowadays, web testing still heavily relies on manual efforts while automated web testing is far from achieving human-level performance. Key challenges in web testing include dynamic content update and deep bugs hiding under complicated user interactions and specific input values, which can only be triggered by certain action sequences in the huge search space. In this paper, we propose WebExplor, an automatic end-to-end web testing framework, to achieve an adaptive exploration of web applications. WebExplor adopts curiosity-driven reinforcement learning to generate high-quality action sequences (test cases) satisfying temporal logical relations. Besides, WebExplor incrementally builds an automaton during the online testing process, which provides high-level guidance to further improve the testing efficiency. We have conducted comprehensive evaluations of WebExplor on six real-world projects, a commercial SaaS web application, and performed an in-the-wild study of the top 50 web applications in the world. The results demonstrate that in most cases WebExplor can achieve a significantly higher failure detection rate, code coverage, and efficiency than existing state-of-the-art web testing techniques. WebExplor also detected 12 previously unknown failures in the commercial web application, which have been confirmed and fixed by the developers. Furthermore, our in-the-wild study further uncovered 3,466 exceptions and errors.
2006.05367
Huaying Hao
Huaying Hao, Huazhu Fu, Yanwu Xu, Jianlong Yang, Fei Li, Xiulan Zhang, Jiang Liu, Yitian Zhao
Open-Narrow-Synechiae Anterior Chamber Angle Classification in AS-OCT Sequences
Accepted to MICCAI 2020
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Anterior chamber angle (ACA) classification is a key step in the diagnosis of angle-closure glaucoma in Anterior Segment Optical Coherence Tomography (AS-OCT). Existing automated analysis methods focus on a binary classification system (i.e., open angle or angle-closure) in a 2D AS-OCT slice. However, clinical diagnosis requires a more discriminating ACA three-class system (i.e., open, narrow, or synechiae angles) for the benefit of clinicians who seek to better understand the progression of the spectrum of angle-closure glaucoma types. To address this, we propose a novel sequence multi-scale aggregation deep network (SMA-Net) for open-narrow-synechiae ACA classification based on an AS-OCT sequence. In our method, a Multi-Scale Discriminative Aggregation (MSDA) block is utilized to learn the multi-scale representations at slice level, while a ConvLSTM is introduced to study the temporal dynamics of these representations at sequence level. Finally, a multi-level loss function is used to combine the slice-based and sequence-based losses. The proposed method is evaluated across two AS-OCT datasets. The experimental results show that the proposed method outperforms existing state-of-the-art methods in applicability, effectiveness, and accuracy. We believe this work to be the first attempt to classify ACAs into open, narrow, or synechiae types using AS-OCT sequences.
[ { "created": "Tue, 9 Jun 2020 16:00:00 GMT", "version": "v1" } ]
2020-06-11
[ [ "Hao", "Huaying", "" ], [ "Fu", "Huazhu", "" ], [ "Xu", "Yanwu", "" ], [ "Yang", "Jianlong", "" ], [ "Li", "Fei", "" ], [ "Zhang", "Xiulan", "" ], [ "Liu", "Jiang", "" ], [ "Zhao", "Yitian", "" ] ]
Anterior chamber angle (ACA) classification is a key step in the diagnosis of angle-closure glaucoma in Anterior Segment Optical Coherence Tomography (AS-OCT). Existing automated analysis methods focus on a binary classification system (i.e., open angle or angle-closure) in a 2D AS-OCT slice. However, clinical diagnosis requires a more discriminating ACA three-class system (i.e., open, narrow, or synechiae angles) for the benefit of clinicians who seek to better understand the progression of the spectrum of angle-closure glaucoma types. To address this, we propose a novel sequence multi-scale aggregation deep network (SMA-Net) for open-narrow-synechiae ACA classification based on an AS-OCT sequence. In our method, a Multi-Scale Discriminative Aggregation (MSDA) block is utilized to learn the multi-scale representations at slice level, while a ConvLSTM is introduced to study the temporal dynamics of these representations at sequence level. Finally, a multi-level loss function is used to combine the slice-based and sequence-based losses. The proposed method is evaluated across two AS-OCT datasets. The experimental results show that the proposed method outperforms existing state-of-the-art methods in applicability, effectiveness, and accuracy. We believe this work to be the first attempt to classify ACAs into open, narrow, or synechiae types using AS-OCT sequences.
1408.3775
Joao Marcos
Carlos Caleiro, Jo\~ao Marcos, Marco Volpe
Bivalent semantics, generalized compositionality and analytic classic-like tableaux for finite-valued logics
null
null
null
null
cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The paper is a contribution both to the theoretical foundations and to the actual construction of efficient automatizable proof procedures for non-classical logics. We focus here on the case of finite-valued logics, and exhibit: (i) a mechanism for producing a classic-like description of them in terms of an effective variety of bivalent semantics; (ii) a mechanism for extracting, from the bivalent semantics so obtained, uniform (classically-labeled) cut-free standard analytic tableaux with possibly branching invertible rules and paired with proof strategies designed to guarantee termination of the associated proof procedure; (iii) a mechanism to also provide, for the same logics, uniform cut-based tableau systems with linear rules. The latter tableau systems are shown to be adequate even when restricted to analytic cuts, and they are also shown to polynomially simulate truth-tables, a feature that is not enjoyed by the former standard type of tableau systems (not even in the 2-valued case). The results are based on useful generalizations of the notions of analyticity and compositionality, and illustrate a theory that applies to many other classes of non-classical logics.
[ { "created": "Sat, 16 Aug 2014 21:53:03 GMT", "version": "v1" } ]
2014-08-19
[ [ "Caleiro", "Carlos", "" ], [ "Marcos", "João", "" ], [ "Volpe", "Marco", "" ] ]
The paper is a contribution both to the theoretical foundations and to the actual construction of efficient automatizable proof procedures for non-classical logics. We focus here on the case of finite-valued logics, and exhibit: (i) a mechanism for producing a classic-like description of them in terms of an effective variety of bivalent semantics; (ii) a mechanism for extracting, from the bivalent semantics so obtained, uniform (classically-labeled) cut-free standard analytic tableaux with possibly branching invertible rules and paired with proof strategies designed to guarantee termination of the associated proof procedure; (iii) a mechanism to also provide, for the same logics, uniform cut-based tableau systems with linear rules. The latter tableau systems are shown to be adequate even when restricted to analytic cuts, and they are also shown to polynomially simulate truth-tables, a feature that is not enjoyed by the former standard type of tableau systems (not even in the 2-valued case). The results are based on useful generalizations of the notions of analyticity and compositionality, and illustrate a theory that applies to many other classes of non-classical logics.
2405.15306
Jonas Belouadi
Jonas Belouadi, Simone Paolo Ponzetto, Steffen Eger
DeTikZify: Synthesizing Graphics Programs for Scientific Figures and Sketches with TikZ
Project page: https://github.com/potamides/DeTikZify
null
null
null
cs.CL cs.CV
http://creativecommons.org/licenses/by/4.0/
Creating high-quality scientific figures can be time-consuming and challenging, even though sketching ideas on paper is relatively easy. Furthermore, recreating existing figures that are not stored in formats preserving semantic information is equally complex. To tackle this problem, we introduce DeTikZify, a novel multimodal language model that automatically synthesizes scientific figures as semantics-preserving TikZ graphics programs based on sketches and existing figures. To achieve this, we create three new datasets: DaTikZv2, the largest TikZ dataset to date, containing over 360k human-created TikZ graphics; SketchFig, a dataset that pairs hand-drawn sketches with their corresponding scientific figures; and SciCap++, a collection of diverse scientific figures and associated metadata. We train DeTikZify on SciCap++ and DaTikZv2, along with synthetically generated sketches learned from SketchFig. We also introduce an MCTS-based inference algorithm that enables DeTikZify to iteratively refine its outputs without the need for additional training. Through both automatic and human evaluation, we demonstrate that DeTikZify outperforms commercial Claude 3 and GPT-4V in synthesizing TikZ programs, with the MCTS algorithm effectively boosting its performance. We make our code, models, and datasets publicly available.
[ { "created": "Fri, 24 May 2024 07:48:35 GMT", "version": "v1" }, { "created": "Tue, 28 May 2024 06:48:58 GMT", "version": "v2" } ]
2024-05-31
[ [ "Belouadi", "Jonas", "" ], [ "Ponzetto", "Simone Paolo", "" ], [ "Eger", "Steffen", "" ] ]
Creating high-quality scientific figures can be time-consuming and challenging, even though sketching ideas on paper is relatively easy. Furthermore, recreating existing figures that are not stored in formats preserving semantic information is equally complex. To tackle this problem, we introduce DeTikZify, a novel multimodal language model that automatically synthesizes scientific figures as semantics-preserving TikZ graphics programs based on sketches and existing figures. To achieve this, we create three new datasets: DaTikZv2, the largest TikZ dataset to date, containing over 360k human-created TikZ graphics; SketchFig, a dataset that pairs hand-drawn sketches with their corresponding scientific figures; and SciCap++, a collection of diverse scientific figures and associated metadata. We train DeTikZify on SciCap++ and DaTikZv2, along with synthetically generated sketches learned from SketchFig. We also introduce an MCTS-based inference algorithm that enables DeTikZify to iteratively refine its outputs without the need for additional training. Through both automatic and human evaluation, we demonstrate that DeTikZify outperforms commercial Claude 3 and GPT-4V in synthesizing TikZ programs, with the MCTS algorithm effectively boosting its performance. We make our code, models, and datasets publicly available.
2206.11076
Sam Clarke
Sam Clarke and Jess Whittlestone
A Survey of the Potential Long-term Impacts of AI
9 pages, to be published in Proceedings of 2022 AAAI/ACM Conference on AI, Ethics, and Society
null
10.1145/3514094.3534131
null
cs.CY
http://creativecommons.org/licenses/by/4.0/
It is increasingly recognised that advances in artificial intelligence could have large and long-lasting impacts on society. However, what form those impacts will take, just how large and long-lasting they will be, and whether they will ultimately be positive or negative for humanity, is far from clear. Based on surveying literature on the societal impacts of AI, we identify and discuss five potential long-term impacts of AI: how AI could lead to long-term changes in science, cooperation, power, epistemics, and values. We review the state of existing research in each of these areas and highlight priority questions for future research.
[ { "created": "Wed, 22 Jun 2022 13:42:28 GMT", "version": "v1" } ]
2022-06-23
[ [ "Clarke", "Sam", "" ], [ "Whittlestone", "Jess", "" ] ]
It is increasingly recognised that advances in artificial intelligence could have large and long-lasting impacts on society. However, what form those impacts will take, just how large and long-lasting they will be, and whether they will ultimately be positive or negative for humanity, is far from clear. Based on surveying literature on the societal impacts of AI, we identify and discuss five potential long-term impacts of AI: how AI could lead to long-term changes in science, cooperation, power, epistemics, and values. We review the state of existing research in each of these areas and highlight priority questions for future research.
2406.14402
Christian Antić
Christian Antić
Logic-based analogical proportions
null
null
null
null
cs.LO cs.DM math.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The author has recently introduced an abstract algebraic framework of analogical proportions within the general setting of universal algebra. The purpose of this paper is to lift that framework from universal algebra to the strictly more expressive setting of full first-order logic. We show that the so-obtained logic-based framework preserves all desired properties and we prove novel results in that extended setting.
[ { "created": "Thu, 20 Jun 2024 15:23:40 GMT", "version": "v1" } ]
2024-06-21
[ [ "Antić", "Christian", "" ] ]
The author has recently introduced an abstract algebraic framework of analogical proportions within the general setting of universal algebra. The purpose of this paper is to lift that framework from universal algebra to the strictly more expressive setting of full first-order logic. We show that the so-obtained logic-based framework preserves all desired properties and we prove novel results in that extended setting.
1910.04056
Saida Mahmoud
Marco Menardi, Alex Falcon, Saida S. Mohamed, Lorenzo Seidenari, Giuseppe Serra, Alberto Del Bimbo and Carlo Tasso
Text-to-Image Synthesis Based on Machine Generated Captions
null
null
null
null
cs.LG cs.CL stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Text-to-Image Synthesis refers to the process of automatically generating a photo-realistic image from a given text and is revolutionizing many real-world applications. In order to perform such a process it is necessary to exploit datasets containing captioned images, meaning that each image is associated with one (or more) captions describing it. Despite the abundance of uncaptioned image datasets, the number of captioned datasets is limited. To address this issue, in this paper we propose an approach capable of generating images from a given text using conditional GANs trained on an uncaptioned image dataset. In particular, uncaptioned images are fed to an Image Captioning Module to generate descriptions. Then, the GAN Module is trained on both the input image and the machine-generated caption. To evaluate the results, the performance of our solution is compared with the results obtained by an unconditional GAN. For the experiments, we chose to use the uncaptioned LSUN bedroom dataset. The results obtained in our study are preliminary but promising.
[ { "created": "Wed, 9 Oct 2019 15:14:09 GMT", "version": "v1" } ]
2019-10-10
[ [ "Menardi", "Marco", "" ], [ "Falcon", "Alex", "" ], [ "Mohamed", "Saida S.", "" ], [ "Seidenari", "Lorenzo", "" ], [ "Serra", "Giuseppe", "" ], [ "Del Bimbo", "Alberto", "" ], [ "Tasso", "Carlo", "" ] ]
Text-to-Image Synthesis refers to the process of automatically generating a photo-realistic image from a given text and is revolutionizing many real-world applications. In order to perform such a process it is necessary to exploit datasets containing captioned images, meaning that each image is associated with one (or more) captions describing it. Despite the abundance of uncaptioned image datasets, the number of captioned datasets is limited. To address this issue, in this paper we propose an approach capable of generating images from a given text using conditional GANs trained on an uncaptioned image dataset. In particular, uncaptioned images are fed to an Image Captioning Module to generate descriptions. Then, the GAN Module is trained on both the input image and the machine-generated caption. To evaluate the results, the performance of our solution is compared with the results obtained by an unconditional GAN. For the experiments, we chose to use the uncaptioned LSUN bedroom dataset. The results obtained in our study are preliminary but promising.
2305.18405
Yue Liu
Yue Liu, Ke Liang, Jun Xia, Sihang Zhou, Xihong Yang, Xinwang Liu, Stan Z. Li
Dink-Net: Neural Clustering on Large Graphs
19 pages, 5 figures
null
null
null
cs.LG cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep graph clustering, which aims to group the nodes of a graph into disjoint clusters with deep neural networks, has achieved promising progress in recent years. However, the existing methods fail to scale to large graphs with millions of nodes. To solve this problem, a scalable deep graph clustering method (Dink-Net) is proposed based on the ideas of dilation and shrink. Firstly, by discriminating whether nodes have been corrupted by augmentations, representations are learned in a self-supervised manner. Meanwhile, the cluster centres are initialized as learnable neural parameters. Subsequently, the clustering distribution is optimized by minimizing the proposed cluster dilation loss and cluster shrink loss in an adversarial manner. With these settings, we unify the two-step clustering, i.e., representation learning and clustering optimization, into an end-to-end framework, guiding the network to learn clustering-friendly features. Besides, Dink-Net scales well to large graphs since the designed loss functions adopt mini-batch data to optimize the clustering distribution without performance drops. Both experimental results and theoretical analyses demonstrate the superiority of our method. Compared to the runner-up, Dink-Net achieves a 9.62% NMI improvement on the ogbn-papers100M dataset with 111 million nodes and 1.6 billion edges. The source code is released at https://github.com/yueliu1999/Dink-Net. Besides, a collection (papers, codes, and datasets) of deep graph clustering is shared at https://github.com/yueliu1999/Awesome-Deep-Graph-Clustering.
[ { "created": "Sun, 28 May 2023 15:33:24 GMT", "version": "v1" }, { "created": "Wed, 31 May 2023 09:39:12 GMT", "version": "v2" }, { "created": "Fri, 14 Jul 2023 16:00:24 GMT", "version": "v3" } ]
2023-07-17
[ [ "Liu", "Yue", "" ], [ "Liang", "Ke", "" ], [ "Xia", "Jun", "" ], [ "Zhou", "Sihang", "" ], [ "Yang", "Xihong", "" ], [ "Liu", "Xinwang", "" ], [ "Li", "Stan Z.", "" ] ]
Deep graph clustering, which aims to group the nodes of a graph into disjoint clusters with deep neural networks, has achieved promising progress in recent years. However, the existing methods fail to scale to large graphs with millions of nodes. To solve this problem, a scalable deep graph clustering method (Dink-Net) is proposed based on the ideas of dilation and shrink. Firstly, by discriminating whether nodes have been corrupted by augmentations, representations are learned in a self-supervised manner. Meanwhile, the cluster centres are initialized as learnable neural parameters. Subsequently, the clustering distribution is optimized by minimizing the proposed cluster dilation loss and cluster shrink loss in an adversarial manner. With these settings, we unify the two-step clustering, i.e., representation learning and clustering optimization, into an end-to-end framework, guiding the network to learn clustering-friendly features. Besides, Dink-Net scales well to large graphs since the designed loss functions adopt mini-batch data to optimize the clustering distribution without performance drops. Both experimental results and theoretical analyses demonstrate the superiority of our method. Compared to the runner-up, Dink-Net achieves a 9.62% NMI improvement on the ogbn-papers100M dataset with 111 million nodes and 1.6 billion edges. The source code is released at https://github.com/yueliu1999/Dink-Net. Besides, a collection (papers, codes, and datasets) of deep graph clustering is shared at https://github.com/yueliu1999/Awesome-Deep-Graph-Clustering.
2310.00332
Iurii Katser
Iurii Katser, Vyacheslav Kozitsin, Igor Mozolin
MFL Data Preprocessing and CNN-based Oil Pipeline Defects Detection
9 pages, 6 figures, 5 tables, 14 references. arXiv admin note: text overlap with arXiv:2009.10163 by other authors
null
null
null
cs.LG cs.CV cs.SY eess.SY
http://creativecommons.org/licenses/by/4.0/
Recently, the application of computer vision for anomaly detection has attracted attention in several industrial fields. An important example is oil pipeline defect detection. Failure of one oil pipeline can interrupt the operation of the entire transportation system or cause a far-reaching failure. Automated defect detection could significantly decrease the inspection time and the related costs. However, there is a gap in the related literature when it comes to this task. Existing studies do not sufficiently cover Magnetic Flux Leakage data and the preprocessing techniques that allow overcoming the limitations set by the available data. This work focuses on alleviating these issues. Moreover, in doing so, we exploited recent convolutional neural network architectures and proposed robust approaches, aiming to achieve high performance on the related metrics. The proposed approaches and their applicability were verified using real-world data.
[ { "created": "Sat, 30 Sep 2023 10:37:12 GMT", "version": "v1" } ]
2023-10-03
[ [ "Katser", "Iurii", "" ], [ "Kozitsin", "Vyacheslav", "" ], [ "Mozolin", "Igor", "" ] ]
Recently, the application of computer vision for anomaly detection has attracted attention in several industrial fields. An important example is oil pipeline defect detection. Failure of one oil pipeline can interrupt the operation of the entire transportation system or cause a far-reaching failure. Automated defect detection could significantly decrease the inspection time and the related costs. However, there is a gap in the related literature when it comes to this task. Existing studies do not sufficiently cover Magnetic Flux Leakage data and the preprocessing techniques that allow overcoming the limitations set by the available data. This work focuses on alleviating these issues. Moreover, in doing so, we exploited recent convolutional neural network architectures and proposed robust approaches, aiming to achieve high performance on the related metrics. The proposed approaches and their applicability were verified using real-world data.
0806.2008
Arnaud Martin
Arnaud Martin (E3I2), Christophe Osswald (E3I2)
Generalized proportional conflict redistribution rule applied to Sonar imagery and Radar targets classification
null
Advances and Applications of DSmT for Information Fusion, Florentin Smarandache & Jean Dezert (Ed.) (2006) 289-304
null
null
cs.CV cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this chapter, we present two applications of information fusion in order to evaluate the generalized proportional conflict redistribution rule presented in the chapter \cite{Martin06a}. Most of the time, combination rules are evaluated only on simple examples. We study here different combination rules and compare them in terms of decision on real data. Indeed, in real applications, we need a reliable decision, and it is the final results that matter. Two applications are presented here: a fusion of human experts' opinions on the kind of underwater sediments depicted in sonar images, and a classifier fusion for radar target recognition.
[ { "created": "Thu, 12 Jun 2008 06:47:26 GMT", "version": "v1" } ]
2008-12-18
[ [ "Martin", "Arnaud", "", "E3I2" ], [ "Osswald", "Christophe", "", "E3I2" ] ]
In this chapter, we present two applications of information fusion in order to evaluate the generalized proportional conflict redistribution rule presented in the chapter \cite{Martin06a}. Most of the time, combination rules are evaluated only on simple examples. We study here different combination rules and compare them in terms of decision on real data. Indeed, in real applications, we need a reliable decision, and it is the final results that matter. Two applications are presented here: a fusion of human experts' opinions on the kind of underwater sediments depicted in sonar images, and a classifier fusion for radar target recognition.
1807.09951
Long Zhao
Long Zhao, Xi Peng, Yu Tian, Mubbasir Kapadia, Dimitris Metaxas
Learning to Forecast and Refine Residual Motion for Image-to-Video Generation
17 pages, 8 figures, 4 tables, accepted by ECCV 2018
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider the problem of image-to-video translation, where an input image is translated into an output video containing motions of a single object. Recent methods for such problems typically train transformation networks to generate future frames conditioned on the structure sequence. Parallel work has shown that short high-quality motions can be generated by spatiotemporal generative networks that leverage temporal knowledge from the training data. We combine the benefits of both approaches and propose a two-stage generation framework where videos are generated from structures and then refined by temporal signals. To model motions more efficiently, we train networks to learn residual motion between the current and future frames, which avoids learning motion-irrelevant details. We conduct extensive experiments on two image-to-video translation tasks: facial expression retargeting and human pose forecasting. Superior results over the state-of-the-art methods on both tasks demonstrate the effectiveness of our approach.
[ { "created": "Thu, 26 Jul 2018 04:42:58 GMT", "version": "v1" } ]
2018-07-27
[ [ "Zhao", "Long", "" ], [ "Peng", "Xi", "" ], [ "Tian", "Yu", "" ], [ "Kapadia", "Mubbasir", "" ], [ "Metaxas", "Dimitris", "" ] ]
We consider the problem of image-to-video translation, where an input image is translated into an output video containing motions of a single object. Recent methods for such problems typically train transformation networks to generate future frames conditioned on the structure sequence. Parallel work has shown that short high-quality motions can be generated by spatiotemporal generative networks that leverage temporal knowledge from the training data. We combine the benefits of both approaches and propose a two-stage generation framework where videos are generated from structures and then refined by temporal signals. To model motions more efficiently, we train networks to learn residual motion between the current and future frames, which avoids learning motion-irrelevant details. We conduct extensive experiments on two image-to-video translation tasks: facial expression retargeting and human pose forecasting. Superior results over the state-of-the-art methods on both tasks demonstrate the effectiveness of our approach.
2403.11764
Yixuan Huang
Yixuan Huang, Jie Yang, Chao-Kai Wen, Shi Jin
RIS-aided Single-frequency 3D Imaging by Exploiting Multi-view Image Correlations
16 pages, 12 figures, accepted by IEEE Transactions on Communications
null
null
null
cs.IT eess.SP math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Retrieving range information in three-dimensional (3D) radio imaging is particularly challenging due to the limited communication bandwidth and pilot resources. To address this issue, we consider a reconfigurable intelligent surface (RIS)-aided uplink communication scenario, generating multiple measurements through RIS phase adjustment. This study successfully realizes 3D single-frequency imaging by exploiting the near-field multi-view image correlations deduced from user mobility. We first highlight the significance of considering anisotropy in multi-view image formation by investigating radar cross-section properties and diffraction resolution limits. We then propose a novel model for joint multi-view 3D imaging that incorporates occlusion effects and anisotropic scattering. These factors lead to slow image support variation and smooth coefficient evolution, which are mathematically modeled as Markov processes. Based on this model, we employ the Expectation Maximization-Turbo-Generalized Approximate Message Passing algorithm for joint multi-view single-frequency 3D imaging with limited measurements. Simulation results reveal the superiority of joint multi-view imaging in terms of enhanced imaging ranges, accuracies, and anisotropy characterization compared to single-view imaging. Combining adjacent observations for joint multi-view imaging enables a reduction in the measurement overhead by 80%.
[ { "created": "Mon, 18 Mar 2024 13:22:22 GMT", "version": "v1" } ]
2024-03-19
[ [ "Huang", "Yixuan", "" ], [ "Yang", "Jie", "" ], [ "Wen", "Chao-Kai", "" ], [ "Jin", "Shi", "" ] ]
Retrieving range information in three-dimensional (3D) radio imaging is particularly challenging due to the limited communication bandwidth and pilot resources. To address this issue, we consider a reconfigurable intelligent surface (RIS)-aided uplink communication scenario, generating multiple measurements through RIS phase adjustment. This study successfully realizes 3D single-frequency imaging by exploiting the near-field multi-view image correlations deduced from user mobility. We first highlight the significance of considering anisotropy in multi-view image formation by investigating radar cross-section properties and diffraction resolution limits. We then propose a novel model for joint multi-view 3D imaging that incorporates occlusion effects and anisotropic scattering. These factors lead to slow image support variation and smooth coefficient evolution, which are mathematically modeled as Markov processes. Based on this model, we employ the Expectation Maximization-Turbo-Generalized Approximate Message Passing algorithm for joint multi-view single-frequency 3D imaging with limited measurements. Simulation results reveal the superiority of joint multi-view imaging in terms of enhanced imaging ranges, accuracies, and anisotropy characterization compared to single-view imaging. Combining adjacent observations for joint multi-view imaging enables a reduction in the measurement overhead by 80%.
2308.15989
Dian Zheng
Dian Zheng, Xiao-Ming Wu, Zuhao Liu, Jingke Meng, Wei-shi Zheng
DiffuVolume: Diffusion Model for Volume based Stereo Matching
17 pages, 11 figures
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Stereo matching is a significant part of many computer vision tasks and driving-based applications. Recently, cost volume-based methods have achieved great success by benefiting from the rich geometry information in paired images. However, the redundancy of the cost volume also interferes with model training and limits performance. To construct a more precise cost volume, we pioneeringly apply the diffusion model to stereo matching. Our method, termed DiffuVolume, considers the diffusion model as a cost volume filter, which recurrently removes redundant information from the cost volume. Two main designs make our method non-trivial. Firstly, to make the diffusion model more adaptive to stereo matching, we eschew the traditional manner of directly adding noise to the image and instead embed the diffusion model into a task-specific module. In this way, we outperform the traditional diffusion stereo matching method by a 22% EPE improvement and 240 times inference acceleration. Secondly, DiffuVolume can be easily embedded into any volume-based stereo matching network with boosted performance but only a slight rise in parameters (only 2%). By adding DiffuVolume to well-performing methods, we outperform all published methods on the Scene Flow, KITTI2012, and KITTI2015 benchmarks and in the zero-shot generalization setting. It is worth mentioning that the proposed model has ranked 1st on the KITTI 2012 leaderboard and 2nd on the KITTI 2015 leaderboard since July 15, 2023.
[ { "created": "Wed, 30 Aug 2023 12:19:35 GMT", "version": "v1" } ]
2023-08-31
[ [ "Zheng", "Dian", "" ], [ "Wu", "Xiao-Ming", "" ], [ "Liu", "Zuhao", "" ], [ "Meng", "Jingke", "" ], [ "Zheng", "Wei-shi", "" ] ]
Stereo matching is a significant part of many computer vision tasks and driving-based applications. Recently, cost volume-based methods have achieved great success by benefiting from the rich geometry information in paired images. However, the redundancy of the cost volume also interferes with model training and limits performance. To construct a more precise cost volume, we pioneeringly apply the diffusion model to stereo matching. Our method, termed DiffuVolume, considers the diffusion model as a cost volume filter, which recurrently removes redundant information from the cost volume. Two main designs make our method non-trivial. Firstly, to make the diffusion model more adaptive to stereo matching, we eschew the traditional manner of directly adding noise to the image and instead embed the diffusion model into a task-specific module. In this way, we outperform the traditional diffusion stereo matching method by a 22% EPE improvement and 240 times inference acceleration. Secondly, DiffuVolume can be easily embedded into any volume-based stereo matching network with boosted performance but only a slight rise in parameters (only 2%). By adding DiffuVolume to well-performing methods, we outperform all published methods on the Scene Flow, KITTI2012, and KITTI2015 benchmarks and in the zero-shot generalization setting. It is worth mentioning that the proposed model has ranked 1st on the KITTI 2012 leaderboard and 2nd on the KITTI 2015 leaderboard since July 15, 2023.
2201.07877
Zhiyu Zhang
Zhiyu Zhang, Ashok Cutkosky, Ioannis Paschalidis
PDE-Based Optimal Strategy for Unconstrained Online Learning
ICML 2022
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Unconstrained Online Linear Optimization (OLO) is a practical problem setting to study the training of machine learning models. Existing works proposed a number of potential-based algorithms, but in general the design of these potential functions relies heavily on guessing. To streamline this workflow, we present a framework that generates new potential functions by solving a Partial Differential Equation (PDE). Specifically, when losses are 1-Lipschitz, our framework produces a novel algorithm with anytime regret bound $C\sqrt{T}+||u||\sqrt{2T}[\sqrt{\log(1+||u||/C)}+2]$, where $C$ is a user-specified constant and $u$ is any comparator unknown and unbounded a priori. Such a bound attains an optimal loss-regret trade-off without the impractical doubling trick. Moreover, a matching lower bound shows that the leading order term, including the constant multiplier $\sqrt{2}$, is tight. To our knowledge, the proposed algorithm is the first to achieve such optimalities.
[ { "created": "Wed, 19 Jan 2022 22:21:21 GMT", "version": "v1" }, { "created": "Wed, 15 Jun 2022 17:59:08 GMT", "version": "v2" } ]
2022-06-16
[ [ "Zhang", "Zhiyu", "" ], [ "Cutkosky", "Ashok", "" ], [ "Paschalidis", "Ioannis", "" ] ]
Unconstrained Online Linear Optimization (OLO) is a practical problem setting to study the training of machine learning models. Existing works proposed a number of potential-based algorithms, but in general the design of these potential functions relies heavily on guessing. To streamline this workflow, we present a framework that generates new potential functions by solving a Partial Differential Equation (PDE). Specifically, when losses are 1-Lipschitz, our framework produces a novel algorithm with anytime regret bound $C\sqrt{T}+||u||\sqrt{2T}[\sqrt{\log(1+||u||/C)}+2]$, where $C$ is a user-specified constant and $u$ is any comparator unknown and unbounded a priori. Such a bound attains an optimal loss-regret trade-off without the impractical doubling trick. Moreover, a matching lower bound shows that the leading order term, including the constant multiplier $\sqrt{2}$, is tight. To our knowledge, the proposed algorithm is the first to achieve such optimalities.
1708.02872
Xin Jin
Xin Jin, Shiming Ge, Chenggen Song
Privacy Preserving Face Retrieval in the Cloud for Mobile Users
Abuse Preventive Data Mining (APDM2017, IJCAI Workshop), 19-25 August, 2017 Melbourne, Australia
null
null
null
cs.CV cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recently, cloud storage and processing have been widely adopted. Mobile users in one family or one team may automatically back up their photos to the same shared cloud storage space. A powerful face detector trained and provided by a third party may be used to retrieve the photo collection that contains a specific group of persons from the cloud storage server. However, the privacy of the mobile users may be leaked to the cloud server providers. Meanwhile, the copyright of the face detector should be protected. Thus, in this paper, we propose a protocol for privacy-preserving face retrieval in the cloud for mobile users, which protects the user photos and the face detector simultaneously. The cloud server only provides the resources for storage and computing and cannot learn anything about the user photos or the face detector. We tested our protocol with several families and classes. The experimental results reveal that our protocol can successfully retrieve the proper photos from the cloud server and protect the user photos and the face detector.
[ { "created": "Wed, 9 Aug 2017 15:21:42 GMT", "version": "v1" } ]
2017-08-10
[ [ "Jin", "Xin", "" ], [ "Ge", "Shiming", "" ], [ "Song", "Chenggen", "" ] ]
Recently, cloud storage and processing have been widely adopted. Mobile users in one family or one team may automatically back up their photos to the same shared cloud storage space. A powerful face detector trained and provided by a third party may be used to retrieve the photo collection that contains a specific group of persons from the cloud storage server. However, the privacy of the mobile users may be leaked to the cloud server providers. Meanwhile, the copyright of the face detector should be protected. Thus, in this paper, we propose a protocol for privacy-preserving face retrieval in the cloud for mobile users, which protects the user photos and the face detector simultaneously. The cloud server only provides the resources for storage and computing and cannot learn anything about the user photos or the face detector. We tested our protocol with several families and classes. The experimental results reveal that our protocol can successfully retrieve the proper photos from the cloud server and protect the user photos and the face detector.
2311.00634
Soham Irtiza Swapnil
Rafat Tabassum Sukonna, Soham Irtiza Swapnil
A Bi-level Framework for Traffic Accident Duration Prediction: Leveraging Weather and Road Condition Data within a Practical Optimum Pipeline
null
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
Due to the stochastic nature of events, predicting the duration of a traffic incident presents a formidable challenge. Accurate duration estimation can result in substantial advantages for commuters in selecting optimal routes and for traffic management personnel in addressing non-recurring congestion issues. In this study, we gathered accident duration, road condition, and meteorological data from a database of traffic accidents to check the feasibility of a traffic accident duration pipeline without contextual accident information such as accident severity and textual descriptions. Multiple machine learning models were employed to predict whether an accident's impact on road traffic would be of a short-term or long-term nature, and then, utilizing a bimodal approach, the precise duration of the incident's effect was determined. Our binary classification random forest model distinguished between short-term and long-term effects with an 83% accuracy rate, while the LightGBM regression model outperformed other machine learning regression models, with Mean Absolute Error (MAE) values of 26.15 and 13.3 and RMSE values of 32.91 and 28.91 for short- and long-term accident duration prediction, respectively. Using the optimal classification and regression models identified above, we then constructed an end-to-end pipeline to incorporate the entire process. The results of both the separate and combined approaches were comparable with previous works, which shows the applicability of using only static features for predicting traffic accident duration. The SHAP value analysis identified weather conditions, wind chill, and wind speed as the most influential factors in determining the duration of an accident.
[ { "created": "Wed, 1 Nov 2023 16:33:37 GMT", "version": "v1" }, { "created": "Fri, 3 Nov 2023 19:26:03 GMT", "version": "v2" } ]
2023-11-07
[ [ "Sukonna", "Rafat Tabassum", "" ], [ "Swapnil", "Soham Irtiza", "" ] ]
Due to the stochastic nature of events, predicting the duration of a traffic incident presents a formidable challenge. Accurate duration estimation can result in substantial advantages for commuters in selecting optimal routes and for traffic management personnel in addressing non-recurring congestion issues. In this study, we gathered accident duration, road condition, and meteorological data from a database of traffic accidents to assess the feasibility of a traffic accident duration pipeline that does not rely on contextual accident information such as accident severity and textual descriptions. Multiple machine learning models were employed to predict whether an accident's impact on road traffic would be short-term or long-term, and then, utilizing a bimodal approach, the precise duration of the incident's effect was determined. Our binary classification random forest model distinguished between short-term and long-term effects with an 83% accuracy rate, while the LightGBM regression model outperformed other machine learning regression models, with Mean Absolute Error (MAE) values of 26.15 and 13.3 and RMSE values of 32.91 and 28.91 for short- and long-term accident duration prediction, respectively. Using the optimal classification and regression models identified in the preceding steps, we then construct an end-to-end pipeline to incorporate the entire process. The results of both the separate and combined approaches were comparable with those of previous works, which demonstrates the applicability of using only static features for predicting traffic accident duration. The SHAP value analysis identified weather conditions, wind chill, and wind speed as the most influential factors in determining the duration of an accident.
1907.00376
Davide Taibi
Valentina Lenarduzzi, Francesco Lomio, Heikki Huttunen, Davide Taibi
Are SonarQube Rules Inducing Bugs?
null
27th IEEE International Conference on Software Analysis, Evolution and Reengineering (SANER) London, Ontario, February 18-21, 2020
null
null
cs.SE
http://creativecommons.org/licenses/by-nc-sa/4.0/
Background. The popularity of tools for analyzing Technical Debt, and particularly of SonarQube, is increasing rapidly. SonarQube proposes a set of coding rules, each of which represents something wrong in the code that will soon be reflected in a fault or will increase maintenance effort. However, our local companies were not confident in the usefulness of the rules proposed by SonarQube and contracted us to investigate their fault-proneness. Objective. In this work we aim to understand which SonarQube rules are actually fault-prone and which machine learning models can be adopted to accurately identify fault-prone rules. Method. We designed and conducted an empirical study on 21 well-known mature open-source projects. We applied the SZZ algorithm to label the fault-inducing commits. We analyzed fault-proneness by comparing the classification power of seven machine learning models. Result. Among the 202 rules defined for Java by SonarQube, only 25 can be considered to have relatively low fault-proneness. Moreover, violations considered as "bugs" by SonarQube were generally not fault-prone and, consequently, the fault-prediction power of the model proposed by SonarQube is extremely low. Conclusion. The rules applied by SonarQube for calculating technical debt should be thoroughly investigated and their harmfulness needs to be further confirmed. Therefore, companies should carefully consider which rules they really need to apply, especially if their goal is to reduce fault-proneness.
[ { "created": "Sun, 30 Jun 2019 13:04:27 GMT", "version": "v1" }, { "created": "Thu, 19 Dec 2019 07:32:51 GMT", "version": "v2" } ]
2019-12-20
[ [ "Lenarduzzi", "Valentina", "" ], [ "Lomio", "Francesco", "" ], [ "Huttunen", "Heikki", "" ], [ "Taibi", "Davide", "" ] ]
Background. The popularity of tools for analyzing Technical Debt, and particularly of SonarQube, is increasing rapidly. SonarQube proposes a set of coding rules, each of which represents something wrong in the code that will soon be reflected in a fault or will increase maintenance effort. However, our local companies were not confident in the usefulness of the rules proposed by SonarQube and contracted us to investigate their fault-proneness. Objective. In this work we aim to understand which SonarQube rules are actually fault-prone and which machine learning models can be adopted to accurately identify fault-prone rules. Method. We designed and conducted an empirical study on 21 well-known mature open-source projects. We applied the SZZ algorithm to label the fault-inducing commits. We analyzed fault-proneness by comparing the classification power of seven machine learning models. Result. Among the 202 rules defined for Java by SonarQube, only 25 can be considered to have relatively low fault-proneness. Moreover, violations considered as "bugs" by SonarQube were generally not fault-prone and, consequently, the fault-prediction power of the model proposed by SonarQube is extremely low. Conclusion. The rules applied by SonarQube for calculating technical debt should be thoroughly investigated and their harmfulness needs to be further confirmed. Therefore, companies should carefully consider which rules they really need to apply, especially if their goal is to reduce fault-proneness.
1907.07428
Tomasz Ma\'nkowski
Piotr Kaczmarek, Tomasz Ma\'nkowski and Jakub Tomczy\'nski
putEMG -- a surface electromyography hand gesture recognition dataset
null
null
10.3390/s19163548
null
cs.HC
http://creativecommons.org/licenses/by/4.0/
In this paper, we present the putEMG dataset, intended for evaluation of hand gesture recognition methods based on sEMG signals. The dataset was acquired from 44 able-bodied subjects and includes 8 gestures (3 full-hand gestures, 4 pinches, and idle). It consists of uninterrupted recordings of 24 sEMG channels from the subject's forearm, an RGB video stream, and depth camera images used for hand motion tracking. Moreover, exemplary processing scripts are also published. The putEMG dataset is available under the Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) license at: https://www.biolab.put.poznan.pl/putemg-dataset/. The dataset was validated with regard to sEMG amplitudes and gesture recognition performance. Classification was performed using state-of-the-art classifiers and feature sets. An accuracy of 90% was achieved for an SVM classifier utilising the RMS feature and for an LDA classifier using Hudgins' and Du's feature sets. Analysis of performance for particular gestures showed that the LDA/Du combination has significantly higher accuracy for full-hand gestures, while SVM/RMS performs better for pinch gestures. The presented dataset can be used as a benchmark for various classification methods, for evaluation of electrode localisation concepts, or for development of classification methods invariant to user-specific features or electrode displacement.
[ { "created": "Wed, 17 Jul 2019 10:29:01 GMT", "version": "v1" }, { "created": "Mon, 5 Aug 2019 10:49:16 GMT", "version": "v2" }, { "created": "Thu, 22 Aug 2019 08:30:38 GMT", "version": "v3" } ]
2019-08-23
[ [ "Kaczmarek", "Piotr", "" ], [ "Mańkowski", "Tomasz", "" ], [ "Tomczyński", "Jakub", "" ] ]
In this paper, we present the putEMG dataset, intended for evaluation of hand gesture recognition methods based on sEMG signals. The dataset was acquired from 44 able-bodied subjects and includes 8 gestures (3 full-hand gestures, 4 pinches, and idle). It consists of uninterrupted recordings of 24 sEMG channels from the subject's forearm, an RGB video stream, and depth camera images used for hand motion tracking. Moreover, exemplary processing scripts are also published. The putEMG dataset is available under the Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) license at: https://www.biolab.put.poznan.pl/putemg-dataset/. The dataset was validated with regard to sEMG amplitudes and gesture recognition performance. Classification was performed using state-of-the-art classifiers and feature sets. An accuracy of 90% was achieved for an SVM classifier utilising the RMS feature and for an LDA classifier using Hudgins' and Du's feature sets. Analysis of performance for particular gestures showed that the LDA/Du combination has significantly higher accuracy for full-hand gestures, while SVM/RMS performs better for pinch gestures. The presented dataset can be used as a benchmark for various classification methods, for evaluation of electrode localisation concepts, or for development of classification methods invariant to user-specific features or electrode displacement.
2302.07244
Melvin Mokhtari
Melvin Mokhtari, Ali Seraj, Niloufar Saeedi, Adel Karshenas
The Impact of Twitter Sentiments on Stock Market Trends
null
null
null
null
cs.LG cs.CE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Web is a vast virtual space where people can share their opinions, impacting all aspects of life and having implications for marketing and communication. The most up-to-date and comprehensive information can be found on social media because of how widespread and straightforward it is to post a message. Accordingly, social media platforms are regarded as a valuable resource for making precise market predictions. In particular, Twitter has developed into a potent tool for understanding user sentiment. This article examines the extent to which tweets can influence stock symbol trends. We analyze the volume, sentiment, and mentions of the top five stock symbols in the S&P 500 index on Twitter over three months. Long Short-Term Memory, Bernoulli Na\"ive Bayes, and Random Forest were the three algorithms implemented in this process. Our study revealed a significant correlation between stock prices and Twitter sentiment.
[ { "created": "Tue, 14 Feb 2023 18:43:20 GMT", "version": "v1" } ]
2023-02-15
[ [ "Mokhtari", "Melvin", "" ], [ "Seraj", "Ali", "" ], [ "Saeedi", "Niloufar", "" ], [ "Karshenas", "Adel", "" ] ]
The Web is a vast virtual space where people can share their opinions, impacting all aspects of life and having implications for marketing and communication. The most up-to-date and comprehensive information can be found on social media because of how widespread and straightforward it is to post a message. Accordingly, social media platforms are regarded as a valuable resource for making precise market predictions. In particular, Twitter has developed into a potent tool for understanding user sentiment. This article examines the extent to which tweets can influence stock symbol trends. We analyze the volume, sentiment, and mentions of the top five stock symbols in the S&P 500 index on Twitter over three months. Long Short-Term Memory, Bernoulli Na\"ive Bayes, and Random Forest were the three algorithms implemented in this process. Our study revealed a significant correlation between stock prices and Twitter sentiment.
1805.00155
Cyrus Omar
Cyrus Omar, Ian Voysey, Ravi Chugh, Matthew A. Hammer
Live Functional Programming with Typed Holes
Published in PACMPL issue POPL 2019. Please cite the conference paper!
null
null
null
cs.PL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper develops a dynamic semantics for incomplete functional programs, starting from the static semantics developed in recent work on Hazelnut. We model incomplete functional programs as expressions with holes, with empty holes standing for missing expressions or types, and non-empty holes operating as membranes around static and dynamic type inconsistencies. Rather than aborting when evaluation encounters any of these holes as in some existing systems, evaluation proceeds around holes, tracking the closure around each hole instance as it flows through the remainder of the program. Editor services can use the information in these hole closures to help the programmer develop and confirm their mental model of the behavior of the complete portions of the program as they decide how to fill the remaining holes. Hole closures also enable a fill-and-resume operation that avoids the need to restart evaluation after edits that amount to hole filling. Formally, the semantics borrows machinery from both gradual type theory (which supplies the basis for handling unfilled type holes) and contextual modal type theory (which supplies a logical basis for hole closures), combining these and developing additional machinery necessary to continue evaluation past holes while maintaining type safety. We have mechanized the metatheory of the core calculus, called Hazelnut Live, using the Agda proof assistant. We have also implemented these ideas into the Hazel programming environment. The implementation inserts holes automatically, following the Hazelnut edit action calculus, to guarantee that every editor state has some (possibly incomplete) type. Taken together with this paper's type safety property, the result is a proof-of-concept live programming environment where rich dynamic feedback is truly available without gaps, i.e. for every reachable editor state.
[ { "created": "Tue, 1 May 2018 02:26:49 GMT", "version": "v1" }, { "created": "Tue, 17 Jul 2018 18:32:55 GMT", "version": "v2" }, { "created": "Tue, 13 Nov 2018 21:57:47 GMT", "version": "v3" }, { "created": "Tue, 20 Nov 2018 19:48:21 GMT", "version": "v4" } ]
2018-11-22
[ [ "Omar", "Cyrus", "" ], [ "Voysey", "Ian", "" ], [ "Chugh", "Ravi", "" ], [ "Hammer", "Matthew A.", "" ] ]
This paper develops a dynamic semantics for incomplete functional programs, starting from the static semantics developed in recent work on Hazelnut. We model incomplete functional programs as expressions with holes, with empty holes standing for missing expressions or types, and non-empty holes operating as membranes around static and dynamic type inconsistencies. Rather than aborting when evaluation encounters any of these holes as in some existing systems, evaluation proceeds around holes, tracking the closure around each hole instance as it flows through the remainder of the program. Editor services can use the information in these hole closures to help the programmer develop and confirm their mental model of the behavior of the complete portions of the program as they decide how to fill the remaining holes. Hole closures also enable a fill-and-resume operation that avoids the need to restart evaluation after edits that amount to hole filling. Formally, the semantics borrows machinery from both gradual type theory (which supplies the basis for handling unfilled type holes) and contextual modal type theory (which supplies a logical basis for hole closures), combining these and developing additional machinery necessary to continue evaluation past holes while maintaining type safety. We have mechanized the metatheory of the core calculus, called Hazelnut Live, using the Agda proof assistant. We have also implemented these ideas into the Hazel programming environment. The implementation inserts holes automatically, following the Hazelnut edit action calculus, to guarantee that every editor state has some (possibly incomplete) type. Taken together with this paper's type safety property, the result is a proof-of-concept live programming environment where rich dynamic feedback is truly available without gaps, i.e. for every reachable editor state.
1708.06465
EPTCS
Giovanna J. Lavado (Dipartimento di Informatica, Universit\`a degli Studi di Milano), Giovanni Pighizzini (Dipartimento di Informatica, Universit\`a degli Studi di Milano), Luca Prigioniero (Dipartimento di Informatica, Universit\`a degli Studi di Milano)
Weakly and Strongly Irreversible Regular Languages
In Proceedings AFL 2017, arXiv:1708.06226
EPTCS 252, 2017, pp. 143-156
10.4204/EPTCS.252.15
null
cs.FL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Finite automata whose computations can be reversed, at any point, by knowing the last k symbols read from the input, for a fixed k, are considered. These devices and their accepted languages are called k-reversible automata and k-reversible languages, respectively. The existence of k-reversible languages which are not (k-1)-reversible is known for each k>1. This gives an infinite hierarchy of weakly irreversible languages, i.e., languages which are k-reversible for some k. Conditions characterizing the class of k-reversible languages, for each fixed k, and the class of weakly irreversible languages are obtained. From these conditions, a procedure that, given a finite automaton, decides whether the accepted language is weakly or strongly (i.e., not weakly) irreversible is described. Furthermore, a construction is presented which allows one to transform any finite automaton that is not k-reversible, but that accepts a k-reversible language, into an equivalent k-reversible finite automaton.
[ { "created": "Tue, 22 Aug 2017 00:50:39 GMT", "version": "v1" } ]
2017-08-23
[ [ "Lavado", "Giovanna J.", "", "Dipartimento di Informatica, Università degli\n Studi di Milano" ], [ "Pighizzini", "Giovanni", "", "Dipartimento di Informatica,\n Università degli Studi di Milano" ], [ "Prigioniero", "Luca", "", "Dipartimento di\n Informatica, Università degli Studi di Milano" ] ]
Finite automata whose computations can be reversed, at any point, by knowing the last k symbols read from the input, for a fixed k, are considered. These devices and their accepted languages are called k-reversible automata and k-reversible languages, respectively. The existence of k-reversible languages which are not (k-1)-reversible is known for each k>1. This gives an infinite hierarchy of weakly irreversible languages, i.e., languages which are k-reversible for some k. Conditions characterizing the class of k-reversible languages, for each fixed k, and the class of weakly irreversible languages are obtained. From these conditions, a procedure that, given a finite automaton, decides whether the accepted language is weakly or strongly (i.e., not weakly) irreversible is described. Furthermore, a construction is presented which allows one to transform any finite automaton that is not k-reversible, but that accepts a k-reversible language, into an equivalent k-reversible finite automaton.
1801.03069
Tingjun Chen
Tingjun Chen, Mahmood Baraani Dastjerdi, Guy Farkash, Jin Zhou, Harish Krishnaswamy, Gil Zussman
Open-Access Full-Duplex Wireless in the ORBIT Testbed
null
null
null
null
cs.NI eess.SP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In order to support experimentation with full-duplex (FD) wireless, we recently integrated an open-access FD transceiver in the ORBIT testbed. In this report, we present the design and implementation of the FD transceiver and interfaces, and provide examples and guidelines for experimentation. In particular, an ORBIT node with a National Instruments (NI)/Ettus Research Universal Software Radio Peripheral (USRP) N210 software-defined radio (SDR) was equipped with the Columbia FlexICoN Gen-1 customized RF self-interference (SI) canceller box. The RF canceller box includes an RF SI canceller that is implemented using discrete components on a printed circuit board (PCB) and achieves 40dB RF SI cancellation across 5MHz bandwidth. We provide an FD transceiver baseline program and present two example FD experiments where 90dB and 85dB overall SI cancellation is achieved for a simple waveform and PSK modulated signals across both the RF and digital domains. We also discuss potential FD wireless experiments that can be conducted based on the implemented open-access FD transceiver and baseline program.
[ { "created": "Tue, 9 Jan 2018 18:21:57 GMT", "version": "v1" }, { "created": "Tue, 29 May 2018 14:29:48 GMT", "version": "v2" } ]
2018-05-30
[ [ "Chen", "Tingjun", "" ], [ "Dastjerdi", "Mahmood Baraani", "" ], [ "Farkash", "Guy", "" ], [ "Zhou", "Jin", "" ], [ "Krishnaswamy", "Harish", "" ], [ "Zussman", "Gil", "" ] ]
In order to support experimentation with full-duplex (FD) wireless, we recently integrated an open-access FD transceiver in the ORBIT testbed. In this report, we present the design and implementation of the FD transceiver and interfaces, and provide examples and guidelines for experimentation. In particular, an ORBIT node with a National Instruments (NI)/Ettus Research Universal Software Radio Peripheral (USRP) N210 software-defined radio (SDR) was equipped with the Columbia FlexICoN Gen-1 customized RF self-interference (SI) canceller box. The RF canceller box includes an RF SI canceller that is implemented using discrete components on a printed circuit board (PCB) and achieves 40dB RF SI cancellation across 5MHz bandwidth. We provide an FD transceiver baseline program and present two example FD experiments where 90dB and 85dB overall SI cancellation is achieved for a simple waveform and PSK modulated signals across both the RF and digital domains. We also discuss potential FD wireless experiments that can be conducted based on the implemented open-access FD transceiver and baseline program.
2009.06679
Mohamed Nafzi
Mohamed Nafzi, Michael Brauckmann, Tobias Glasmachers
Data Augmentation and Clustering for Vehicle Make/Model Classification
Proceedings of the 2020 Computing Conference, Volume 1-3, SAI 16-17 July 2020 London
null
10.1007/978-3-030-52249-0_24
null
cs.CV
http://creativecommons.org/publicdomain/zero/1.0/
Vehicle shape information is very important in Intelligent Traffic Systems (ITS). In this paper we present a way to exploit a training data set of vehicles released in different years and captured from different perspectives. The efficacy of clustering to enhance make/model classification is also presented. Both steps led to improved classification results and greater robustness. A deep convolutional neural network based on the ResNet architecture has been designed for training the vehicle make/model classifier. The unequal class distribution of the training data produces an a priori probability. Its elimination, obtained by removing the bias and through hard normalization of the centroids in the classification layer, improves the classification results. A developed application has been used to manually test vehicle re-identification on video data based on make/model and color classification. This work was partially funded under the grant.
[ { "created": "Mon, 14 Sep 2020 18:24:31 GMT", "version": "v1" } ]
2020-09-16
[ [ "Nafzi", "Mohamed", "" ], [ "Brauckmann", "Michael", "" ], [ "Glasmachers", "Tobias", "" ] ]
Vehicle shape information is very important in Intelligent Traffic Systems (ITS). In this paper we present a way to exploit a training data set of vehicles released in different years and captured from different perspectives. The efficacy of clustering to enhance make/model classification is also presented. Both steps led to improved classification results and greater robustness. A deep convolutional neural network based on the ResNet architecture has been designed for training the vehicle make/model classifier. The unequal class distribution of the training data produces an a priori probability. Its elimination, obtained by removing the bias and through hard normalization of the centroids in the classification layer, improves the classification results. A developed application has been used to manually test vehicle re-identification on video data based on make/model and color classification. This work was partially funded under the grant.
2305.10754
Qiankun Zuo Dr.
Qiankun Zuo, Hao Tian, Chi-Man Pun, Hongfei Wang, Yudong Zhang, Jin Hong
Brain Imaging-to-Graph Generation using Adversarial Hierarchical Diffusion Models for MCI Causality Analysis
10 pages, 12 figures
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Effective connectivity can describe the causal patterns among brain regions. These patterns have the potential to reveal the pathological mechanisms of cognitive disease and promote early diagnosis and effective drug development. However, current methods use software toolkits to extract empirical features from brain imaging to estimate effective connectivity. These methods rely heavily on manual parameter settings and may result in large errors during effective connectivity estimation. In this paper, a novel brain imaging-to-graph generation (BIGG) framework is proposed to map functional magnetic resonance imaging (fMRI) into effective connectivity for mild cognitive impairment (MCI) analysis. To be specific, the proposed BIGG framework is based on the denoising diffusion probabilistic model (DDPM), where each denoising step is modeled as a generative adversarial network (GAN) to progressively translate the noise and conditional fMRI into effective connectivity. The hierarchical transformers in the generator are designed to estimate the noise at multiple scales. Each scale concentrates on both spatial and temporal information between brain regions, enabling good noise removal and better inference of causal relations. Meanwhile, the transformer-based discriminator constrains the generator to further capture global and local patterns, improving generation quality and diversity. By introducing the diffusive factor, denoising inference with a large sampling step size is more efficient and can maintain high-quality results for effective connectivity generation. Evaluations on the ADNI dataset demonstrate the feasibility and efficacy of the proposed model. The proposed model not only achieves superior prediction performance compared with other competing methods but also predicts MCI-related causal connections that are consistent with clinical studies.
[ { "created": "Thu, 18 May 2023 06:54:56 GMT", "version": "v1" }, { "created": "Mon, 3 Jun 2024 01:35:11 GMT", "version": "v2" } ]
2024-06-04
[ [ "Zuo", "Qiankun", "" ], [ "Tian", "Hao", "" ], [ "Pun", "Chi-Man", "" ], [ "Wang", "Hongfei", "" ], [ "Zhang", "Yudong", "" ], [ "Hong", "Jin", "" ] ]
Effective connectivity can describe the causal patterns among brain regions. These patterns have the potential to reveal the pathological mechanisms of cognitive disease and promote early diagnosis and effective drug development. However, current methods use software toolkits to extract empirical features from brain imaging to estimate effective connectivity. These methods rely heavily on manual parameter settings and may result in large errors during effective connectivity estimation. In this paper, a novel brain imaging-to-graph generation (BIGG) framework is proposed to map functional magnetic resonance imaging (fMRI) into effective connectivity for mild cognitive impairment (MCI) analysis. To be specific, the proposed BIGG framework is based on the denoising diffusion probabilistic model (DDPM), where each denoising step is modeled as a generative adversarial network (GAN) to progressively translate the noise and conditional fMRI into effective connectivity. The hierarchical transformers in the generator are designed to estimate the noise at multiple scales. Each scale concentrates on both spatial and temporal information between brain regions, enabling good noise removal and better inference of causal relations. Meanwhile, the transformer-based discriminator constrains the generator to further capture global and local patterns, improving generation quality and diversity. By introducing the diffusive factor, denoising inference with a large sampling step size is more efficient and can maintain high-quality results for effective connectivity generation. Evaluations on the ADNI dataset demonstrate the feasibility and efficacy of the proposed model. The proposed model not only achieves superior prediction performance compared with other competing methods but also predicts MCI-related causal connections that are consistent with clinical studies.
2104.08618
Kiavash Satvat
Kiavash Satvat, Rigel Gjomemo and V.N. Venkatakrishnan
EXTRACTOR: Extracting Attack Behavior from Threat Reports
6th IEEE European Symposium on Security and Privacy
null
null
null
cs.CR cs.AI
http://creativecommons.org/licenses/by-nc-nd/4.0/
The knowledge on attacks contained in Cyber Threat Intelligence (CTI) reports is very important to effectively identify and quickly respond to cyber threats. However, this knowledge is often embedded in large amounts of text, and therefore difficult to use effectively. To address this challenge, we propose a novel approach and tool called EXTRACTOR that allows precise automatic extraction of concise attack behaviors from CTI reports. EXTRACTOR makes no strong assumptions about the text and is capable of extracting attack behaviors as provenance graphs from unstructured text. We evaluate EXTRACTOR using real-world incident reports from various sources as well as reports of DARPA adversarial engagements that involve several attack campaigns on various OS platforms of Windows, Linux, and FreeBSD. Our evaluation results show that EXTRACTOR can extract concise provenance graphs from CTI reports and show that these graphs can successfully be used by cyber-analytics tools in threat-hunting.
[ { "created": "Sat, 17 Apr 2021 18:51:00 GMT", "version": "v1" } ]
2021-04-20
[ [ "Satvat", "Kiavash", "" ], [ "Gjomemo", "Rigel", "" ], [ "Venkatakrishnan", "V. N.", "" ] ]
The knowledge on attacks contained in Cyber Threat Intelligence (CTI) reports is very important to effectively identify and quickly respond to cyber threats. However, this knowledge is often embedded in large amounts of text, and therefore difficult to use effectively. To address this challenge, we propose a novel approach and tool called EXTRACTOR that allows precise automatic extraction of concise attack behaviors from CTI reports. EXTRACTOR makes no strong assumptions about the text and is capable of extracting attack behaviors as provenance graphs from unstructured text. We evaluate EXTRACTOR using real-world incident reports from various sources as well as reports of DARPA adversarial engagements that involve several attack campaigns on various OS platforms of Windows, Linux, and FreeBSD. Our evaluation results show that EXTRACTOR can extract concise provenance graphs from CTI reports and show that these graphs can successfully be used by cyber-analytics tools in threat-hunting.
2404.05338
Alessandro Navone
Alessandro Navone, Mauro Martini, Marco Ambrosio, Andrea Ostuni, Simone Angarano, Marcello Chiaberge
GPS-free Autonomous Navigation in Cluttered Tree Rows with Deep Semantic Segmentation
arXiv admin note: text overlap with arXiv:2304.08988
null
null
null
cs.RO
http://creativecommons.org/licenses/by-nc-sa/4.0/
Segmentation-based autonomous navigation has recently been presented as an appealing approach to guiding robotic platforms through crop rows without requiring perfect GPS localization. Nevertheless, current techniques are restricted to situations where the distinct separation between the plants and the sky allows for the identification of the row's center. However, tall, dense vegetation, such as high tree rows and orchards, is the primary cause of GPS signal blockage. In this study, we increase the overall robustness and adaptability of the control algorithm by extending segmentation-based robotic guidance to those cases where canopies and branches occlude the sky and prevent the use of GPS and earlier approaches. An efficient Deep Neural Network architecture has been used to perform semantic segmentation, with training carried out on synthetic data only. Extensive testing in numerous vineyards and tree fields, both in simulation and in the real world, demonstrates the solution's competitive benefits.
[ { "created": "Mon, 8 Apr 2024 09:26:31 GMT", "version": "v1" } ]
2024-04-11
[ [ "Navone", "Alessandro", "" ], [ "Martini", "Mauro", "" ], [ "Ambrosio", "Marco", "" ], [ "Ostuni", "Andrea", "" ], [ "Angarano", "Simone", "" ], [ "Chiaberge", "Marcello", "" ] ]
Segmentation-based autonomous navigation has recently been presented as an appealing approach to guiding robotic platforms through crop rows without requiring perfect GPS localization. Nevertheless, current techniques are restricted to situations where the distinct separation between the plants and the sky allows for the identification of the row's center. However, tall, dense vegetation, such as high tree rows and orchards, is the primary cause of GPS signal blockage. In this study, we increase the overall robustness and adaptability of the control algorithm by extending segmentation-based robotic guidance to those cases where canopies and branches occlude the sky and prevent the utilization of GPS and earlier approaches. An efficient Deep Neural Network architecture has been used to address semantic segmentation, with training performed on synthetic data only. The solution has undergone extensive testing in numerous vineyards and tree fields, in both simulation and the real world, to demonstrate its competitive benefits.
1902.06125
Joseph Salmon
Alain Rakotomamonjy (LITIS), Gilles Gasso (LITIS), Joseph Salmon (IMAG, Univ. Montpellier)
Screening Rules for Lasso with Non-Convex Sparse Regularizers
null
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Leveraging on the convexity of the Lasso problem , screening rules help in accelerating solvers by discarding irrelevant variables, during the optimization process. However, because they provide better theoretical guarantees in identifying relevant variables, several non-convex regularizers for the Lasso have been proposed in the literature. This work is the first that introduces a screening rule strategy into a non-convex Lasso solver. The approach we propose is based on a iterative majorization-minimization (MM) strategy that includes a screening rule in the inner solver and a condition for propagating screened variables between iterations of MM. In addition to improve efficiency of solvers, we also provide guarantees that the inner solver is able to identify the zeros components of its critical point in finite time. Our experimental analysis illustrates the significant computational gain brought by the new screening rule compared to classical coordinate-descent or proximal gradient descent methods.
[ { "created": "Sat, 16 Feb 2019 17:08:56 GMT", "version": "v1" }, { "created": "Tue, 19 Feb 2019 13:01:17 GMT", "version": "v2" } ]
2019-02-20
[ [ "Rakotomamonjy", "Alain", "", "LITIS" ], [ "Gasso", "Gilles", "", "LITIS" ], [ "Salmon", "Joseph", "", "IMAG, Univ. Montpellier" ] ]
Leveraging the convexity of the Lasso problem, screening rules help accelerate solvers by discarding irrelevant variables during the optimization process. However, several non-convex regularizers for the Lasso have been proposed in the literature because they provide better theoretical guarantees for identifying relevant variables. This work is the first to introduce a screening-rule strategy into a non-convex Lasso solver. The approach we propose is based on an iterative majorization-minimization (MM) strategy that includes a screening rule in the inner solver and a condition for propagating screened variables between MM iterations. In addition to improving the efficiency of solvers, we also provide guarantees that the inner solver is able to identify the zero components of its critical point in finite time. Our experimental analysis illustrates the significant computational gain brought by the new screening rule compared to classical coordinate-descent or proximal gradient descent methods.
2309.00928
Kailun Yang
Xuan He, Kailun Yang, Junwei Zheng, Jin Yuan, Luis M. Bergasa, Hui Zhang, Zhiyong Li
S$^3$-MonoDETR: Supervised Shape&Scale-perceptive Deformable Transformer for Monocular 3D Object Detection
The source code will be made publicly available at https://github.com/mikasa3lili/S3-MonoDETR
null
null
null
cs.CV cs.RO eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recently, transformer-based methods have shown exceptional performance in monocular 3D object detection, which can predict 3D attributes from a single 2D image. These methods typically use visual and depth representations to generate query points on objects, whose quality plays a decisive role in the detection accuracy. However, current unsupervised attention mechanisms without any geometry appearance awareness in transformers are susceptible to producing noisy features for query points, which severely limits the network performance and also makes the model have a poor ability to detect multi-category objects in a single training process. To tackle this problem, this paper proposes a novel "Supervised Shape&Scale-perceptive Deformable Attention" (S$^3$-DA) module for monocular 3D object detection. Concretely, S$^3$-DA utilizes visual and depth features to generate diverse local features with various shapes and scales and predict the corresponding matching distribution simultaneously to impose valuable shape&scale perception for each query. Benefiting from this, S$^3$-DA effectively estimates receptive fields for query points belonging to any category, enabling them to generate robust query features. Besides, we propose a Multi-classification-based Shape$\&$Scale Matching (MSM) loss to supervise the above process. Extensive experiments on KITTI and Waymo Open datasets demonstrate that S$^3$-DA significantly improves the detection accuracy, yielding state-of-the-art performance of single-category and multi-category 3D object detection in a single training process compared to the existing approaches. The source code will be made publicly available at https://github.com/mikasa3lili/S3-MonoDETR.
[ { "created": "Sat, 2 Sep 2023 12:36:38 GMT", "version": "v1" } ]
2023-09-06
[ [ "He", "Xuan", "" ], [ "Yang", "Kailun", "" ], [ "Zheng", "Junwei", "" ], [ "Yuan", "Jin", "" ], [ "Bergasa", "Luis M.", "" ], [ "Zhang", "Hui", "" ], [ "Li", "Zhiyong", "" ] ]
Recently, transformer-based methods have shown exceptional performance in monocular 3D object detection, which can predict 3D attributes from a single 2D image. These methods typically use visual and depth representations to generate query points on objects, whose quality plays a decisive role in the detection accuracy. However, current unsupervised attention mechanisms in transformers, lacking any geometric appearance awareness, are susceptible to producing noisy features for query points, which severely limits network performance and impairs the model's ability to detect multi-category objects in a single training process. To tackle this problem, this paper proposes a novel "Supervised Shape&Scale-perceptive Deformable Attention" (S$^3$-DA) module for monocular 3D object detection. Concretely, S$^3$-DA utilizes visual and depth features to generate diverse local features with various shapes and scales and predict the corresponding matching distribution simultaneously to impose valuable shape&scale perception for each query. Benefiting from this, S$^3$-DA effectively estimates receptive fields for query points belonging to any category, enabling them to generate robust query features. Besides, we propose a Multi-classification-based Shape$\&$Scale Matching (MSM) loss to supervise the above process. Extensive experiments on KITTI and Waymo Open datasets demonstrate that S$^3$-DA significantly improves the detection accuracy, yielding state-of-the-art performance of single-category and multi-category 3D object detection in a single training process compared to the existing approaches. The source code will be made publicly available at https://github.com/mikasa3lili/S3-MonoDETR.
2110.06922
Yue Wang
Yue Wang and Vitor Guizilini and Tianyuan Zhang and Yilun Wang and Hang Zhao and Justin Solomon
DETR3D: 3D Object Detection from Multi-view Images via 3D-to-2D Queries
Accepted to CORL 2021
null
null
null
cs.CV cs.AI cs.LG cs.RO
http://creativecommons.org/licenses/by/4.0/
We introduce a framework for multi-camera 3D object detection. In contrast to existing works, which estimate 3D bounding boxes directly from monocular images or use depth prediction networks to generate input for 3D object detection from 2D information, our method manipulates predictions directly in 3D space. Our architecture extracts 2D features from multiple camera images and then uses a sparse set of 3D object queries to index into these 2D features, linking 3D positions to multi-view images using camera transformation matrices. Finally, our model makes a bounding box prediction per object query, using a set-to-set loss to measure the discrepancy between the ground-truth and the prediction. This top-down approach outperforms its bottom-up counterpart in which object bounding box prediction follows per-pixel depth estimation, since it does not suffer from the compounding error introduced by a depth prediction model. Moreover, our method does not require post-processing such as non-maximum suppression, dramatically improving inference speed. We achieve state-of-the-art performance on the nuScenes autonomous driving benchmark.
[ { "created": "Wed, 13 Oct 2021 17:59:35 GMT", "version": "v1" } ]
2021-10-14
[ [ "Wang", "Yue", "" ], [ "Guizilini", "Vitor", "" ], [ "Zhang", "Tianyuan", "" ], [ "Wang", "Yilun", "" ], [ "Zhao", "Hang", "" ], [ "Solomon", "Justin", "" ] ]
We introduce a framework for multi-camera 3D object detection. In contrast to existing works, which estimate 3D bounding boxes directly from monocular images or use depth prediction networks to generate input for 3D object detection from 2D information, our method manipulates predictions directly in 3D space. Our architecture extracts 2D features from multiple camera images and then uses a sparse set of 3D object queries to index into these 2D features, linking 3D positions to multi-view images using camera transformation matrices. Finally, our model makes a bounding box prediction per object query, using a set-to-set loss to measure the discrepancy between the ground-truth and the prediction. This top-down approach outperforms its bottom-up counterpart in which object bounding box prediction follows per-pixel depth estimation, since it does not suffer from the compounding error introduced by a depth prediction model. Moreover, our method does not require post-processing such as non-maximum suppression, dramatically improving inference speed. We achieve state-of-the-art performance on the nuScenes autonomous driving benchmark.
2007.00072
Nikoli Dryden
Andrei Ivanov, Nikoli Dryden, Tal Ben-Nun, Shigang Li, Torsten Hoefler
Data Movement Is All You Need: A Case Study on Optimizing Transformers
22 pages, 8 figures; MLSys 2021 camera ready
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Transformers are one of the most important machine learning workloads today. Training one is a very compute-intensive task, often taking days or weeks, and significant attention has been given to optimizing transformers. Despite this, existing implementations do not efficiently utilize GPUs. We find that data movement is the key bottleneck when training. Due to Amdahl's Law and massive improvements in compute performance, training has now become memory-bound. Further, existing frameworks use suboptimal data layouts. Using these insights, we present a recipe for globally optimizing data movement in transformers. We reduce data movement by up to 22.91% and overall achieve a 1.30x performance improvement over state-of-the-art frameworks when training a BERT encoder layer and 1.19x for the entire BERT. Our approach is applicable more broadly to optimizing deep neural networks, and offers insight into how to tackle emerging performance bottlenecks.
[ { "created": "Tue, 30 Jun 2020 19:26:36 GMT", "version": "v1" }, { "created": "Thu, 2 Jul 2020 09:26:19 GMT", "version": "v2" }, { "created": "Mon, 8 Nov 2021 12:43:08 GMT", "version": "v3" } ]
2021-11-09
[ [ "Ivanov", "Andrei", "" ], [ "Dryden", "Nikoli", "" ], [ "Ben-Nun", "Tal", "" ], [ "Li", "Shigang", "" ], [ "Hoefler", "Torsten", "" ] ]
Transformers are one of the most important machine learning workloads today. Training one is a very compute-intensive task, often taking days or weeks, and significant attention has been given to optimizing transformers. Despite this, existing implementations do not efficiently utilize GPUs. We find that data movement is the key bottleneck when training. Due to Amdahl's Law and massive improvements in compute performance, training has now become memory-bound. Further, existing frameworks use suboptimal data layouts. Using these insights, we present a recipe for globally optimizing data movement in transformers. We reduce data movement by up to 22.91% and overall achieve a 1.30x performance improvement over state-of-the-art frameworks when training a BERT encoder layer and 1.19x for the entire BERT. Our approach is applicable more broadly to optimizing deep neural networks, and offers insight into how to tackle emerging performance bottlenecks.
1011.6223
Benedikt Meurer
Benedikt Meurer
Just-In-Time compilation of OCaml byte-code
15 pages, 6 figures, 3 tables
null
null
null
cs.PL cs.PF
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents various improvements that were applied to OCamlJIT2, a Just-In-Time compiler for the OCaml byte-code virtual machine. OCamlJIT2 currently runs on various Unix-like systems with x86 or x86-64 processors. The improvements, including the new x86 port, are described in detail, and performance measures are given, including a direct comparison of OCamlJIT2 to OCamlJIT.
[ { "created": "Mon, 29 Nov 2010 13:24:11 GMT", "version": "v1" }, { "created": "Wed, 11 May 2011 08:37:18 GMT", "version": "v2" }, { "created": "Tue, 27 Sep 2011 14:14:21 GMT", "version": "v3" } ]
2011-09-28
[ [ "Meurer", "Benedikt", "" ] ]
This paper presents various improvements that were applied to OCamlJIT2, a Just-In-Time compiler for the OCaml byte-code virtual machine. OCamlJIT2 currently runs on various Unix-like systems with x86 or x86-64 processors. The improvements, including the new x86 port, are described in detail, and performance measures are given, including a direct comparison of OCamlJIT2 to OCamlJIT.
2307.12082
Bruce Jin
Siyuan Jin, Mianmian Zhang, Yekai Guo, Yuejiang He, Ziyuan Li, Bichao Chen, Bing Zhu, and Yong Xia
Software Code Quality Measurement: Implications from Metric Distributions
The paper has been accepted for presentation at IEEE QRS 2023. Unfortunately, due to authorship limits, Mianmian Zhang, Yekai Guo, and Yuejiang He could not be included as co-authors. However, we gratefully acknowledge their valuable contributions to this work and use this arXiv version to prove their contributions
2023 IEEE 23rd International Conference on Software Quality, Reliability, and Security (QRS), Chiang Mai, Thailand, 2023, pp. 488-496
10.1109/QRS60937.2023.00054
null
cs.SE
http://creativecommons.org/licenses/by/4.0/
Software code quality is a construct with three dimensions: maintainability, reliability, and functionality. Although many firms have incorporated code quality metrics in their operations, evaluating these metrics still lacks consistent standards. We categorized distinct metrics into two types: 1) monotonic metrics that consistently influence code quality; and 2) non-monotonic metrics that lack a consistent relationship with code quality. To consistently evaluate them, we proposed a distribution-based method to get metric scores. Our empirical analysis includes 36,460 high-quality open-source software (OSS) repositories and their raw metrics from SonarQube and CK. The evaluated scores demonstrate great explainability on software adoption. Our work contributes to the multi-dimensional construct of code quality and its metric measurements, which provides practical implications for consistent measurements on both monotonic and non-monotonic metrics.
[ { "created": "Sat, 22 Jul 2023 13:55:42 GMT", "version": "v1" }, { "created": "Wed, 9 Aug 2023 00:38:24 GMT", "version": "v2" }, { "created": "Sun, 1 Oct 2023 02:42:56 GMT", "version": "v3" }, { "created": "Tue, 16 Jan 2024 11:32:21 GMT", "version": "v4" } ]
2024-01-17
[ [ "Jin", "Siyuan", "" ], [ "Zhang", "Mianmian", "" ], [ "Guo", "Yekai", "" ], [ "He", "Yuejiang", "" ], [ "Li", "Ziyuan", "" ], [ "Chen", "Bichao", "" ], [ "Zhu", "Bing", "" ], [ "Xia", "Yong", "" ] ]
Software code quality is a construct with three dimensions: maintainability, reliability, and functionality. Although many firms have incorporated code quality metrics in their operations, evaluating these metrics still lacks consistent standards. We categorized distinct metrics into two types: 1) monotonic metrics that consistently influence code quality; and 2) non-monotonic metrics that lack a consistent relationship with code quality. To evaluate them consistently, we proposed a distribution-based method to compute metric scores. Our empirical analysis includes 36,460 high-quality open-source software (OSS) repositories and their raw metrics from SonarQube and CK. The evaluated scores demonstrate strong explanatory power for software adoption. Our work contributes to the multi-dimensional construct of code quality and its metric measurements, which provides practical implications for consistent measurements on both monotonic and non-monotonic metrics.
2304.06190
Honglin Bao
Honglin Bao and Misha Teplitskiy
Do "bad" citations have "good" effects?
Main: 28 pages, one table, 5 figures; Appendix: 11 pages, 13 figures
null
null
null
cs.DL cs.CY cs.MA nlin.AO
http://creativecommons.org/licenses/by/4.0/
The scientific community discourages authors of research papers from citing papers that did not influence them. Such "rhetorical" citations are assumed to degrade the literature and incentives for good work. While a world where authors cite only substantively appears attractive, we argue that mandating substantive citing may have underappreciated consequences on the allocation of attention and dynamism in scientific literatures. We develop a novel agent-based model in which agents cite substantively and rhetorically. Agents first select papers to read based on their expected quality, read them and observe their actual quality, become influenced by those that are sufficiently good, and substantively cite them. Next, agents fill any remaining slots in the reference lists by (rhetorically) citing papers that support their narrative, regardless of whether they were actually influential. By turning rhetorical citing on-and-off, we find that rhetorical citing increases the correlation between quality and citations, increases citation churn, and reduces citation inequality. This occurs because rhetorical citing redistributes some citations from a stable set of elite-quality papers to a more dynamic set with high-to-moderate quality and high rhetorical value. Increasing the size of reference lists, often seen as an undesirable trend, amplifies the effects. In sum, rhetorical citing helps deconcentrate attention and makes it easier to displace incumbent ideas, so whether it is indeed undesirable depends on the metrics used to judge desirability.
[ { "created": "Wed, 12 Apr 2023 23:42:06 GMT", "version": "v1" }, { "created": "Sun, 16 Apr 2023 19:08:22 GMT", "version": "v2" } ]
2023-04-18
[ [ "Bao", "Honglin", "" ], [ "Teplitskiy", "Misha", "" ] ]
The scientific community discourages authors of research papers from citing papers that did not influence them. Such "rhetorical" citations are assumed to degrade the literature and incentives for good work. While a world where authors cite only substantively appears attractive, we argue that mandating substantive citing may have underappreciated consequences on the allocation of attention and dynamism in scientific literatures. We develop a novel agent-based model in which agents cite substantively and rhetorically. Agents first select papers to read based on their expected quality, read them and observe their actual quality, become influenced by those that are sufficiently good, and substantively cite them. Next, agents fill any remaining slots in the reference lists by (rhetorically) citing papers that support their narrative, regardless of whether they were actually influential. By turning rhetorical citing on-and-off, we find that rhetorical citing increases the correlation between quality and citations, increases citation churn, and reduces citation inequality. This occurs because rhetorical citing redistributes some citations from a stable set of elite-quality papers to a more dynamic set with high-to-moderate quality and high rhetorical value. Increasing the size of reference lists, often seen as an undesirable trend, amplifies the effects. In sum, rhetorical citing helps deconcentrate attention and makes it easier to displace incumbent ideas, so whether it is indeed undesirable depends on the metrics used to judge desirability.
1702.07825
Andrew Gibiansky
Sercan O. Arik, Mike Chrzanowski, Adam Coates, Gregory Diamos, Andrew Gibiansky, Yongguo Kang, Xian Li, John Miller, Andrew Ng, Jonathan Raiman, Shubho Sengupta, Mohammad Shoeybi
Deep Voice: Real-time Neural Text-to-Speech
Submitted to ICML 2017
null
null
null
cs.CL cs.LG cs.NE cs.SD
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present Deep Voice, a production-quality text-to-speech system constructed entirely from deep neural networks. Deep Voice lays the groundwork for truly end-to-end neural speech synthesis. The system comprises five major building blocks: a segmentation model for locating phoneme boundaries, a grapheme-to-phoneme conversion model, a phoneme duration prediction model, a fundamental frequency prediction model, and an audio synthesis model. For the segmentation model, we propose a novel way of performing phoneme boundary detection with deep neural networks using connectionist temporal classification (CTC) loss. For the audio synthesis model, we implement a variant of WaveNet that requires fewer parameters and trains faster than the original. By using a neural network for each component, our system is simpler and more flexible than traditional text-to-speech systems, where each component requires laborious feature engineering and extensive domain expertise. Finally, we show that inference with our system can be performed faster than real time and describe optimized WaveNet inference kernels on both CPU and GPU that achieve up to 400x speedups over existing implementations.
[ { "created": "Sat, 25 Feb 2017 03:11:04 GMT", "version": "v1" }, { "created": "Tue, 7 Mar 2017 23:09:23 GMT", "version": "v2" } ]
2017-03-09
[ [ "Arik", "Sercan O.", "" ], [ "Chrzanowski", "Mike", "" ], [ "Coates", "Adam", "" ], [ "Diamos", "Gregory", "" ], [ "Gibiansky", "Andrew", "" ], [ "Kang", "Yongguo", "" ], [ "Li", "Xian", "" ], [ "Miller", "John", "" ], [ "Ng", "Andrew", "" ], [ "Raiman", "Jonathan", "" ], [ "Sengupta", "Shubho", "" ], [ "Shoeybi", "Mohammad", "" ] ]
We present Deep Voice, a production-quality text-to-speech system constructed entirely from deep neural networks. Deep Voice lays the groundwork for truly end-to-end neural speech synthesis. The system comprises five major building blocks: a segmentation model for locating phoneme boundaries, a grapheme-to-phoneme conversion model, a phoneme duration prediction model, a fundamental frequency prediction model, and an audio synthesis model. For the segmentation model, we propose a novel way of performing phoneme boundary detection with deep neural networks using connectionist temporal classification (CTC) loss. For the audio synthesis model, we implement a variant of WaveNet that requires fewer parameters and trains faster than the original. By using a neural network for each component, our system is simpler and more flexible than traditional text-to-speech systems, where each component requires laborious feature engineering and extensive domain expertise. Finally, we show that inference with our system can be performed faster than real time and describe optimized WaveNet inference kernels on both CPU and GPU that achieve up to 400x speedups over existing implementations.
2203.09893
Juan Jos\'e Bosch
Rachel M. Bittner, Juan Jos\'e Bosch, David Rubinstein, Gabriel Meseguer-Brocal, Sebastian Ewert
A Lightweight Instrument-Agnostic Model for Polyphonic Note Transcription and Multipitch Estimation
null
null
null
null
cs.SD cs.LG eess.AS
http://creativecommons.org/licenses/by-nc-sa/4.0/
Automatic Music Transcription (AMT) has been recognized as a key enabling technology with a wide range of applications. Given the task's complexity, best results have typically been reported for systems focusing on specific settings, e.g. instrument-specific systems tend to yield improved results over instrument-agnostic methods. Similarly, higher accuracy can be obtained when only estimating frame-wise $f_0$ values and neglecting the harder note event detection. Despite their high accuracy, such specialized systems often cannot be deployed in the real-world. Storage and network constraints prohibit the use of multiple specialized models, while memory and run-time constraints limit their complexity. In this paper, we propose a lightweight neural network for musical instrument transcription, which supports polyphonic outputs and generalizes to a wide variety of instruments (including vocals). Our model is trained to jointly predict frame-wise onsets, multipitch and note activations, and we experimentally show that this multi-output structure improves the resulting frame-level note accuracy. Despite its simplicity, benchmark results show our system's note estimation to be substantially better than a comparable baseline, and its frame-level accuracy to be only marginally below those of specialized state-of-the-art AMT systems. With this work we hope to encourage the community to further investigate low-resource, instrument-agnostic AMT systems.
[ { "created": "Fri, 18 Mar 2022 12:07:36 GMT", "version": "v1" }, { "created": "Thu, 12 May 2022 16:24:07 GMT", "version": "v2" } ]
2022-05-13
[ [ "Bittner", "Rachel M.", "" ], [ "Bosch", "Juan José", "" ], [ "Rubinstein", "David", "" ], [ "Meseguer-Brocal", "Gabriel", "" ], [ "Ewert", "Sebastian", "" ] ]
Automatic Music Transcription (AMT) has been recognized as a key enabling technology with a wide range of applications. Given the task's complexity, best results have typically been reported for systems focusing on specific settings, e.g. instrument-specific systems tend to yield improved results over instrument-agnostic methods. Similarly, higher accuracy can be obtained when only estimating frame-wise $f_0$ values and neglecting the harder note event detection. Despite their high accuracy, such specialized systems often cannot be deployed in the real world. Storage and network constraints prohibit the use of multiple specialized models, while memory and run-time constraints limit their complexity. In this paper, we propose a lightweight neural network for musical instrument transcription, which supports polyphonic outputs and generalizes to a wide variety of instruments (including vocals). Our model is trained to jointly predict frame-wise onsets, multipitch and note activations, and we experimentally show that this multi-output structure improves the resulting frame-level note accuracy. Despite its simplicity, benchmark results show our system's note estimation to be substantially better than a comparable baseline, and its frame-level accuracy to be only marginally below those of specialized state-of-the-art AMT systems. With this work we hope to encourage the community to further investigate low-resource, instrument-agnostic AMT systems.
2311.14722
Karmvir Singh Phogat
Karmvir Singh Phogat, Chetan Harsha, Sridhar Dasaratha, Shashishekar Ramakrishna, Sai Akhil Puranam
Zero-Shot Question Answering over Financial Documents using Large Language Models
null
null
null
null
cs.CL cs.AI cs.LG
http://creativecommons.org/licenses/by-nc-nd/4.0/
We introduce a large language model (LLM) based approach to answer complex questions requiring multi-hop numerical reasoning over financial reports. While LLMs have exhibited remarkable performance on various natural language and reasoning tasks, complex reasoning problems often rely on few-shot prompts that require carefully crafted examples. In contrast, our approach uses novel zero-shot prompts that guide the LLM to encode the required reasoning into a Python program or a domain specific language. The generated program is then executed by a program interpreter, thus mitigating the limitations of LLM in performing accurate arithmetic calculations. We evaluate the proposed approach on three financial datasets using some of the recently developed generative pretrained transformer (GPT) models and perform comparisons with various zero-shot baselines. The experimental results demonstrate that our approach significantly improves the accuracy for all the LLMs over their respective baselines. We provide a detailed analysis of the results, generating insights to support our findings. The success of our approach demonstrates the enormous potential to extract complex domain specific numerical reasoning by designing zero-shot prompts to effectively exploit the knowledge embedded in LLMs.
[ { "created": "Sun, 19 Nov 2023 16:23:34 GMT", "version": "v1" } ]
2023-11-28
[ [ "Phogat", "Karmvir Singh", "" ], [ "Harsha", "Chetan", "" ], [ "Dasaratha", "Sridhar", "" ], [ "Ramakrishna", "Shashishekar", "" ], [ "Puranam", "Sai Akhil", "" ] ]
We introduce a large language model (LLM) based approach to answer complex questions requiring multi-hop numerical reasoning over financial reports. While LLMs have exhibited remarkable performance on various natural language and reasoning tasks, complex reasoning problems often rely on few-shot prompts that require carefully crafted examples. In contrast, our approach uses novel zero-shot prompts that guide the LLM to encode the required reasoning into a Python program or a domain-specific language. The generated program is then executed by a program interpreter, thus mitigating the limitations of LLMs in performing accurate arithmetic calculations. We evaluate the proposed approach on three financial datasets using some of the recently developed generative pretrained transformer (GPT) models and perform comparisons with various zero-shot baselines. The experimental results demonstrate that our approach significantly improves the accuracy for all the LLMs over their respective baselines. We provide a detailed analysis of the results, generating insights to support our findings. The success of our approach demonstrates the enormous potential to extract complex domain-specific numerical reasoning by designing zero-shot prompts to effectively exploit the knowledge embedded in LLMs.
1406.2023
Gian Luca Pozzato
Laura Giordano, Valentina Gliozzi, Nicola Olivetti, Gian Luca Pozzato
Rational Closure in SHIQ
30 pages, extended version of paper accepted to DL2014
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We define a notion of rational closure for the logic SHIQ, which does not enjoys the finite model property, building on the notion of rational closure introduced by Lehmann and Magidor in [23]. We provide a semantic characterization of rational closure in SHIQ in terms of a preferential semantics, based on a finite rank characterization of minimal models. We show that the rational closure of a TBox can be computed in EXPTIME using entailment in SHIQ.
[ { "created": "Sun, 8 Jun 2014 20:16:30 GMT", "version": "v1" } ]
2014-06-10
[ [ "Giordano", "Laura", "" ], [ "Gliozzi", "Valentina", "" ], [ "Olivetti", "Nicola", "" ], [ "Pozzato", "Gian Luca", "" ] ]
We define a notion of rational closure for the logic SHIQ, which does not enjoy the finite model property, building on the notion of rational closure introduced by Lehmann and Magidor in [23]. We provide a semantic characterization of rational closure in SHIQ in terms of a preferential semantics, based on a finite rank characterization of minimal models. We show that the rational closure of a TBox can be computed in EXPTIME using entailment in SHIQ.
2303.12445
Leo Milecki
Leo Milecki, Vicky Kalogeiton, Sylvain Bodard, Dany Anglicheau, Jean-Michel Correas, Marc-Olivier Timsit, Maria Vakalopoulou
MEDIMP: 3D Medical Images with clinical Prompts from limited tabular data for renal transplantation
null
null
null
null
cs.CV cs.AI
http://creativecommons.org/licenses/by/4.0/
Renal transplantation emerges as the most effective solution for end-stage renal disease. Arising from complex causes, a substantial risk of chronic transplant dysfunction persists and may lead to graft loss. Medical imaging plays a substantial role in renal transplant monitoring in clinical practice. However, graft supervision is multi-disciplinary, notably joining nephrology, urology, and radiology, while identifying robust biomarkers from such high-dimensional and complex data for prognosis is challenging. In this work, taking inspiration from the recent success of Large Language Models (LLMs), we propose MEDIMP -- Medical Images with clinical Prompts -- a model to learn meaningful multi-modal representations of renal transplant Dynamic Contrast-Enhanced Magnetic Resonance Imaging (DCE MRI) by incorporating structural clinicobiological data after translating them into text prompts. MEDIMP is based on contrastive learning from joint text-image paired embeddings to perform this challenging task. Moreover, we propose a framework that generates medical prompts using automatic textual data augmentations from LLMs. Our goal is to learn meaningful manifolds of renal transplant DCE MRI, relevant to the prognosis of the transplant or patient status (2, 3, and 4 years after the transplant), fully exploiting the limited available multi-modal data most efficiently. Extensive experiments and comparisons with other renal transplant representation learning methods with limited data prove the effectiveness of MEDIMP in a relevant clinical setting, giving new directions toward medical prompts. Our code is available at https://github.com/leomlck/MEDIMP.
[ { "created": "Wed, 22 Mar 2023 10:30:43 GMT", "version": "v1" }, { "created": "Sat, 29 Apr 2023 15:42:49 GMT", "version": "v2" } ]
2023-05-02
[ [ "Milecki", "Leo", "" ], [ "Kalogeiton", "Vicky", "" ], [ "Bodard", "Sylvain", "" ], [ "Anglicheau", "Dany", "" ], [ "Correas", "Jean-Michel", "" ], [ "Timsit", "Marc-Olivier", "" ], [ "Vakalopoulou", "Maria", "" ] ]
Renal transplantation emerges as the most effective solution for end-stage renal disease. Arising from complex causes, a substantial risk of chronic transplant dysfunction persists and may lead to graft loss. Medical imaging plays a substantial role in renal transplant monitoring in clinical practice. However, graft supervision is multi-disciplinary, notably joining nephrology, urology, and radiology, while identifying robust biomarkers from such high-dimensional and complex data for prognosis is challenging. In this work, taking inspiration from the recent success of Large Language Models (LLMs), we propose MEDIMP -- Medical Images with clinical Prompts -- a model to learn meaningful multi-modal representations of renal transplant Dynamic Contrast-Enhanced Magnetic Resonance Imaging (DCE MRI) by incorporating structural clinicobiological data after translating them into text prompts. MEDIMP is based on contrastive learning from joint text-image paired embeddings to perform this challenging task. Moreover, we propose a framework that generates medical prompts using automatic textual data augmentations from LLMs. Our goal is to learn meaningful manifolds of renal transplant DCE MRI, relevant to the prognosis of the transplant or patient status (2, 3, and 4 years after the transplant), fully exploiting the limited available multi-modal data most efficiently. Extensive experiments and comparisons with other renal transplant representation learning methods with limited data prove the effectiveness of MEDIMP in a relevant clinical setting, giving new directions toward medical prompts. Our code is available at https://github.com/leomlck/MEDIMP.
1505.06228
Fatma Elghannam Rashad
Fatma Elghannam, Tarek El-Shishtawy
Keyphrase Based Evaluation of Automatic Text Summarization
4 pages, 1 figure, 3 tables
International Journal of Computer Applications 117(7):5-8, May 2015. ISBN : 973-93-80886-51-2
10.5120/20564-2953
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The development of methods to deal with the informative contents of the text units in the matching process is a major challenge in automatic summary evaluation systems that use fixed n-gram matching. This limitation causes inaccurate matching between units in peer and reference summaries. The present study introduces a new Keyphrase based Summary Evaluator, KpEval, for evaluating automatic summaries. KpEval relies on keyphrases since they convey the most important concepts of a text. In the evaluation process, the keyphrases are used in their lemma form as the matching text unit. The system was applied to evaluate different summaries of the Arabic multi-document data set presented at TAC2011. The results showed that the new evaluation technique correlates well with the known evaluation systems: Rouge1, Rouge2, RougeSU4, and AutoSummENG MeMoG. KpEval has the strongest correlation with AutoSummENG MeMoG; the Pearson and Spearman correlation coefficients are 0.8840 and 0.9667, respectively.
[ { "created": "Fri, 22 May 2015 21:12:35 GMT", "version": "v1" } ]
2015-05-26
[ [ "Elghannam", "Fatma", "" ], [ "El-Shishtawy", "Tarek", "" ] ]
The development of methods to deal with the informative contents of the text units in the matching process is a major challenge in automatic summary evaluation systems that use fixed n-gram matching. This limitation causes inaccurate matching between units in peer and reference summaries. The present study introduces a new Keyphrase based Summary Evaluator, KpEval, for evaluating automatic summaries. KpEval relies on keyphrases since they convey the most important concepts of a text. In the evaluation process, the keyphrases are used in their lemma form as the matching text unit. The system was applied to evaluate different summaries of the Arabic multi-document data set presented at TAC2011. The results showed that the new evaluation technique correlates well with the known evaluation systems: Rouge1, Rouge2, RougeSU4, and AutoSummENG MeMoG. KpEval has the strongest correlation with AutoSummENG MeMoG; the Pearson and Spearman correlation coefficients are 0.8840 and 0.9667, respectively.
2004.09990
Alexander Martin Mussgnug
Alexander M. Mussgnug
A Philosophy of Data
null
null
null
null
cs.DB cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We argue that while this discourse on data ethics is of critical importance, it is missing one fundamental point: If more and more efforts in business, government, science, and our daily lives are data-driven, we should pay more attention to what exactly we are driven by. Therefore, we need more debate on what fundamental properties constitute data. In the first section of the paper, we work from the fundamental properties necessary for statistical computation to a definition of statistical data. We define a statistical datum as the coming together of substantive and numerical properties and differentiate between qualitative and quantitative data. Subsequently, we qualify our definition by arguing that for data to be practically useful, it needs to be commensurable in a manner that reveals meaningful differences that allow for the generation of relevant insights through statistical methodologies. In the second section, we focus on what our conception of data can contribute to the discourse on data ethics and beyond. First, we hold that the need for useful data to be commensurable rules out an understanding of properties as fundamentally unique or equal. Second, we argue that practical concerns lead us to increasingly standardize how we operationalize a substantive property; in other words, how we formalize the relationship between the substantive and numerical properties of data. Thereby, we also standardize the interpretation of a property. With our increasing reliance on data and data technologies, these two characteristics of data affect our collective conception of reality. Statistical data's exclusion of the fundamentally unique and equal influences our perspective on the world, and the standardization of substantive properties can be viewed as profound ontological practice, entrenching ever more pervasive interpretations of phenomena in our everyday lives.
[ { "created": "Wed, 15 Apr 2020 14:47:24 GMT", "version": "v1" }, { "created": "Wed, 20 May 2020 12:36:57 GMT", "version": "v2" } ]
2020-05-21
[ [ "Mussgnug", "Alexander M.", "" ] ]
We argue that while this discourse on data ethics is of critical importance, it is missing one fundamental point: If more and more efforts in business, government, science, and our daily lives are data-driven, we should pay more attention to what exactly we are driven by. Therefore, we need more debate on what fundamental properties constitute data. In the first section of the paper, we work from the fundamental properties necessary for statistical computation to a definition of statistical data. We define a statistical datum as the coming together of substantive and numerical properties and differentiate between qualitative and quantitative data. Subsequently, we qualify our definition by arguing that for data to be practically useful, it needs to be commensurable in a manner that reveals meaningful differences that allow for the generation of relevant insights through statistical methodologies. In the second section, we focus on what our conception of data can contribute to the discourse on data ethics and beyond. First, we hold that the need for useful data to be commensurable rules out an understanding of properties as fundamentally unique or equal. Second, we argue that practical concerns lead us to increasingly standardize how we operationalize a substantive property; in other words, how we formalize the relationship between the substantive and numerical properties of data. Thereby, we also standardize the interpretation of a property. With our increasing reliance on data and data technologies, these two characteristics of data affect our collective conception of reality. Statistical data's exclusion of the fundamentally unique and equal influences our perspective on the world, and the standardization of substantive properties can be viewed as profound ontological practice, entrenching ever more pervasive interpretations of phenomena in our everyday lives.
2312.06406
Andrew Murdoch Mr.
Andrew Murdoch, Johannes Cornelius Schoeman, Hendrik Willem Jordaan
Partial End-to-end Reinforcement Learning for Robustness Against Modelling Error in Autonomous Racing
Submitted to IEEE Transactions on Intelligent Transport Systems
null
null
null
cs.RO cs.AI
http://creativecommons.org/licenses/by-nc-sa/4.0/
In this paper, we address the issue of increasing the performance of reinforcement learning (RL) solutions for autonomous racing cars when navigating under conditions where practical vehicle modelling errors (commonly known as \emph{model mismatches}) are present. To address this challenge, we propose a partial end-to-end algorithm that decouples the planning and control tasks. Within this framework, an RL agent generates a trajectory comprising a path and velocity, which is subsequently tracked using a pure pursuit steering controller and a proportional velocity controller, respectively. In contrast, many current learning-based (i.e., reinforcement and imitation learning) algorithms utilise an end-to-end approach whereby a deep neural network directly maps from sensor data to control commands. By leveraging the robustness of a classical controller, our partial end-to-end driving algorithm exhibits better robustness towards model mismatches than standard end-to-end algorithms.
[ { "created": "Mon, 11 Dec 2023 14:27:10 GMT", "version": "v1" }, { "created": "Mon, 5 Aug 2024 17:00:00 GMT", "version": "v2" } ]
2024-08-06
[ [ "Murdoch", "Andrew", "" ], [ "Schoeman", "Johannes Cornelius", "" ], [ "Jordaan", "Hendrik Willem", "" ] ]
In this paper, we address the issue of increasing the performance of reinforcement learning (RL) solutions for autonomous racing cars when navigating under conditions where practical vehicle modelling errors (commonly known as \emph{model mismatches}) are present. To address this challenge, we propose a partial end-to-end algorithm that decouples the planning and control tasks. Within this framework, an RL agent generates a trajectory comprising a path and velocity, which is subsequently tracked using a pure pursuit steering controller and a proportional velocity controller, respectively. In contrast, many current learning-based (i.e., reinforcement and imitation learning) algorithms utilise an end-to-end approach whereby a deep neural network directly maps from sensor data to control commands. By leveraging the robustness of a classical controller, our partial end-to-end driving algorithm exhibits better robustness towards model mismatches than standard end-to-end algorithms.
1907.10887
Davide Fucci
Valentina Lenarduzzi and Davide Fucci
Towards an Holistic Definition of Requirements Debt
null
ESEM2019 Vision paper track
null
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
When not appropriately managed, technical debt is considered to have negative effects on the long-term success of a software project. However, how the debt metaphor applies to requirements engineering in general, and to requirements engineering activities in particular, is not well understood. Grounded in the existing literature, we present a holistic definition of requirements debt which includes debt incurred during the identification, formalization, and implementation of requirements. We outline future assessments to validate and further refine our proposed definition. This conceptualization is the first step towards a requirements debt monitoring framework to support stakeholders' decisions, such as when to incur and eventually pay back requirements debt, and at what cost.
[ { "created": "Thu, 25 Jul 2019 08:01:17 GMT", "version": "v1" } ]
2019-07-26
[ [ "Lenarduzzi", "Valentina", "" ], [ "Fucci", "Davide", "" ] ]
When not appropriately managed, technical debt is considered to have negative effects on the long-term success of a software project. However, how the debt metaphor applies to requirements engineering in general, and to requirements engineering activities in particular, is not well understood. Grounded in the existing literature, we present a holistic definition of requirements debt which includes debt incurred during the identification, formalization, and implementation of requirements. We outline future assessments to validate and further refine our proposed definition. This conceptualization is the first step towards a requirements debt monitoring framework to support stakeholders' decisions, such as when to incur and eventually pay back requirements debt, and at what cost.
2304.09617
Amir Masoud Ghalamzan Esfahani
Vishnu Rajendran S, Bappaditya Debnath, Sariah Mghames, Willow Mandil, Soran Parsa, Simon Parsons, Amir Ghalamzan-E
Towards Autonomous Selective Harvesting: A Review of Robot Perception, Robot Design, Motion Planning and Control
Preprint: to be appeared in Journal of Field Robotics
null
null
null
cs.RO
http://creativecommons.org/licenses/by-sa/4.0/
This paper provides an overview of the current state-of-the-art in selective harvesting robots (SHRs) and their potential for addressing the challenges of global food production. SHRs have the potential to increase productivity, reduce labour costs, and minimise food waste by selectively harvesting only ripe fruits and vegetables. The paper discusses the main components of SHRs, including perception, grasping, cutting, motion planning, and control. It also highlights the challenges in developing SHR technologies, particularly in the areas of robot design, motion planning and control. The paper also discusses the potential benefits of integrating AI, soft robots, and data-driven methods to enhance the performance and robustness of SHR systems. Finally, the paper identifies several open research questions in the field and highlights the need for further research and development efforts to advance SHR technologies to meet the challenges of global food production. Overall, this paper provides a starting point for researchers and practitioners interested in developing SHRs and highlights the need for more research in this field.
[ { "created": "Wed, 19 Apr 2023 12:50:15 GMT", "version": "v1" } ]
2023-04-20
[ [ "S", "Vishnu Rajendran", "" ], [ "Debnath", "Bappaditya", "" ], [ "Mghames", "Sariah", "" ], [ "Mandil", "Willow", "" ], [ "Parsa", "Soran", "" ], [ "Parsons", "Simon", "" ], [ "Ghalamzan-E", "Amir", "" ] ]
This paper provides an overview of the current state-of-the-art in selective harvesting robots (SHRs) and their potential for addressing the challenges of global food production. SHRs have the potential to increase productivity, reduce labour costs, and minimise food waste by selectively harvesting only ripe fruits and vegetables. The paper discusses the main components of SHRs, including perception, grasping, cutting, motion planning, and control. It also highlights the challenges in developing SHR technologies, particularly in the areas of robot design, motion planning and control. The paper also discusses the potential benefits of integrating AI, soft robots, and data-driven methods to enhance the performance and robustness of SHR systems. Finally, the paper identifies several open research questions in the field and highlights the need for further research and development efforts to advance SHR technologies to meet the challenges of global food production. Overall, this paper provides a starting point for researchers and practitioners interested in developing SHRs and highlights the need for more research in this field.
2103.16365
Zhenyi He
Nianchen Deng and Zhenyi He and Jiannan Ye and Budmonde Duinkharjav and Praneeth Chakravarthula and Xubo Yang and Qi Sun
FoV-NeRF: Foveated Neural Radiance Fields for Virtual Reality
9 pages
null
null
null
cs.GR cs.CV
http://creativecommons.org/licenses/by/4.0/
Virtual Reality (VR) is becoming ubiquitous with the rise of consumer displays and commercial VR platforms. Such displays require low latency and high quality rendering of synthetic imagery with reduced compute overheads. Recent advances in neural rendering showed promise of unlocking new possibilities in 3D computer graphics via image-based representations of virtual or physical environments. Specifically, the neural radiance fields (NeRF) demonstrated that photo-realistic quality and continuous view changes of 3D scenes can be achieved without loss of view-dependent effects. While NeRF can significantly benefit rendering for VR applications, it faces unique challenges posed by high field-of-view, high resolution, and stereoscopic/egocentric viewing, typically causing low quality and high latency of the rendered images. In VR, this not only harms the interaction experience but may also cause sickness. To tackle these problems toward six-degrees-of-freedom, egocentric, and stereo NeRF in VR, we present the first gaze-contingent 3D neural representation and view synthesis method. We incorporate the human psychophysics of visual- and stereo-acuity into an egocentric neural representation of 3D scenery. We then jointly optimize the latency/performance and visual quality while mutually bridging human perception and neural scene synthesis to achieve perceptually high-quality immersive interaction. We conducted both objective analysis and subjective studies to evaluate the effectiveness of our approach. We find that our method significantly reduces latency (up to 99% time reduction compared with NeRF) without loss of high-fidelity rendering (perceptually identical to full-resolution ground truth). The presented approach may serve as the first step toward future VR/AR systems that capture, teleport, and visualize remote environments in real-time.
[ { "created": "Tue, 30 Mar 2021 14:05:47 GMT", "version": "v1" }, { "created": "Fri, 22 Jul 2022 05:29:37 GMT", "version": "v2" } ]
2022-07-25
[ [ "Deng", "Nianchen", "" ], [ "He", "Zhenyi", "" ], [ "Ye", "Jiannan", "" ], [ "Duinkharjav", "Budmonde", "" ], [ "Chakravarthula", "Praneeth", "" ], [ "Yang", "Xubo", "" ], [ "Sun", "Qi", "" ] ]
Virtual Reality (VR) is becoming ubiquitous with the rise of consumer displays and commercial VR platforms. Such displays require low latency and high quality rendering of synthetic imagery with reduced compute overheads. Recent advances in neural rendering showed promise of unlocking new possibilities in 3D computer graphics via image-based representations of virtual or physical environments. Specifically, the neural radiance fields (NeRF) demonstrated that photo-realistic quality and continuous view changes of 3D scenes can be achieved without loss of view-dependent effects. While NeRF can significantly benefit rendering for VR applications, it faces unique challenges posed by high field-of-view, high resolution, and stereoscopic/egocentric viewing, typically causing low quality and high latency of the rendered images. In VR, this not only harms the interaction experience but may also cause sickness. To tackle these problems toward six-degrees-of-freedom, egocentric, and stereo NeRF in VR, we present the first gaze-contingent 3D neural representation and view synthesis method. We incorporate the human psychophysics of visual- and stereo-acuity into an egocentric neural representation of 3D scenery. We then jointly optimize the latency/performance and visual quality while mutually bridging human perception and neural scene synthesis to achieve perceptually high-quality immersive interaction. We conducted both objective analysis and subjective studies to evaluate the effectiveness of our approach. We find that our method significantly reduces latency (up to 99% time reduction compared with NeRF) without loss of high-fidelity rendering (perceptually identical to full-resolution ground truth). The presented approach may serve as the first step toward future VR/AR systems that capture, teleport, and visualize remote environments in real-time.
2211.13808
Rushikesh Zawar
Rushikesh Zawar, Krupa Bhayani, Neelanjan Bhowmik, Kamlesh Tiwari and Dhiraj Sangwan
Detecting Anomalies using Generative Adversarial Networks on Images
null
null
null
null
cs.CV eess.IV
http://creativecommons.org/licenses/by/4.0/
Automatic detection of anomalies such as weapons or threat objects in baggage security, or detecting impaired items in industrial production, is an important computer vision task demanding high efficiency and accuracy. Most of the available data in the anomaly detection task is imbalanced, as the number of positive/anomalous instances is sparse. Inadequate availability of the data makes training a deep neural network architecture for anomaly detection challenging. This paper proposes a novel Generative Adversarial Network (GAN) based model for anomaly detection. It uses normal (non-anomalous) images to learn normality, based on which it detects whether an input image contains an anomalous/threat object. The proposed model uses a generator with an encoder-decoder network having dense convolutional skip connections for enhanced reconstruction and to capture the data distribution. A self-attention-augmented discriminator is used, able to check the consistency of detailed features even in distant portions of the image. We use spectral normalisation to facilitate stable and improved training of the GAN. Experiments are performed on three datasets, viz. CIFAR-10, MVTec AD (for industrial applications) and SIXray (for X-ray baggage security). On the MVTec AD and SIXray datasets, our model achieves an improvement of up to 21% and 4.6%, respectively.
[ { "created": "Thu, 24 Nov 2022 21:52:25 GMT", "version": "v1" } ]
2022-11-28
[ [ "Zawar", "Rushikesh", "" ], [ "Bhayani", "Krupa", "" ], [ "Bhowmik", "Neelanjan", "" ], [ "Tiwari", "Kamlesh", "" ], [ "Sangwan", "Dhiraj", "" ] ]
Automatic detection of anomalies such as weapons or threat objects in baggage security, or detecting impaired items in industrial production, is an important computer vision task demanding high efficiency and accuracy. Most of the available data in the anomaly detection task is imbalanced, as the number of positive/anomalous instances is sparse. Inadequate availability of the data makes training a deep neural network architecture for anomaly detection challenging. This paper proposes a novel Generative Adversarial Network (GAN) based model for anomaly detection. It uses normal (non-anomalous) images to learn normality, based on which it detects whether an input image contains an anomalous/threat object. The proposed model uses a generator with an encoder-decoder network having dense convolutional skip connections for enhanced reconstruction and to capture the data distribution. A self-attention-augmented discriminator is used, able to check the consistency of detailed features even in distant portions of the image. We use spectral normalisation to facilitate stable and improved training of the GAN. Experiments are performed on three datasets, viz. CIFAR-10, MVTec AD (for industrial applications) and SIXray (for X-ray baggage security). On the MVTec AD and SIXray datasets, our model achieves an improvement of up to 21% and 4.6%, respectively.
cs/0203014
Stephen F. Bush
Stephen F. Bush
Active Virtual Network Management Prediction: Complexity as a Framework for Prediction, Optimization, and Assurance
null
IEEE Computer Society Press, Proceedings of the 2002 DARPA Active Networks Conference and Exposition (DANCE 2002), May 29-31, 2002, San Francisco, California, USA
10.1109/DANCE.2002.1003518
null
cs.CC cs.NI
null
Research into active networking has provided the incentive to re-visit what has traditionally been classified as distinct properties and characteristics of information transfer such as protocol versus service; at a more fundamental level this paper considers the blending of computation and communication by means of complexity. The specific service examined in this paper is network self-prediction enabled by Active Virtual Network Management Prediction. Computation/communication is analyzed via Kolmogorov Complexity. The result is a mechanism to understand and improve the performance of active networking and Active Virtual Network Management Prediction in particular. The Active Virtual Network Management Prediction mechanism allows information, in various states of algorithmic and static form, to be transported in the service of prediction for network management. The results are generally applicable to algorithmic transmission of information. Kolmogorov Complexity is used and experimentally validated as a theory describing the relationship among algorithmic compression, complexity, and prediction accuracy within an active network. Finally, the paper concludes with a complexity-based framework for Information Assurance that attempts to take a holistic view of vulnerability analysis.
[ { "created": "Mon, 11 Mar 2002 23:09:16 GMT", "version": "v1" } ]
2016-11-17
[ [ "Bush", "Stephen F.", "" ] ]
Research into active networking has provided the incentive to re-visit what has traditionally been classified as distinct properties and characteristics of information transfer such as protocol versus service; at a more fundamental level this paper considers the blending of computation and communication by means of complexity. The specific service examined in this paper is network self-prediction enabled by Active Virtual Network Management Prediction. Computation/communication is analyzed via Kolmogorov Complexity. The result is a mechanism to understand and improve the performance of active networking and Active Virtual Network Management Prediction in particular. The Active Virtual Network Management Prediction mechanism allows information, in various states of algorithmic and static form, to be transported in the service of prediction for network management. The results are generally applicable to algorithmic transmission of information. Kolmogorov Complexity is used and experimentally validated as a theory describing the relationship among algorithmic compression, complexity, and prediction accuracy within an active network. Finally, the paper concludes with a complexity-based framework for Information Assurance that attempts to take a holistic view of vulnerability analysis.
2012.02782
Xiao-Yun Zhou
Xiao-Yun Zhou, Jiacheng Sun, Nanyang Ye, Xu Lan, Qijun Luo, Bo-Lin Lai, Pedro Esperanca, Guang-Zhong Yang, Zhenguo Li
Batch Group Normalization
8 pages
null
null
null
cs.LG cs.CV
http://creativecommons.org/licenses/by/4.0/
Deep Convolutional Neural Networks (DCNNs) are hard and time-consuming to train. Normalization is one of the effective solutions. Among previous normalization methods, Batch Normalization (BN) performs well at medium and large batch sizes and generalizes well to multiple vision tasks, while its performance degrades significantly at small batch sizes. In this paper, we find that BN also saturates at extremely large batch sizes, i.e., 128 images per worker (GPU), and propose that the degradation/saturation of BN at small/extremely large batch sizes is caused by noisy/confused statistic calculation. Hence, without adding new trainable parameters, using multi-layer or multi-iteration information, or introducing extra computation, Batch Group Normalization (BGN) is proposed to solve the noisy/confused statistic calculation of BN at small/extremely large batch sizes by introducing the channel, height, and width dimensions to compensate. The group technique in Group Normalization (GN) is used, and a hyper-parameter G controls the number of feature instances used for statistic calculation, hence offering statistics that are neither noisy nor confused across different batch sizes. We empirically demonstrate that BGN consistently outperforms BN, Instance Normalization (IN), Layer Normalization (LN), GN, and Positional Normalization (PN), across a wide spectrum of vision tasks, including image classification, Neural Architecture Search (NAS), adversarial learning, Few Shot Learning (FSL) and Unsupervised Domain Adaptation (UDA), indicating its good performance, robustness to batch size, and wide generalizability. For example, for training ResNet-50 on ImageNet with a batch size of 2, BN achieves Top1 accuracy of 66.512% while BGN achieves 76.096%, a notable improvement.
[ { "created": "Fri, 4 Dec 2020 18:57:52 GMT", "version": "v1" }, { "created": "Wed, 9 Dec 2020 01:26:51 GMT", "version": "v2" } ]
2020-12-10
[ [ "Zhou", "Xiao-Yun", "" ], [ "Sun", "Jiacheng", "" ], [ "Ye", "Nanyang", "" ], [ "Lan", "Xu", "" ], [ "Luo", "Qijun", "" ], [ "Lai", "Bo-Lin", "" ], [ "Esperanca", "Pedro", "" ], [ "Yang", "Guang-Zhong", "" ], [ "Li", "Zhenguo", "" ] ]
Deep Convolutional Neural Networks (DCNNs) are hard and time-consuming to train. Normalization is one of the effective solutions. Among previous normalization methods, Batch Normalization (BN) performs well at medium and large batch sizes and generalizes well to multiple vision tasks, while its performance degrades significantly at small batch sizes. In this paper, we find that BN also saturates at extremely large batch sizes, i.e., 128 images per worker (GPU), and propose that the degradation/saturation of BN at small/extremely large batch sizes is caused by noisy/confused statistic calculation. Hence, without adding new trainable parameters, using multi-layer or multi-iteration information, or introducing extra computation, Batch Group Normalization (BGN) is proposed to solve the noisy/confused statistic calculation of BN at small/extremely large batch sizes by introducing the channel, height, and width dimensions to compensate. The group technique in Group Normalization (GN) is used, and a hyper-parameter G controls the number of feature instances used for statistic calculation, hence offering statistics that are neither noisy nor confused across different batch sizes. We empirically demonstrate that BGN consistently outperforms BN, Instance Normalization (IN), Layer Normalization (LN), GN, and Positional Normalization (PN), across a wide spectrum of vision tasks, including image classification, Neural Architecture Search (NAS), adversarial learning, Few Shot Learning (FSL) and Unsupervised Domain Adaptation (UDA), indicating its good performance, robustness to batch size, and wide generalizability. For example, for training ResNet-50 on ImageNet with a batch size of 2, BN achieves Top1 accuracy of 66.512% while BGN achieves 76.096%, a notable improvement.
2404.11256
Xuesong Li
Xuesong Li, Zeeshan Hayder, Ali Zia, Connor Cassidy, Shiming Liu, Warwick Stiller, Eric Stone, Warren Conaty, Lars Petersson, Vivien Rolland
MMCBE: Multi-modality Dataset for Crop Biomass Estimation and Beyond
10 pages, 10 figures, 3 tables
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Crop biomass, a critical indicator of plant growth, health, and productivity, is invaluable for crop breeding programs and agronomic research. However, the accurate and scalable quantification of crop biomass remains inaccessible due to limitations in existing measurement methods. One of the obstacles impeding the advancement of current crop biomass prediction methodologies is the scarcity of publicly available datasets. Addressing this gap, we introduce a new dataset in this domain, i.e. Multi-modality dataset for crop biomass estimation (MMCBE). Comprising 216 sets of multi-view drone images, coupled with LiDAR point clouds, and hand-labelled ground truth, MMCBE represents the first multi-modality one in the field. This dataset aims to establish benchmark methods for crop biomass quantification and foster the development of vision-based approaches. We have rigorously evaluated state-of-the-art crop biomass estimation methods using MMCBE and ventured into additional potential applications, such as 3D crop reconstruction from drone imagery and novel-view rendering. With this publication, we are making our comprehensive dataset available to the broader community.
[ { "created": "Wed, 17 Apr 2024 11:06:42 GMT", "version": "v1" } ]
2024-04-18
[ [ "Li", "Xuesong", "" ], [ "Hayder", "Zeeshan", "" ], [ "Zia", "Ali", "" ], [ "Cassidy", "Connor", "" ], [ "Liu", "Shiming", "" ], [ "Stiller", "Warwick", "" ], [ "Stone", "Eric", "" ], [ "Conaty", "Warren", "" ], [ "Petersson", "Lars", "" ], [ "Rolland", "Vivien", "" ] ]
Crop biomass, a critical indicator of plant growth, health, and productivity, is invaluable for crop breeding programs and agronomic research. However, the accurate and scalable quantification of crop biomass remains inaccessible due to limitations in existing measurement methods. One of the obstacles impeding the advancement of current crop biomass prediction methodologies is the scarcity of publicly available datasets. Addressing this gap, we introduce a new dataset in this domain, i.e. Multi-modality dataset for crop biomass estimation (MMCBE). Comprising 216 sets of multi-view drone images, coupled with LiDAR point clouds, and hand-labelled ground truth, MMCBE represents the first multi-modality one in the field. This dataset aims to establish benchmark methods for crop biomass quantification and foster the development of vision-based approaches. We have rigorously evaluated state-of-the-art crop biomass estimation methods using MMCBE and ventured into additional potential applications, such as 3D crop reconstruction from drone imagery and novel-view rendering. With this publication, we are making our comprehensive dataset available to the broader community.
1609.06268
Faizan Javed
Yun Zhu, Faizan Javed, Ozgur Ozturk
Semantic Similarity Strategies for Job Title Classification
null
null
null
null
cs.AI cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Automatic and accurate classification of items enables numerous downstream applications in many domains. These applications can range from faceted browsing of items to product recommendations and big data analytics. In the online recruitment domain, we refer to classifying job ads to pre-defined or custom occupation categories as job title classification. A large-scale job title classification system can power various downstream applications such as semantic search, job recommendations and labor market analytics. In this paper, we discuss experiments conducted to improve our in-house job title classification system. The classification component of the system is composed of a two-stage coarse and fine level classifier cascade that classifies input text such as job title and/or job ads to one of the thousands of job titles in our taxonomy. To improve classification accuracy and effectiveness, we experiment with various semantic representation strategies such as average W2V vectors and document similarity measures such as Word Mover's Distance (WMD). Our initial results show an overall improvement in accuracy of Carotene[1].
[ { "created": "Tue, 20 Sep 2016 17:54:47 GMT", "version": "v1" } ]
2016-09-21
[ [ "Zhu", "Yun", "" ], [ "Javed", "Faizan", "" ], [ "Ozturk", "Ozgur", "" ] ]
Automatic and accurate classification of items enables numerous downstream applications in many domains. These applications can range from faceted browsing of items to product recommendations and big data analytics. In the online recruitment domain, we refer to classifying job ads to pre-defined or custom occupation categories as job title classification. A large-scale job title classification system can power various downstream applications such as semantic search, job recommendations and labor market analytics. In this paper, we discuss experiments conducted to improve our in-house job title classification system. The classification component of the system is composed of a two-stage coarse and fine level classifier cascade that classifies input text such as job title and/or job ads to one of the thousands of job titles in our taxonomy. To improve classification accuracy and effectiveness, we experiment with various semantic representation strategies such as average W2V vectors and document similarity measures such as Word Mover's Distance (WMD). Our initial results show an overall improvement in accuracy of Carotene[1].
2104.08158
Julian D. Cortes
Julian D. Cortes, Diego Garcia, Edgar Rodriguez, Diana Pineda
Governance for Security, Risks, Competition and Cooperation: Mapping the knowledge
null
null
null
null
cs.DL
http://creativecommons.org/licenses/by-nc-nd/4.0/
The study aims to generate a map of the knowledge based on the research on topics related to governance and security, risks, competition and cooperation for the FDDI (Fudan Development Institute) proceedings publishing project: 'Reflections on Governance: Security and Risks, Competition and Cooperation.' That mapping exercise would enable a broader audience to delve into the current state, and interdisciplinary pathways of the research published worldwide for addressing complex problems of governance. Following this introduction, the second section presents the bibliometric methods used and the results' interpretation. The third section presents the results, followed by the fourth and fifth sections of discussion and conclusion, respectively.
[ { "created": "Tue, 13 Apr 2021 18:42:33 GMT", "version": "v1" } ]
2021-04-19
[ [ "Cortes", "Julian D.", "" ], [ "Garcia", "Diego", "" ], [ "Rodriguez", "Edgar", "" ], [ "Pineda", "Diana", "" ] ]
The study aims to generate a map of the knowledge based on the research on topics related to governance and security, risks, competition and cooperation for the FDDI (Fudan Development Institute) proceedings publishing project: 'Reflections on Governance: Security and Risks, Competition and Cooperation.' That mapping exercise would enable a broader audience to delve into the current state, and interdisciplinary pathways of the research published worldwide for addressing complex problems of governance. Following this introduction, the second section presents the bibliometric methods used and the results' interpretation. The third section presents the results, followed by the fourth and fifth sections of discussion and conclusion, respectively.
2209.10077
Safa Medin
Safa C. Medin, Amir Weiss, Fr\'edo Durand, William T. Freeman, Gregory W. Wornell
Can Shadows Reveal Biometric Information?
null
null
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the problem of extracting biometric information of individuals by looking at shadows of objects cast on diffuse surfaces. We show that the biometric information leakage from shadows can be sufficient for reliable identity inference under representative scenarios via a maximum likelihood analysis. We then develop a learning-based method that demonstrates this phenomenon in real settings, exploiting the subtle cues in the shadows that are the source of the leakage without requiring any labeled real data. In particular, our approach relies on building synthetic scenes composed of 3D face models obtained from a single photograph of each identity. We transfer what we learn from the synthetic data to the real data using domain adaptation in a completely unsupervised way. Our model is able to generalize well to the real domain and is robust to several variations in the scenes. We report high classification accuracies in an identity classification task that takes place in a scene with unknown geometry and occluding objects.
[ { "created": "Wed, 21 Sep 2022 02:36:32 GMT", "version": "v1" }, { "created": "Tue, 4 Oct 2022 16:27:08 GMT", "version": "v2" } ]
2022-10-05
[ [ "Medin", "Safa C.", "" ], [ "Weiss", "Amir", "" ], [ "Durand", "Frédo", "" ], [ "Freeman", "William T.", "" ], [ "Wornell", "Gregory W.", "" ] ]
We study the problem of extracting biometric information of individuals by looking at shadows of objects cast on diffuse surfaces. We show that the biometric information leakage from shadows can be sufficient for reliable identity inference under representative scenarios via a maximum likelihood analysis. We then develop a learning-based method that demonstrates this phenomenon in real settings, exploiting the subtle cues in the shadows that are the source of the leakage without requiring any labeled real data. In particular, our approach relies on building synthetic scenes composed of 3D face models obtained from a single photograph of each identity. We transfer what we learn from the synthetic data to the real data using domain adaptation in a completely unsupervised way. Our model is able to generalize well to the real domain and is robust to several variations in the scenes. We report high classification accuracies in an identity classification task that takes place in a scene with unknown geometry and occluding objects.
1908.05012
Philip Sperl
Philip Sperl and Konstantin B\"ottinger
Side-Channel Aware Fuzzing
The final authenticated version is available online at: https://doi.org/10.1007/978-3-030-29959-0_13
K. Sako et al. (Eds.): ESORICS 2019, LNCS 11735, pp. 1-20, 2019
10.1007/978-3-030-29959-0_13
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Software testing is becoming a critical part of the development cycle of embedded devices, enabling vulnerability detection. A well-studied approach of software testing is fuzz-testing (fuzzing), during which mutated input is sent to an input-processing software while its behavior is monitored. The goal is to identify faulty states in the program, triggered by malformed inputs. Even though this technique is widely performed, fuzzing cannot be applied to embedded devices to its full extent. Due to the lack of adequately powerful I/O capabilities or an operating system the feedback needed for fuzzing cannot be acquired. In this paper we present and evaluate a new approach to extract feedback for fuzzing on embedded devices using information the power consumption leaks. Side-channel aware fuzzing is a threefold process that is initiated by sending an input to a target device and measuring its power consumption. First, we extract features from the power traces of the target device using machine learning algorithms. Subsequently, we use the features to reconstruct the code structure of the analyzed firmware. In the final step we calculate a score for the input, which is proportional to the code coverage. We carry out our proof of concept by fuzzing synthetic software and a light-weight AES implementation running on an ARM Cortex-M4 microcontroller. Our results show that the power side-channel carries information relevant for fuzzing.
[ { "created": "Wed, 14 Aug 2019 08:18:09 GMT", "version": "v1" } ]
2019-08-15
[ [ "Sperl", "Philip", "" ], [ "Böttinger", "Konstantin", "" ] ]
Software testing is becoming a critical part of the development cycle of embedded devices, enabling vulnerability detection. A well-studied approach of software testing is fuzz-testing (fuzzing), during which mutated input is sent to an input-processing software while its behavior is monitored. The goal is to identify faulty states in the program, triggered by malformed inputs. Even though this technique is widely performed, fuzzing cannot be applied to embedded devices to its full extent. Due to the lack of adequately powerful I/O capabilities or an operating system the feedback needed for fuzzing cannot be acquired. In this paper we present and evaluate a new approach to extract feedback for fuzzing on embedded devices using information the power consumption leaks. Side-channel aware fuzzing is a threefold process that is initiated by sending an input to a target device and measuring its power consumption. First, we extract features from the power traces of the target device using machine learning algorithms. Subsequently, we use the features to reconstruct the code structure of the analyzed firmware. In the final step we calculate a score for the input, which is proportional to the code coverage. We carry out our proof of concept by fuzzing synthetic software and a light-weight AES implementation running on an ARM Cortex-M4 microcontroller. Our results show that the power side-channel carries information relevant for fuzzing.
2104.10033
Manh Duong Phung
Manh Duong Phung and Quang Phuc Ha
Safety-enhanced UAV Path Planning with Spherical Vector-based Particle Swarm Optimization
null
Applied Soft Computing, Volume 107, August 2021, 107376
10.1016/j.asoc.2021.107376
null
cs.NE cs.AI cs.RO cs.SY eess.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents a new algorithm named spherical vector-based particle swarm optimization (SPSO) to deal with the problem of path planning for unmanned aerial vehicles (UAVs) in complicated environments subjected to multiple threats. A cost function is first formulated to convert the path planning into an optimization problem that incorporates requirements and constraints for the feasible and safe operation of the UAV. SPSO is then used to find the optimal path that minimizes the cost function by efficiently searching the configuration space of the UAV via the correspondence between the particle position and the speed, turn angle and climb/dive angle of the UAV. To evaluate the performance of SPSO, eight benchmarking scenarios have been generated from real digital elevation model maps. The results show that the proposed SPSO outperforms not only other particle swarm optimization (PSO) variants including the classic PSO, phase angle-encoded PSO and quantum-behave PSO but also other state-of-the-art metaheuristic optimization algorithms including the genetic algorithm (GA), artificial bee colony (ABC), and differential evolution (DE) in most scenarios. In addition, experiments have been conducted to demonstrate the validity of the generated paths for real UAV operations. Source code of the algorithm can be found at https://github.com/duongpm/SPSO.
[ { "created": "Tue, 13 Apr 2021 06:45:11 GMT", "version": "v1" } ]
2021-04-21
[ [ "Phung", "Manh Duong", "" ], [ "Ha", "Quang Phuc", "" ] ]
This paper presents a new algorithm named spherical vector-based particle swarm optimization (SPSO) to deal with the problem of path planning for unmanned aerial vehicles (UAVs) in complicated environments subjected to multiple threats. A cost function is first formulated to convert the path planning into an optimization problem that incorporates requirements and constraints for the feasible and safe operation of the UAV. SPSO is then used to find the optimal path that minimizes the cost function by efficiently searching the configuration space of the UAV via the correspondence between the particle position and the speed, turn angle and climb/dive angle of the UAV. To evaluate the performance of SPSO, eight benchmarking scenarios have been generated from real digital elevation model maps. The results show that the proposed SPSO outperforms not only other particle swarm optimization (PSO) variants including the classic PSO, phase angle-encoded PSO and quantum-behave PSO but also other state-of-the-art metaheuristic optimization algorithms including the genetic algorithm (GA), artificial bee colony (ABC), and differential evolution (DE) in most scenarios. In addition, experiments have been conducted to demonstrate the validity of the generated paths for real UAV operations. Source code of the algorithm can be found at https://github.com/duongpm/SPSO.
2107.08600
Huazi Zhang
Jiajie Tong, Xianbin Wang, Qifan Zhang, Huazi Zhang, Rong Li, Jun Wang, Wen Tong
Fast polar codes for terabits-per-second throughput communications
8 pages, 5 figures. Part of this paper was presented in an invited talk at the 2021 International Symposium on Information Theory (ISIT)
null
null
null
cs.IT cs.AR math.IT
http://creativecommons.org/licenses/by-nc-nd/4.0/
Targeting high-throughput and low-power communications, we implement two successive cancellation (SC) decoders for polar codes. With $16nm$ ASIC technology, the area efficiency and energy efficiency are $4Tbps/mm^2$ and $0.63pJ/bit$, respectively, for the unrolled decoder, and $561Gbps/mm^2$ and $1.21pJ/bit$, respectively, for the recursive decoder. To achieve such a high throughput, a novel code construction, coined fast polar codes, is proposed and jointly optimized with a highly parallel SC decoding architecture. First, we reuse existing modules to quickly decode more outer code blocks, and then modify the code construction to facilitate faster decoding for all outer code blocks up to a degree of parallelism of $16$. Furthermore, parallel comparison circuits and bit quantization schemes are customized for hardware implementation. Collectively, they contribute to a $2.66\times$ area efficiency improvement and $33\%$ energy saving over the state of the art.
[ { "created": "Mon, 19 Jul 2021 03:32:40 GMT", "version": "v1" } ]
2021-07-20
[ [ "Tong", "Jiajie", "" ], [ "Wang", "Xianbin", "" ], [ "Zhang", "Qifan", "" ], [ "Zhang", "Huazi", "" ], [ "Li", "Rong", "" ], [ "Wang", "Jun", "" ], [ "Tong", "Wen", "" ] ]
Targeting high-throughput and low-power communications, we implement two successive cancellation (SC) decoders for polar codes. With $16nm$ ASIC technology, the area efficiency and energy efficiency are $4Tbps/mm^2$ and $0.63pJ/bit$, respectively, for the unrolled decoder, and $561Gbps/mm^2$ and $1.21pJ/bit$, respectively, for the recursive decoder. To achieve such a high throughput, a novel code construction, coined fast polar codes, is proposed and jointly optimized with a highly parallel SC decoding architecture. First, we reuse existing modules to quickly decode more outer code blocks, and then modify the code construction to facilitate faster decoding for all outer code blocks up to a degree of parallelism of $16$. Furthermore, parallel comparison circuits and bit quantization schemes are customized for hardware implementation. Collectively, they contribute to a $2.66\times$ area efficiency improvement and $33\%$ energy saving over the state of the art.
1608.03676
Emery Berger
Charlie Curtsinger and Emery D. Berger
Coz: Finding Code that Counts with Causal Profiling
Published at SOSP 2015 (Best Paper Award)
Proceedings of the 25th Symposium on Operating Systems Principles (SOSP '15), 2015, 184-197
10.1145/2815400.2815409
null
cs.PF
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Improving performance is a central concern for software developers. To locate optimization opportunities, developers rely on software profilers. However, these profilers only report where programs spent their time: optimizing that code may have no impact on performance. Past profilers thus both waste developer time and make it difficult for them to uncover significant optimization opportunities. This paper introduces causal profiling. Unlike past profiling approaches, causal profiling indicates exactly where programmers should focus their optimization efforts, and quantifies their potential impact. Causal profiling works by running performance experiments during program execution. Each experiment calculates the impact of any potential optimization by virtually speeding up code: inserting pauses that slow down all other code running concurrently. The key insight is that this slowdown has the same relative effect as running that line faster, thus "virtually" speeding it up. We present Coz, a causal profiler, which we evaluate on a range of highly-tuned applications: Memcached, SQLite, and the PARSEC benchmark suite. Coz identifies previously unknown optimization opportunities that are both significant and targeted. Guided by Coz, we improve the performance of Memcached by 9%, SQLite by 25%, and accelerate six PARSEC applications by as much as 68%; in most cases, these optimizations involve modifying under 10 lines of code.
[ { "created": "Fri, 12 Aug 2016 04:58:16 GMT", "version": "v1" } ]
2016-08-15
[ [ "Curtsinger", "Charlie", "" ], [ "Berger", "Emery D.", "" ] ]
Improving performance is a central concern for software developers. To locate optimization opportunities, developers rely on software profilers. However, these profilers only report where programs spent their time: optimizing that code may have no impact on performance. Past profilers thus both waste developer time and make it difficult for them to uncover significant optimization opportunities. This paper introduces causal profiling. Unlike past profiling approaches, causal profiling indicates exactly where programmers should focus their optimization efforts, and quantifies their potential impact. Causal profiling works by running performance experiments during program execution. Each experiment calculates the impact of any potential optimization by virtually speeding up code: inserting pauses that slow down all other code running concurrently. The key insight is that this slowdown has the same relative effect as running that line faster, thus "virtually" speeding it up. We present Coz, a causal profiler, which we evaluate on a range of highly-tuned applications: Memcached, SQLite, and the PARSEC benchmark suite. Coz identifies previously unknown optimization opportunities that are both significant and targeted. Guided by Coz, we improve the performance of Memcached by 9%, SQLite by 25%, and accelerate six PARSEC applications by as much as 68%; in most cases, these optimizations involve modifying under 10 lines of code.
2104.15098
Immanuel Haffner
Immanuel Haffner, Jens Dittrich
Fast Compilation and Execution of SQL Queries with WebAssembly
12 pages
null
null
null
cs.DB
http://creativecommons.org/licenses/by-nc-sa/4.0/
Interpreted execution of queries, as in the vectorized model, suffers from interpretation overheads. By compiling queries this interpretation overhead is eliminated at the cost of a compilation phase that delays execution, sacrificing latency for throughput. For short-lived queries, minimizing latency is important, while for long-running queries throughput outweighs latency. Because neither a purely interpretive model nor a purely compiling model can provide low latency and high throughput, adaptive solutions emerged. Adaptive systems seamlessly transition from interpreted to compiled execution, achieving low latency for short-lived queries and high throughput for long-running queries. However, these adaptive systems pose an immense development effort and require expert knowledge in both interpreter and compiler design. In this work, we investigate query execution by compilation to WebAssembly. We are able to compile even complex queries in less than a millisecond to machine code with near-optimal performance. By delegating execution of WebAssembly to the V8 engine, we are able to seamlessly transition from rapidly compiled yet non-optimized code to thoroughly optimized code during execution. Our approach provides both low latency and high throughput, is adaptive out of the box, and is straightforward to implement. The drastically reduced compilation times even enable us to explore generative programming of library code, that is fully inlined by construction. Our experimental evaluation confirms that our approach yields competitive and sometimes superior performance.
[ { "created": "Fri, 30 Apr 2021 16:22:56 GMT", "version": "v1" }, { "created": "Mon, 3 May 2021 07:44:27 GMT", "version": "v2" } ]
2021-05-04
[ [ "Haffner", "Immanuel", "" ], [ "Dittrich", "Jens", "" ] ]
Interpreted execution of queries, as in the vectorized model, suffers from interpretation overheads. By compiling queries this interpretation overhead is eliminated at the cost of a compilation phase that delays execution, sacrificing latency for throughput. For short-lived queries, minimizing latency is important, while for long-running queries throughput outweighs latency. Because neither a purely interpretive model nor a purely compiling model can provide low latency and high throughput, adaptive solutions emerged. Adaptive systems seamlessly transition from interpreted to compiled execution, achieving low latency for short-lived queries and high throughput for long-running queries. However, these adaptive systems pose an immense development effort and require expert knowledge in both interpreter and compiler design. In this work, we investigate query execution by compilation to WebAssembly. We are able to compile even complex queries in less than a millisecond to machine code with near-optimal performance. By delegating execution of WebAssembly to the V8 engine, we are able to seamlessly transition from rapidly compiled yet non-optimized code to thoroughly optimized code during execution. Our approach provides both low latency and high throughput, is adaptive out of the box, and is straightforward to implement. The drastically reduced compilation times even enable us to explore generative programming of library code, that is fully inlined by construction. Our experimental evaluation confirms that our approach yields competitive and sometimes superior performance.
2102.07813
Jiwoong Im
Daniel Jiwoong Im, Cristina Savin, Kyunghyun Cho
Online hyperparameter optimization by real-time recurrent learning
null
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Conventional hyperparameter optimization methods are computationally intensive and hard to generalize to scenarios that require dynamically adapting hyperparameters, such as life-long learning. Here, we propose an online hyperparameter optimization algorithm that is asymptotically exact and computationally tractable, both theoretically and practically. Our framework takes advantage of the analogy between hyperparameter optimization and parameter learning in recurrent neural networks (RNNs). It adapts a well-studied family of online learning algorithms for RNNs to tune hyperparameters and network parameters simultaneously, without repeatedly rolling out iterative optimization. This procedure yields systematically better generalization performance compared to standard methods, at a fraction of wallclock time.
[ { "created": "Mon, 15 Feb 2021 19:36:18 GMT", "version": "v1" }, { "created": "Thu, 8 Apr 2021 17:50:01 GMT", "version": "v2" } ]
2021-04-09
[ [ "Im", "Daniel Jiwoong", "" ], [ "Savin", "Cristina", "" ], [ "Cho", "Kyunghyun", "" ] ]
Conventional hyperparameter optimization methods are computationally intensive and hard to generalize to scenarios that require dynamically adapting hyperparameters, such as life-long learning. Here, we propose an online hyperparameter optimization algorithm that is asymptotically exact and computationally tractable, both theoretically and practically. Our framework takes advantage of the analogy between hyperparameter optimization and parameter learning in recurrent neural networks (RNNs). It adapts a well-studied family of online learning algorithms for RNNs to tune hyperparameters and network parameters simultaneously, without repeatedly rolling out iterative optimization. This procedure yields systematically better generalization performance compared to standard methods, at a fraction of wallclock time.
2304.08211
Jose Nunez-Yanez Dr
Jose Nunez-Yanez, Andres Otero, Eduardo de la Torre
Dynamically Reconfigurable Variable-precision Sparse-Dense Matrix Acceleration in Tensorflow Lite
null
Microprocessors and Microsystems, Volume 98, 2023, 104801, ISSN 0141-9331
10.1016/j.micpro.2023.104801
null
cs.AR cs.AI
http://creativecommons.org/licenses/by/4.0/
In this paper, we present a dynamically reconfigurable hardware accelerator called FADES (Fused Architecture for DEnse and Sparse matrices). The FADES design offers multiple configuration options that trade off parallelism and complexity using a dataflow model to create four stages that read, compute, scale and write results. FADES is mapped to the programmable logic (PL) and integrated with the TensorFlow Lite inference engine running on the processing system (PS) of a heterogeneous SoC device. The accelerator is used to compute the tensor operations, while the dynamically reconfigurable approach can be used to switch precision between int8 and float modes. This dynamic reconfiguration enables better performance by allowing more cores to be mapped to the resource-constrained device and lower power consumption compared with supporting both arithmetic precisions simultaneously. We compare the proposed hardware with a high-performance systolic architecture for dense matrices obtaining 25% better performance in dense mode with half the DSP blocks in the same technology. In sparse mode, we show that the core can outperform dense mode even at low sparsity levels, and a single-core achieves up to 20x acceleration over the software-optimized NEON RUY library.
[ { "created": "Mon, 17 Apr 2023 12:31:50 GMT", "version": "v1" } ]
2023-04-18
[ [ "Nunez-Yanez", "Jose", "" ], [ "Otero", "Andres", "" ], [ "de la Torre", "Eduardo", "" ] ]
In this paper, we present a dynamically reconfigurable hardware accelerator called FADES (Fused Architecture for DEnse and Sparse matrices). The FADES design offers multiple configuration options that trade off parallelism and complexity using a dataflow model to create four stages that read, compute, scale and write results. FADES is mapped to the programmable logic (PL) and integrated with the TensorFlow Lite inference engine running on the processing system (PS) of a heterogeneous SoC device. The accelerator is used to compute the tensor operations, while the dynamically reconfigurable approach can be used to switch precision between int8 and float modes. This dynamic reconfiguration enables better performance by allowing more cores to be mapped to the resource-constrained device and lower power consumption compared with supporting both arithmetic precisions simultaneously. We compare the proposed hardware with a high-performance systolic architecture for dense matrices obtaining 25% better performance in dense mode with half the DSP blocks in the same technology. In sparse mode, we show that the core can outperform dense mode even at low sparsity levels, and a single-core achieves up to 20x acceleration over the software-optimized NEON RUY library.
2407.15549
Stephen Casper
Abhay Sheshadri, Aidan Ewart, Phillip Guo, Aengus Lynch, Cindy Wu, Vivek Hebbar, Henry Sleight, Asa Cooper Stickland, Ethan Perez, Dylan Hadfield-Menell, Stephen Casper
Targeted Latent Adversarial Training Improves Robustness to Persistent Harmful Behaviors in LLMs
null
null
null
null
cs.LG cs.AI cs.CL
http://creativecommons.org/licenses/by/4.0/
Large language models (LLMs) can often be made to behave in undesirable ways that they are explicitly fine-tuned not to. For example, the LLM red-teaming literature has produced a wide variety of `jailbreaking' techniques to elicit harmful text from models that were fine-tuned to be harmless. Recent work on red-teaming, model editing, and interpretability suggests that this challenge stems from how (adversarial) fine-tuning largely serves to suppress rather than remove undesirable capabilities from LLMs. Prior work has introduced latent adversarial training (LAT) as a way to improve robustness to broad classes of failures. These prior works have considered untargeted latent space attacks where the adversary perturbs latent activations to maximize loss on examples of desirable behavior. Untargeted LAT can provide a generic type of robustness but does not leverage information about specific failure modes. Here, we experiment with targeted LAT where the adversary seeks to minimize loss on a specific competing task. We find that it can augment a wide variety of state-of-the-art methods. First, we use targeted LAT to improve robustness to jailbreaks, outperforming a strong R2D2 baseline with orders of magnitude less compute. Second, we use it to more effectively remove backdoors with no knowledge of the trigger. Finally, we use it to more effectively unlearn knowledge for specific undesirable tasks in a way that is also more robust to re-learning. Overall, our results suggest that targeted LAT can be an effective tool for defending against harmful behaviors from LLMs.
[ { "created": "Mon, 22 Jul 2024 11:19:14 GMT", "version": "v1" } ]
2024-07-23
[ [ "Sheshadri", "Abhay", "" ], [ "Ewart", "Aidan", "" ], [ "Guo", "Phillip", "" ], [ "Lynch", "Aengus", "" ], [ "Wu", "Cindy", "" ], [ "Hebbar", "Vivek", "" ], [ "Sleight", "Henry", "" ], [ "Stickland", "Asa Cooper", "" ], [ "Perez", "Ethan", "" ], [ "Hadfield-Menell", "Dylan", "" ], [ "Casper", "Stephen", "" ] ]
Large language models (LLMs) can often be made to behave in undesirable ways that they are explicitly fine-tuned not to. For example, the LLM red-teaming literature has produced a wide variety of `jailbreaking' techniques to elicit harmful text from models that were fine-tuned to be harmless. Recent work on red-teaming, model editing, and interpretability suggests that this challenge stems from how (adversarial) fine-tuning largely serves to suppress rather than remove undesirable capabilities from LLMs. Prior work has introduced latent adversarial training (LAT) as a way to improve robustness to broad classes of failures. These prior works have considered untargeted latent space attacks where the adversary perturbs latent activations to maximize loss on examples of desirable behavior. Untargeted LAT can provide a generic type of robustness but does not leverage information about specific failure modes. Here, we experiment with targeted LAT where the adversary seeks to minimize loss on a specific competing task. We find that it can augment a wide variety of state-of-the-art methods. First, we use targeted LAT to improve robustness to jailbreaks, outperforming a strong R2D2 baseline with orders of magnitude less compute. Second, we use it to more effectively remove backdoors with no knowledge of the trigger. Finally, we use it to more effectively unlearn knowledge for specific undesirable tasks in a way that is also more robust to re-learning. Overall, our results suggest that targeted LAT can be an effective tool for defending against harmful behaviors from LLMs.
2004.04497
Monther Aldwairi
Monther Aldwairi, Suaad Mohammed, Megana Lakshmi Padmanabhan
Efficient and Secure Flash-based Gaming CAPTCHA
null
Journal of Parallel and Distributed Computing, 2020
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With the growth of connectivity to smart grids, new applications, and the changing interaction between customer and energy clouds, clouds are more vulnerable to denial-of-service attacks. Efficient detection methods are required to authenticate, detect and control attackers. The Completely Automated Public Turing test to tell Computers and Humans Apart (CAPTCHA) is one efficient tool to thwart denial-of-service attacks. The server presents the user with a client puzzle to solve in order to gain access to the service or website. The puzzle should be hard enough for computers, but easy for humans to solve. Several methods have been suggested, including the popular image-based, as well as video-based, and text-based CAPTCHAs. In this paper, we present a new Flash-based gaming CAPTCHA to differentiate bots from humans. We propose a drag-and-drop client puzzle where the user will play a simple game to answer a visual question. Our method turns out to be convenient, easy for users, and challenging for bots. Additionally, it has a gaming aspect, which makes it interesting to users of all age groups.
[ { "created": "Thu, 9 Apr 2020 11:52:59 GMT", "version": "v1" } ]
2020-04-10
[ [ "Aldwairi", "Monther", "" ], [ "Mohammed", "Suaad", "" ], [ "Padmanabhan", "Megana Lakshmi", "" ] ]
With the growth of connectivity to smart grids, new applications, and the changing interaction between customer and energy clouds, clouds are more vulnerable to denial-of-service attacks. Efficient detection methods are required to authenticate, detect and control attackers. The Completely Automated Public Turing test to tell Computers and Humans Apart (CAPTCHA) is one efficient tool to thwart denial-of-service attacks. The server presents the user with a client puzzle to solve in order to gain access to the service or website. The puzzle should be hard enough for computers, but easy for humans to solve. Several methods have been suggested, including the popular image-based, as well as video-based, and text-based CAPTCHAs. In this paper, we present a new Flash-based gaming CAPTCHA to differentiate bots from humans. We propose a drag-and-drop client puzzle where the user will play a simple game to answer a visual question. Our method turns out to be convenient, easy for users, and challenging for bots. Additionally, it has a gaming aspect, which makes it interesting to users of all age groups.
2104.01459
Namgil Lee
Namgil Lee, Heejung Yang, Hojin Yoo
A surrogate loss function for optimization of $F_\beta$ score in binary classification with imbalanced data
17 pages
null
null
null
cs.LG stat.ML
http://creativecommons.org/licenses/by/4.0/
The $F_\beta$ score is a commonly used measure of classification performance, which plays crucial roles in classification tasks with imbalanced data sets. However, the $F_\beta$ score cannot be used as a loss function by gradient-based learning algorithms for optimizing neural network parameters due to its non-differentiability. On the other hand, commonly used loss functions such as the binary cross-entropy (BCE) loss are not directly related to performance measures such as the $F_\beta$ score, so that neural networks optimized by using the loss functions may not yield optimal performance measures. In this study, we investigate a relationship between classification performance measures and loss functions in terms of the gradients with respect to the model parameters. Then, we propose a differentiable surrogate loss function for the optimization of the $F_\beta$ score. We show that the gradient paths of the proposed surrogate $F_\beta$ loss function approximate the gradient paths of the large sample limit of the $F_\beta$ score. Through numerical experiments using ResNets and benchmark image data sets, it is demonstrated that the proposed surrogate $F_\beta$ loss function is effective for optimizing $F_\beta$ scores under class imbalances in binary classification tasks compared with other loss functions.
[ { "created": "Sat, 3 Apr 2021 18:36:23 GMT", "version": "v1" } ]
2021-04-06
[ [ "Lee", "Namgil", "" ], [ "Yang", "Heejung", "" ], [ "Yoo", "Hojin", "" ] ]
The $F_\beta$ score is a commonly used measure of classification performance, which plays crucial roles in classification tasks with imbalanced data sets. However, the $F_\beta$ score cannot be used as a loss function by gradient-based learning algorithms for optimizing neural network parameters due to its non-differentiability. On the other hand, commonly used loss functions such as the binary cross-entropy (BCE) loss are not directly related to performance measures such as the $F_\beta$ score, so that neural networks optimized by using the loss functions may not yield optimal performance measures. In this study, we investigate a relationship between classification performance measures and loss functions in terms of the gradients with respect to the model parameters. Then, we propose a differentiable surrogate loss function for the optimization of the $F_\beta$ score. We show that the gradient paths of the proposed surrogate $F_\beta$ loss function approximate the gradient paths of the large sample limit of the $F_\beta$ score. Through numerical experiments using ResNets and benchmark image data sets, it is demonstrated that the proposed surrogate $F_\beta$ loss function is effective for optimizing $F_\beta$ scores under class imbalances in binary classification tasks compared with other loss functions.
1905.07357
Philipp Becker
Philipp Becker, Harit Pandya, Gregor Gebhardt, Cheng Zhao, James Taylor, Gerhard Neumann
Recurrent Kalman Networks: Factorized Inference in High-Dimensional Deep Feature Spaces
accepted at ICML 2019
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In order to integrate uncertainty estimates into deep time-series modelling, Kalman Filters (KFs) (Kalman et al., 1960) have been integrated with deep learning models. However, such approaches typically rely on approximate inference techniques such as variational inference, which makes learning more complex and often less scalable due to approximation errors. We propose a new deep approach to Kalman filtering which can be learned directly in an end-to-end manner using backpropagation without additional approximations. Our approach uses a high-dimensional factorized latent state representation for which the Kalman updates simplify to scalar operations, thus avoiding hard-to-backpropagate, computationally heavy, and potentially unstable matrix inversions. Moreover, we use locally linear dynamic models to efficiently propagate the latent state to the next time step. The resulting network architecture, which we call the Recurrent Kalman Network (RKN), can be used for any time-series data, similar to an LSTM (Hochreiter & Schmidhuber, 1997), but uses an explicit representation of uncertainty. As shown by our experiments, the RKN obtains much more accurate uncertainty estimates than an LSTM or Gated Recurrent Units (GRUs) (Cho et al., 2014), while also showing slightly improved prediction performance, and outperforms various recent generative models on an image imputation task.
[ { "created": "Fri, 17 May 2019 16:26:44 GMT", "version": "v1" } ]
2019-05-20
[ [ "Becker", "Philipp", "" ], [ "Pandya", "Harit", "" ], [ "Gebhardt", "Gregor", "" ], [ "Zhao", "Cheng", "" ], [ "Taylor", "James", "" ], [ "Neumann", "Gerhard", "" ] ]
In order to integrate uncertainty estimates into deep time-series modelling, Kalman Filters (KFs) (Kalman et al., 1960) have been integrated with deep learning models. However, such approaches typically rely on approximate inference techniques such as variational inference, which makes learning more complex and often less scalable due to approximation errors. We propose a new deep approach to Kalman filtering which can be learned directly in an end-to-end manner using backpropagation without additional approximations. Our approach uses a high-dimensional factorized latent state representation for which the Kalman updates simplify to scalar operations, thus avoiding hard-to-backpropagate, computationally heavy, and potentially unstable matrix inversions. Moreover, we use locally linear dynamic models to efficiently propagate the latent state to the next time step. The resulting network architecture, which we call the Recurrent Kalman Network (RKN), can be used for any time-series data, similar to an LSTM (Hochreiter & Schmidhuber, 1997), but uses an explicit representation of uncertainty. As shown by our experiments, the RKN obtains much more accurate uncertainty estimates than an LSTM or Gated Recurrent Units (GRUs) (Cho et al., 2014), while also showing slightly improved prediction performance, and outperforms various recent generative models on an image imputation task.
2002.12530
Furao Shen
Hongyan Hao, Yan Wang, Siqiao Xue, Yudi Xia, Jian Zhao, Furao Shen
Temporal Convolutional Attention-based Network For Sequence Modeling
7 pages, 3 figures
null
null
null
cs.CL cs.LG
http://creativecommons.org/licenses/by/4.0/
With the development of feed-forward models, the default model for sequence modeling has gradually shifted away from recurrent networks. Many powerful feed-forward models based on convolutional networks and attention mechanisms have been proposed and show strong potential for sequence modeling tasks. We ask whether there is an architecture that can not only serve as an approximate substitute for recurrent networks, but also absorb the advantages of feed-forward models. To this end, we propose an exploratory architecture, referred to as the Temporal Convolutional Attention-based Network (TCAN), which combines a temporal convolutional network with an attention mechanism. TCAN includes two parts: Temporal Attention (TA), which captures relevant features inside the sequence, and Enhanced Residual (ER), which extracts important information from shallow layers and transfers it to deep layers. We improve the state-of-the-art results of bpc/perplexity to 30.28 on word-level PTB, 1.092 on character-level PTB, and 9.20 on WikiText-2.
[ { "created": "Fri, 28 Feb 2020 03:53:31 GMT", "version": "v1" }, { "created": "Thu, 5 Mar 2020 00:16:05 GMT", "version": "v2" }, { "created": "Sat, 14 Oct 2023 03:48:04 GMT", "version": "v3" } ]
2023-10-17
[ [ "Hao", "Hongyan", "" ], [ "Wang", "Yan", "" ], [ "Xue", "Siqiao", "" ], [ "Xia", "Yudi", "" ], [ "Zhao", "Jian", "" ], [ "Shen", "Furao", "" ] ]
With the development of feed-forward models, the default model for sequence modeling has gradually shifted away from recurrent networks. Many powerful feed-forward models based on convolutional networks and attention mechanisms have been proposed and show strong potential for sequence modeling tasks. We ask whether there is an architecture that can not only serve as an approximate substitute for recurrent networks, but also absorb the advantages of feed-forward models. To this end, we propose an exploratory architecture, referred to as the Temporal Convolutional Attention-based Network (TCAN), which combines a temporal convolutional network with an attention mechanism. TCAN includes two parts: Temporal Attention (TA), which captures relevant features inside the sequence, and Enhanced Residual (ER), which extracts important information from shallow layers and transfers it to deep layers. We improve the state-of-the-art results of bpc/perplexity to 30.28 on word-level PTB, 1.092 on character-level PTB, and 9.20 on WikiText-2.
1304.5214
Mohab Safey El Din
Bernd Bank, Marc Giusti (LIX), Joos Heintz, Mohab Safey El Din (LIP6, INRIA Paris-Rocquencourt)
Intrinsic complexity estimates in polynomial optimization
null
null
null
null
cs.SC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
It is known that point searching in basic semialgebraic sets and the search for globally minimal points in polynomial optimization tasks can be carried out using $(s\,d)^{O(n)}$ arithmetic operations, where $n$ and $s$ are the numbers of variables and constraints and $d$ is the maximal degree of the polynomials involved. We associate to each of these problems an intrinsic system degree, which in the worst case is of order $(n\,d)^{O(n)}$ and which measures the intrinsic complexity of the task under consideration. We design non-uniformly deterministic or uniformly probabilistic algorithms of intrinsic, quasi-polynomial complexity which solve these problems.
[ { "created": "Thu, 18 Apr 2013 18:42:46 GMT", "version": "v1" }, { "created": "Mon, 10 Feb 2014 20:44:45 GMT", "version": "v2" } ]
2014-02-11
[ [ "Bank", "Bernd", "", "LIX" ], [ "Giusti", "Marc", "", "LIX" ], [ "Heintz", "Joos", "", "LIP6,\n INRIA Paris-Rocquencourt" ], [ "Din", "Mohab Safey El", "", "LIP6,\n INRIA Paris-Rocquencourt" ] ]
It is known that point searching in basic semialgebraic sets and the search for globally minimal points in polynomial optimization tasks can be carried out using $(s\,d)^{O(n)}$ arithmetic operations, where $n$ and $s$ are the numbers of variables and constraints and $d$ is the maximal degree of the polynomials involved. We associate to each of these problems an intrinsic system degree, which in the worst case is of order $(n\,d)^{O(n)}$ and which measures the intrinsic complexity of the task under consideration. We design non-uniformly deterministic or uniformly probabilistic algorithms of intrinsic, quasi-polynomial complexity which solve these problems.
2305.14044
Yago Fontenla-Seco
Yago Fontenla-Seco, Alberto Bugar\'in-Diz, Manuel Lama
Process-To-Text: A Framework for the Quantitative Description of Processes in Natural Language
This version of the article has been accepted for publication, after peer review and is subject to Springer Nature's AM terms of use, but is not the Version of Record and does not reflect postacceptance improvements, or any corrections. The Version of Record is available online at: http://dx.doi.org/10.1007/978-3-030-73959-1_19
null
10.1007/978-3-030-73959-1_19
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we present the Process-To-Text (P2T) framework for the automatic generation of textual descriptive explanations of processes. P2T integrates three AI paradigms: process mining for extracting temporal and structural information from a process, fuzzy linguistic protoforms for modelling uncertain terms, and natural language generation for building the explanations. A real use-case in the cardiology domain is presented, showing the potential of P2T for providing natural language explanations addressed to specialists.
[ { "created": "Tue, 23 May 2023 13:14:34 GMT", "version": "v1" } ]
2023-05-24
[ [ "Fontenla-Seco", "Yago", "" ], [ "Bugarín-Diz", "Alberto", "" ], [ "Lama", "Manuel", "" ] ]
In this paper we present the Process-To-Text (P2T) framework for the automatic generation of textual descriptive explanations of processes. P2T integrates three AI paradigms: process mining for extracting temporal and structural information from a process, fuzzy linguistic protoforms for modelling uncertain terms, and natural language generation for building the explanations. A real use-case in the cardiology domain is presented, showing the potential of P2T for providing natural language explanations addressed to specialists.
1911.00652
Xinyuan Yu
Jiaxiong Qiu, Xinyuan Yu, Guoqiang Yang and Shuaicheng Liu
DeepBlindness: Fast Blindness Map Estimation and Blindness Type Classification for Outdoor Scene from Single Color Image
null
null
null
null
cs.CV eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Outdoor vision robotic systems and autonomous cars suffer from many image-quality issues, particularly haze, defocus blur, and motion blur, which we will define generically as "blindness issues". These blindness issues may seriously affect the performance of robotic systems and could lead to unsafe decisions being made. However, existing solutions either focus on one type of blindness only or lack the ability to estimate the degree of blindness accurately. Moreover, these solutions require heavy computation and therefore cannot run in real time on practical systems. In this paper, we provide a method which can simultaneously detect the type of blindness and provide a blindness map indicating to what degree the vision is limited on a pixel-by-pixel basis. Both the blindness type and the estimate of per-pixel blindness are essential for tasks like deblurring, dehazing, or the fail-safe functioning of robotic systems. We demonstrate the effectiveness of our approach on the KITTI and CUHK datasets, where experiments show that our method outperforms other state-of-the-art approaches, achieving speeds of about 130 frames per second (fps).
[ { "created": "Sat, 2 Nov 2019 05:04:46 GMT", "version": "v1" } ]
2019-11-05
[ [ "Qiu", "Jiaxiong", "" ], [ "Yu", "Xinyuan", "" ], [ "Yang", "Guoqiang", "" ], [ "Liu", "Shuaicheng", "" ] ]
Outdoor vision robotic systems and autonomous cars suffer from many image-quality issues, particularly haze, defocus blur, and motion blur, which we will define generically as "blindness issues". These blindness issues may seriously affect the performance of robotic systems and could lead to unsafe decisions being made. However, existing solutions either focus on one type of blindness only or lack the ability to estimate the degree of blindness accurately. Moreover, these solutions require heavy computation and therefore cannot run in real time on practical systems. In this paper, we provide a method which can simultaneously detect the type of blindness and provide a blindness map indicating to what degree the vision is limited on a pixel-by-pixel basis. Both the blindness type and the estimate of per-pixel blindness are essential for tasks like deblurring, dehazing, or the fail-safe functioning of robotic systems. We demonstrate the effectiveness of our approach on the KITTI and CUHK datasets, where experiments show that our method outperforms other state-of-the-art approaches, achieving speeds of about 130 frames per second (fps).
2109.11018
Nicholas Mattei
Arie Glazier, Andrea Loreggia, Nicholas Mattei, Taher Rahgooy, Francesca Rossi, K. Brent Venable
Making Human-Like Trade-offs in Constrained Environments by Learning from Demonstrations
null
null
null
null
cs.AI cs.LG cs.RO
http://creativecommons.org/licenses/by/4.0/
Many real-life scenarios require humans to make difficult trade-offs: do we always follow all the traffic rules or do we violate the speed limit in an emergency? These scenarios force us to evaluate the trade-off between collective norms and our own personal objectives. To create effective AI-human teams, we must equip AI agents with a model of how humans make trade-offs in complex, constrained environments. These agents will be able to mirror human behavior or to draw human attention to situations where decision making could be improved. To this end, we propose a novel inverse reinforcement learning (IRL) method for learning implicit hard and soft constraints from demonstrations, enabling agents to quickly adapt to new settings. In addition, learning soft constraints over states, actions, and state features allows agents to transfer this knowledge to new domains that share similar aspects. We then use the constraint learning method to implement a novel system architecture that leverages a cognitive model of human decision making, multi-alternative decision field theory (MDFT), to orchestrate competing objectives. We evaluate the resulting agent on trajectory length, number of violated constraints, and total reward, demonstrating that our agent architecture is both general and achieves strong performance. Thus we are able to capture and replicate human-like trade-offs from demonstrations in environments when constraints are not explicit.
[ { "created": "Wed, 22 Sep 2021 20:12:01 GMT", "version": "v1" } ]
2021-09-24
[ [ "Glazier", "Arie", "" ], [ "Loreggia", "Andrea", "" ], [ "Mattei", "Nicholas", "" ], [ "Rahgooy", "Taher", "" ], [ "Rossi", "Francesca", "" ], [ "Venable", "K. Brent", "" ] ]
Many real-life scenarios require humans to make difficult trade-offs: do we always follow all the traffic rules or do we violate the speed limit in an emergency? These scenarios force us to evaluate the trade-off between collective norms and our own personal objectives. To create effective AI-human teams, we must equip AI agents with a model of how humans make trade-offs in complex, constrained environments. These agents will be able to mirror human behavior or to draw human attention to situations where decision making could be improved. To this end, we propose a novel inverse reinforcement learning (IRL) method for learning implicit hard and soft constraints from demonstrations, enabling agents to quickly adapt to new settings. In addition, learning soft constraints over states, actions, and state features allows agents to transfer this knowledge to new domains that share similar aspects. We then use the constraint learning method to implement a novel system architecture that leverages a cognitive model of human decision making, multi-alternative decision field theory (MDFT), to orchestrate competing objectives. We evaluate the resulting agent on trajectory length, number of violated constraints, and total reward, demonstrating that our agent architecture is both general and achieves strong performance. Thus we are able to capture and replicate human-like trade-offs from demonstrations in environments when constraints are not explicit.
2311.04648
Ruochun Zhang
Ruochun Zhang, Bonaventura Tagliafierro, Colin Vanden Heuvel, Shlok Sabarwal, Luning Bakke, Yulong Yue, Xin Wei, Radu Serban, Dan Negrut
Chrono DEM-Engine: A Discrete Element Method dual-GPU simulator with customizable contact forces and element shape
38 pages, 30 figures, 9 tables. This preprint is submitted to Computer Physics Communications
null
null
null
cs.CE cs.NA cs.SE math.NA
http://creativecommons.org/licenses/by/4.0/
This paper introduces DEM-Engine, a new submodule of Project Chrono, that is designed to carry out Discrete Element Method (DEM) simulations. Based on spherical primitive shapes, DEM-Engine can simulate polydisperse granular materials and handle complex shapes generated as assemblies of primitives, referred to as clumps. DEM-Engine has a multi-tier parallelized structure that is optimized to operate simultaneously on two GPUs. The code uses custom-defined data types to reduce memory footprint and increase bandwidth. A novel "delayed contact detection" algorithm allows the decoupling of the contact detection and force computation, thus splitting the workload into two asynchronous GPU streams. DEM-Engine uses just-in-time compilation to support user-defined contact force models. This paper discusses its C++ and Python interfaces and presents a variety of numerical tests, in which impact forces, complex-shaped particle flows, and a custom force model are validated considering well-known benchmark cases. Additionally, the full potential of the simulator is demonstrated for the investigation of extraterrestrial rover mobility on granular terrain. The chosen case study demonstrates that large-scale co-simulations (comprising 11 million elements) spanning 15 seconds, in conjunction with an external multi-body dynamics system, can be efficiently executed within a day. Lastly, a performance test suggests that DEM-Engine displays linear scaling up to 150 million elements on two NVIDIA A100 GPUs.
[ { "created": "Wed, 8 Nov 2023 12:48:35 GMT", "version": "v1" }, { "created": "Thu, 9 Nov 2023 13:57:50 GMT", "version": "v2" } ]
2023-11-10
[ [ "Zhang", "Ruochun", "" ], [ "Tagliafierro", "Bonaventura", "" ], [ "Heuvel", "Colin Vanden", "" ], [ "Sabarwal", "Shlok", "" ], [ "Bakke", "Luning", "" ], [ "Yue", "Yulong", "" ], [ "Wei", "Xin", "" ], [ "Serban", "Radu", "" ], [ "Negrut", "Dan", "" ] ]
This paper introduces DEM-Engine, a new submodule of Project Chrono, that is designed to carry out Discrete Element Method (DEM) simulations. Based on spherical primitive shapes, DEM-Engine can simulate polydisperse granular materials and handle complex shapes generated as assemblies of primitives, referred to as clumps. DEM-Engine has a multi-tier parallelized structure that is optimized to operate simultaneously on two GPUs. The code uses custom-defined data types to reduce memory footprint and increase bandwidth. A novel "delayed contact detection" algorithm allows the decoupling of the contact detection and force computation, thus splitting the workload into two asynchronous GPU streams. DEM-Engine uses just-in-time compilation to support user-defined contact force models. This paper discusses its C++ and Python interfaces and presents a variety of numerical tests, in which impact forces, complex-shaped particle flows, and a custom force model are validated considering well-known benchmark cases. Additionally, the full potential of the simulator is demonstrated for the investigation of extraterrestrial rover mobility on granular terrain. The chosen case study demonstrates that large-scale co-simulations (comprising 11 million elements) spanning 15 seconds, in conjunction with an external multi-body dynamics system, can be efficiently executed within a day. Lastly, a performance test suggests that DEM-Engine displays linear scaling up to 150 million elements on two NVIDIA A100 GPUs.