Dataset schema (field: type, length/count range):

- id: string, length 9-10
- submitter: string, length 1-64
- authors: string, length 4-20.7k
- title: string, length 4-246
- comments: string, length 1-523
- journal-ref: string, length 4-404
- doi: string, length 11-153
- report-no: string, length 2-254
- categories: string, length 5-98
- license: string class, 9 values
- orig_abstract: string, length 14-3.35k
- versions: list, length 1-60
- update_date: string, length 10-10
- authors_parsed: list, length 1-1.35k
- abstract: string, length 11-3.34k
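The schema above can be read as a record layout. Below is a minimal sketch of validating one record against it; the field names come from the schema, while the helper name `validate_record` and the example values are hypothetical:

```python
# Field names taken from the dataset schema above, in schema order.
REQUIRED_FIELDS = [
    "id", "submitter", "authors", "title", "comments", "journal-ref",
    "doi", "report-no", "categories", "license", "orig_abstract",
    "versions", "update_date", "authors_parsed", "abstract",
]

def validate_record(record):
    """Return a record carrying every schema field (absent fields become None)."""
    return {field: record.get(field) for field in REQUIRED_FIELDS}

# A partial record, as one might read it from the raw dump.
example = validate_record({
    "id": "2101.00395",
    "title": "Image-based Textile Decoding",
    "categories": "cs.CV",
})
```

Missing fields surface as explicit `None` values (shown as `null` in the records below), which keeps downstream code from special-casing absent keys.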
id: 2101.00395
submitter: Masahiro Toyoura
authors: Siqiang Chen, Masahiro Toyoura, Takamasa Terada, Xiaoyang Mao, Gang Xu
title: Image-based Textile Decoding
comments: null
journal-ref: Integrated Computer-Aided Engineering, Pre-press, pp. 1-14, 2020
doi: 10.3233/ICA-200647
report-no: null
categories: cs.CV
license: http://creativecommons.org/licenses/by/4.0/
abstract: A textile fabric consists of countless parallel vertical yarns (warps) and horizontal yarns (wefts). While common looms can weave only repetitive patterns, Jacquard looms can weave patterns without repetition restrictions. A pattern in which the warps and wefts cross on a grid is defined by a binary matrix, which specifies whether the warp or the weft is on top at each grid point of the Jacquard fabric. This process can be regarded as encoding a pattern into a textile. In this work, we propose a decoding method that generates a binary pattern from a textile fabric that has already been woven. A deep neural network could not learn the process from a training set of patterns and observed fabric images alone: the crossing points in the observed images do not fall exactly on the grid points, so it is difficult to establish a direct correspondence between the fabric images and the matrix-represented pattern within a deep learning framework. We therefore propose a method that applies the deep learning framework via an intermediate representation of patterns and images. We show how to convert a pattern into this intermediate representation and how to reconvert the output into a pattern, and we confirm the method's effectiveness. In our experiments, 93% of the correct pattern was recovered by decoding patterns from actual fabric images and weaving them again.
versions: [ { "created": "Sat, 2 Jan 2021 07:41:34 GMT", "version": "v1" } ]
update_date: 2021-01-05
authors_parsed: [ [ "Chen", "Siqiang", "" ], [ "Toyoura", "Masahiro", "" ], [ "Terada", "Takamasa", "" ], [ "Mao", "Xiaoyang", "" ], [ "Xu", "Gang", "" ] ]
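The abstract's binary weave matrix and its decoding-accuracy figure can be sketched concretely. This is a toy illustration, not the paper's method: the 4x4 plain-weave pattern and the single flipped cell are invented for the example.

```python
# A weave pattern as a binary matrix: 1 = warp on top, 0 = weft on top
# at each grid point (a 4x4 plain-weave checkerboard, chosen for illustration).
pattern = [[(r + c) % 2 for c in range(4)] for r in range(4)]

def decoding_accuracy(truth, decoded):
    """Fraction of grid points where the decoded pattern matches the truth."""
    cells = [(t, d) for tr, dr in zip(truth, decoded) for t, d in zip(tr, dr)]
    return sum(t == d for t, d in cells) / len(cells)

# A decoded pattern with one flipped cell out of 16.
decoded = [row[:] for row in pattern]
decoded[0][0] ^= 1
acc = decoding_accuracy(pattern, decoded)  # 15/16 = 0.9375
```

The paper's 93% figure is, under this reading, exactly such a per-grid-point match rate between the decoded matrix and the ground-truth pattern.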
id: 2001.03210
submitter: Porter Jenkins
authors: Porter Jenkins, Hua Wei, J. Stockton Jenkins, Zhenhui Li
title: A Probabilistic Simulator of Spatial Demand for Product Allocation
comments: 8 pages, The AAAI-20 Workshop on Intelligent Process Automation
journal-ref: null
doi: null
report-no: null
categories: cs.AI
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: Connecting consumers with relevant products is a very important problem in both online and offline commerce. In physical retail, product placement is an effective way to connect consumers with products. However, selecting product locations within a store can be a tedious process. Moreover, learning important spatial patterns in offline retail is challenging due to the scarcity of data and the high cost of exploration and experimentation in the physical world. To address these challenges, we propose a stochastic model of spatial demand in physical retail. We show that the proposed model is more predictive of demand than existing baselines. We also perform a preliminary study into different automation techniques and show that an optimal product allocation policy can be learned through Deep Q-Learning.
versions: [ { "created": "Thu, 9 Jan 2020 20:18:37 GMT", "version": "v1" } ]
update_date: 2020-01-13
authors_parsed: [ [ "Jenkins", "Porter", "" ], [ "Wei", "Hua", "" ], [ "Jenkins", "J. Stockton", "" ], [ "Li", "Zhenhui", "" ] ]
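The Q-learning mentioned at the end of the abstract can be sketched in its tabular form. The states, actions, and reward here are invented for illustration; the paper itself uses Deep Q-Learning over its spatial demand model, not a lookup table.

```python
# Hedged sketch: one tabular Q-learning step for a toy shelf-allocation task.
# Deep Q-Learning replaces the table q[s][a] with a neural network.
def q_update(q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(q[next_state].values())
    q[state][action] += alpha * (reward + gamma * best_next - q[state][action])

# Two shelf locations (states) and two products (actions), all Q-values zero.
q = {s: {a: 0.0 for a in ("product_A", "product_B")} for s in ("front", "back")}

# Placing product_A at the front earns reward 1.0 from the demand simulator.
q_update(q, "front", "product_A", reward=1.0, next_state="back")
```

After the single update, Q("front", "product_A") moves from 0.0 to alpha * reward = 0.1, while every other entry stays at zero.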
id: 2204.04694
submitter: Abram Handler
authors: Abram Handler, Narges Mahyar, Brendan O'Connor
title: ClioQuery: Interactive Query-Oriented Text Analytics for Comprehensive Investigation of Historical News Archives
comments: Forthcoming in ACM Transactions on Interactive Intelligent Systems (TiiS)
journal-ref: null
doi: null
report-no: null
categories: cs.HC cs.DL
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: Historians and archivists often find and analyze the occurrences of query words in newspaper archives, to help answer fundamental questions about society. But much work in text analytics focuses on helping people investigate other textual units, such as events, clusters, ranked documents, entity relationships, or thematic hierarchies. Informed by a study into the needs of historians and archivists, we thus propose ClioQuery, a text analytics system uniquely organized around the analysis of query words in context. ClioQuery applies text simplification techniques from natural language processing to help historians quickly and comprehensively gather and analyze all occurrences of a query word across an archive. It also pairs these new NLP methods with more traditional features like linked views and in-text highlighting to help engender trust in summarization techniques. We evaluate ClioQuery with two separate user studies, in which historians explain how ClioQuery's novel text simplification features can help facilitate historical research. We also evaluate with a separate quantitative comparison study, which shows that ClioQuery helps crowdworkers find and remember historical information. Such results suggest possible new directions for text analytics in other query-oriented settings.
versions: [ { "created": "Sun, 10 Apr 2022 14:21:24 GMT", "version": "v1" } ]
update_date: 2022-04-12
authors_parsed: [ [ "Handler", "Abram", "" ], [ "Mahyar", "Narges", "" ], [ "O'Connor", "Brendan", "" ] ]
1901.06958
Istv\'an Ketyk\'o
Istv\'an Ketyk\'o, Ferenc Kov\'acs and Kriszti\'an Zsolt Varga
Domain Adaptation for sEMG-based Gesture Recognition with Recurrent Neural Networks
Typos corrected
2019 International Joint Conference on Neural Networks (IJCNN), Budapest, Hungary, 2019, pp. 1-7
10.1109/IJCNN.2019.8852018
null
cs.LG cs.HC stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Surface Electromyography (sEMG/EMG) is to record muscles' electrical activity from a restricted area of the skin by using electrodes. The sEMG-based gesture recognition is extremely sensitive of inter-session and inter-subject variances. We propose a model and a deep-learning-based domain adaptation method to approximate the domain shift for recognition accuracy enhancement. Analysis performed on sparse and HighDensity (HD) sEMG public datasets validate that our approach outperforms state-of-the-art methods.
[ { "created": "Mon, 21 Jan 2019 15:19:01 GMT", "version": "v1" }, { "created": "Thu, 28 Nov 2019 15:51:32 GMT", "version": "v2" } ]
2019-12-02
[ [ "Ketykó", "István", "" ], [ "Kovács", "Ferenc", "" ], [ "Varga", "Krisztián Zsolt", "" ] ]
Surface Electromyography (sEMG/EMG) is to record muscles' electrical activity from a restricted area of the skin by using electrodes. The sEMG-based gesture recognition is extremely sensitive of inter-session and inter-subject variances. We propose a model and a deep-learning-based domain adaptation method to approximate the domain shift for recognition accuracy enhancement. Analysis performed on sparse and HighDensity (HD) sEMG public datasets validate that our approach outperforms state-of-the-art methods.
id: 2207.01166
submitter: Zhibo Yang
authors: Zhibo Yang, Sounak Mondal, Seoyoung Ahn, Gregory Zelinsky, Minh Hoai, Dimitris Samaras
title: Target-absent Human Attention
comments: Accepted to ECCV2022
journal-ref: null
doi: null
report-no: null
categories: cs.CV cs.AI
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: The prediction of human gaze behavior is important for building human-computer interactive systems that can anticipate a user's attention. Computer vision models have been developed to predict the fixations made by people as they search for target objects. But what about when the image has no target? Equally important is knowing how people search when they cannot find a target, and when they stop searching. In this paper, we propose the first data-driven computational model that addresses the search-termination problem and predicts the scanpath of search fixations made by people searching for targets that do not appear in images. We model visual search as an imitation learning problem and represent the internal knowledge that the viewer acquires through fixations using a novel state representation that we call Foveated Feature Maps (FFMs). FFMs integrate a simulated foveated retina into a pretrained ConvNet that produces an in-network feature pyramid, all with minimal computational overhead. Our method integrates FFMs as the state representation in inverse reinforcement learning. Experimentally, we improve the state of the art in predicting human target-absent search behavior on the COCO-Search18 dataset.
versions: [ { "created": "Mon, 4 Jul 2022 02:32:04 GMT", "version": "v1" }, { "created": "Fri, 26 Aug 2022 01:47:10 GMT", "version": "v2" }, { "created": "Wed, 2 Nov 2022 01:02:51 GMT", "version": "v3" } ]
update_date: 2022-11-03
authors_parsed: [ [ "Yang", "Zhibo", "" ], [ "Mondal", "Sounak", "" ], [ "Ahn", "Seoyoung", "" ], [ "Zelinsky", "Gregory", "" ], [ "Hoai", "Minh", "" ], [ "Samaras", "Dimitris", "" ] ]
id: 2405.19094
submitter: Julian Eisenschlos
authors: Syrine Krichene, Francesco Piccinno, Fangyu Liu, Julian Martin Eisenschlos
title: Faithful Chart Summarization with ChaTS-Pi
comments: To be published in the proceedings of the 2024 Annual Meeting of the Association for Computational Linguistics
journal-ref: null
doi: null
report-no: null
categories: cs.CL cs.AI
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: Chart-to-summary generation can help explore data, communicate insights, and assist visually impaired people. Multi-modal generative models have been used to produce fluent summaries, but they can suffer from factual and perceptual errors. In this work we present CHATS-CRITIC, a reference-free chart summarization metric for scoring faithfulness. CHATS-CRITIC is composed of an image-to-text model that recovers the table from a chart and a tabular entailment model that scores the summary sentence by sentence. We find that CHATS-CRITIC evaluates summary quality in better agreement with human ratings than reference-based metrics, either learned or n-gram based, and can further be used to repair candidate summaries by removing unsupported sentences. We then introduce CHATS-PI, a chart-to-summary pipeline that leverages CHATS-CRITIC during inference to fix and rank sampled candidates from any chart-summarization model. We evaluate CHATS-PI and CHATS-CRITIC using human raters, establishing state-of-the-art results on two popular chart-to-summary datasets.
versions: [ { "created": "Wed, 29 May 2024 13:55:06 GMT", "version": "v1" } ]
update_date: 2024-05-30
authors_parsed: [ [ "Krichene", "Syrine", "" ], [ "Piccinno", "Francesco", "" ], [ "Liu", "Fangyu", "" ], [ "Eisenschlos", "Julian Martin", "" ] ]
id: 2402.02053
submitter: Deheng Ye
authors: Yangbin Yu, Qin Zhang, Junyou Li, Qiang Fu, Deheng Ye
title: Affordable Generative Agents
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.AI cs.HC
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: The emergence of large language models (LLMs) has significantly advanced the simulation of believable interactive agents. However, the substantial cost of maintaining prolonged agent interactions poses a challenge to the deployment of believable LLM-based agents. Therefore, in this paper, we develop Affordable Generative Agents (AGA), a framework for enabling the generation of believable and low-cost interactions at both the agent-environment and inter-agent levels. Specifically, for agent-environment interactions, we substitute repetitive LLM inferences with learned policies; for inter-agent interactions, we model the social relationships between agents and compress auxiliary dialogue information. Extensive experiments on multiple environments show the effectiveness and efficiency of our proposed framework. We also delve into the mechanisms of emergent believable behaviors in LLM agents, demonstrating that agents can only generate finite behaviors in fixed environments, and based on this we identify ways to facilitate emergent interaction behaviors. Our code is publicly available at: \url{https://github.com/AffordableGenerativeAgents/Affordable-Generative-Agents}.
versions: [ { "created": "Sat, 3 Feb 2024 06:16:28 GMT", "version": "v1" } ]
update_date: 2024-02-06
authors_parsed: [ [ "Yu", "Yangbin", "" ], [ "Zhang", "Qin", "" ], [ "Li", "Junyou", "" ], [ "Fu", "Qiang", "" ], [ "Ye", "Deheng", "" ] ]
id: 2107.06243
submitter: Moniba Keymanesh
authors: Moniba Keymanesh, Tanya Berger-Wolf, Micha Elsner, Srinivasan Parthasarathy
title: Fairness-aware Summarization for Justified Decision-Making
comments: 22 pages, 9 figures
journal-ref: null
doi: null
report-no: null
categories: cs.AI cs.CL cs.CY
license: http://creativecommons.org/licenses/by/4.0/
abstract: In consequential domains such as recidivism prediction, facility inspection, and benefit assignment, it is important for individuals to know the decision-relevant information for the model's prediction. In addition, predictions should be fair both in terms of the outcome and the justification of the outcome. In other words, decision-relevant features should provide sufficient information for the predicted outcome and should be independent of the membership of individuals in protected groups such as race and gender. In this work, we focus on the problem of (un)fairness in the justification of text-based neural models. We tie the explanatory power of the model to fairness in the outcome and propose a fairness-aware summarization mechanism to detect and counteract the bias in such models. Given a potentially biased natural language explanation for a decision, we use a multi-task neural model and an attribution mechanism based on integrated gradients to extract high-utility and low-bias justifications in the form of a summary. The extracted summary is then used for training a model to make decisions for individuals. Results on several real-world datasets suggest that our method drastically limits demographic leakage in the input (fairness in justification) while moderately enhancing fairness in the outcome. Our model is also effective in detecting and counteracting several types of data poisoning attacks that synthesize race-coded reasoning or irrelevant justifications.
versions: [ { "created": "Tue, 13 Jul 2021 17:04:10 GMT", "version": "v1" }, { "created": "Wed, 9 Feb 2022 21:09:28 GMT", "version": "v2" } ]
update_date: 2022-02-11
authors_parsed: [ [ "Keymanesh", "Moniba", "" ], [ "Berger-Wolf", "Tanya", "" ], [ "Elsner", "Micha", "" ], [ "Parthasarathy", "Srinivasan", "" ] ]
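The attribution mechanism named in the abstract, integrated gradients, can be sketched for the simplest differentiable model. For a linear model the gradient is constant along the interpolation path, so the attribution reduces exactly to (input - baseline) * weight; the weights and inputs below are hypothetical, not from the paper.

```python
# Hedged sketch of integrated gradients for a linear model f(x) = sum(w_i * x_i).
# IG_i = (x_i - b_i) * average of df/dx_i along the straight path from b to x.
def integrated_gradients(weights, x, baseline, steps=50):
    attributions = []
    for i, w in enumerate(weights):
        # For a linear model df/dx_i = w_i at every interpolated point,
        # so the Riemann-sum average is just w_i.
        grad_mean = sum(w for _ in range(steps)) / steps
        attributions.append((x[i] - baseline[i]) * grad_mean)
    return attributions

attr = integrated_gradients([2.0, -1.0], x=[1.0, 3.0], baseline=[0.0, 0.0])
# Completeness: attributions sum to f(x) - f(baseline) = 2*1 - 1*3 = -1
```

The completeness property in the final comment is what makes integrated gradients suitable for ranking justification features by their contribution to the prediction.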
id: 1807.11086
submitter: Sajedul Talukder
authors: Sajedul Talukder, Shalisha Witherspoon, Kanishk Srivastava, Ryan Thompson
title: Mobile Technology in Healthcare Environment: Security Vulnerabilities and Countermeasures
comments: Technical Report
journal-ref: null
doi: null
report-no: null
categories: cs.CR cs.CY
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: Mobile devices and technologies offer tremendous benefits to users, but they also introduce a set of challenges with respect to security, compliance, and risk. More and more healthcare organizations have been seeking to update their outdated technology and have considered the adoption of mobile devices to meet these needs. However, introducing mobile devices and technology also introduces new risks and threats to the organization. As a test case, we examine Epic Rover, a mobile application that has been identified as a viable solution to manage the electronic medical system. In this paper, we study the insights that the security team needs to investigate before the adoption of this mobile technology, provide a thorough examination of the vulnerabilities and threats that the use of mobile devices in the healthcare environment brings, and introduce countermeasures and mitigations to reduce the risk while maintaining regulatory compliance.
versions: [ { "created": "Sun, 29 Jul 2018 17:22:24 GMT", "version": "v1" } ]
update_date: 2018-07-31
authors_parsed: [ [ "Talukder", "Sajedul", "" ], [ "Witherspoon", "Shalisha", "" ], [ "Srivastava", "Kanishk", "" ], [ "Thompson", "Ryan", "" ] ]
id: 1403.0240
submitter: Ivo Sbalzarini
authors: Ivo F. Sbalzarini, Sophie Schneider, Janick Cardinale
title: Particle methods enable fast and simple approximation of Sobolev gradients in image segmentation
comments: 21 pages, 10 figures
journal-ref: null
doi: null
report-no: null
categories: cs.CV cs.CE cs.NA q-bio.QM
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: Bio-image analysis is challenging due to inhomogeneous intensity distributions and high levels of noise in the images. Bayesian inference provides a principled way for regularizing the problem using prior knowledge. A fundamental choice is how one measures "distances" between shapes in an image. It has been shown that the straightforward geometric L2 distance is degenerate and leads to pathological situations. This is avoided when using Sobolev gradients, rendering the segmentation problem less ill-posed. The high computational cost and implementation overhead of Sobolev gradients, however, have hampered practical applications. We show how particle methods as applied to image segmentation allow for a simple and computationally efficient implementation of Sobolev gradients. We show that the evaluation of Sobolev gradients amounts to particle-particle interactions along the contour in an image. We extend an existing particle-based segmentation algorithm to using Sobolev gradients. Using synthetic and real-world images, we benchmark the results for both 2D and 3D images using piecewise smooth and piecewise constant region models. The present particle approximation of Sobolev gradients is 2.8 to 10 times faster than the previous reference implementation, but retains the known favorable properties of Sobolev gradients. This speedup is achieved by using local particle-particle interactions instead of solving a global Poisson equation at each iteration. The computational time per iteration is higher for Sobolev gradients than for L2 gradients. Since Sobolev gradients precondition the optimization problem, however, a smaller number of overall iterations may be necessary for the algorithm to converge, which can in some cases amortize the higher per-iteration cost.
versions: [ { "created": "Sun, 2 Mar 2014 16:58:29 GMT", "version": "v1" } ]
update_date: 2014-03-04
authors_parsed: [ [ "Sbalzarini", "Ivo F.", "" ], [ "Schneider", "Sophie", "" ], [ "Cardinale", "Janick", "" ] ]
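The abstract's claim that evaluating Sobolev gradients "amounts to particle-particle interactions along the contour" can be sketched as local smoothing of the plain L2 gradient over neighboring contour particles. The three-tap kernel and the gradient values below are hypothetical choices for illustration, not the paper's implementation.

```python
# Hedged sketch: a Sobolev-type gradient as local smoothing of the raw L2
# gradient carried by particles on a closed contour (kernel weights hypothetical).
def sobolev_gradient(l2_grad, weights=(0.25, 0.5, 0.25)):
    """Smooth each particle's gradient with its two contour neighbors."""
    n = len(l2_grad)
    return [
        weights[0] * l2_grad[(i - 1) % n]   # previous particle on the contour
        + weights[1] * l2_grad[i]           # the particle itself
        + weights[2] * l2_grad[(i + 1) % n] # next particle on the contour
        for i in range(n)
    ]

# A noisy spike in the raw gradient is spread over its neighbors.
raw = [0.0, 0.0, 4.0, 0.0]
smooth = sobolev_gradient(raw)  # [0.0, 1.0, 2.0, 1.0]
```

Because each particle only interacts with its immediate neighbors, the cost is linear in the number of contour particles, which is the locality argument behind the paper's speedup over solving a global Poisson equation per iteration.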
id: cs/0612022
submitter: Willemien Visser
authors: Willemien Visser (INRIA Rocquencourt)
title: Both Generic Design and Different Forms of Designing
comments: null
journal-ref: In Wonderground, the 2006 DRS (Design Research Society) International Conference (2006)
doi: null
report-no: null
categories: cs.HC
license: null
abstract: This paper defends an augmented cognitively oriented "generic-design hypothesis": There are both significant similarities between the design activities implemented in different situations and crucial differences between these and other cognitive activities; yet, characteristics of a design situation (i.e., related to the designers, the artefact, and other task variables influencing these two) introduce specificities in the corresponding design activities and cognitive structures that are used. We thus combine the generic-design hypothesis with that of different "forms" of designing. In this paper, outlining a number of directions that need further elaboration, we propose a series of candidate dimensions underlying such forms of design.
versions: [ { "created": "Mon, 4 Dec 2006 16:28:44 GMT", "version": "v1" } ]
update_date: 2007-05-23
authors_parsed: [ [ "Visser", "Willemien", "", "INRIA Rocquencourt" ] ]
2012.08419
Tarasha Khurana
Tarasha Khurana, Achal Dave, Deva Ramanan
Detecting Invisible People
Project page: http://www.cs.cmu.edu/~tkhurana/invisible.htm
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Monocular object detection and tracking have improved drastically in recent years, but rely on a key assumption: that objects are visible to the camera. Many offline tracking approaches reason about occluded objects post-hoc, by linking together tracklets after the object re-appears, making use of reidentification (ReID). However, online tracking in embodied robotic agents (such as a self-driving vehicle) fundamentally requires object permanence, which is the ability to reason about occluded objects before they re-appear. In this work, we re-purpose tracking benchmarks and propose new metrics for the task of detecting invisible objects, focusing on the illustrative case of people. We demonstrate that current detection and tracking systems perform dramatically worse on this task. We introduce two key innovations to recover much of this performance drop. We treat occluded object detection in temporal sequences as a short-term forecasting challenge, bringing to bear tools from dynamic sequence prediction. Second, we build dynamic models that explicitly reason in 3D, making use of observations produced by state-of-the-art monocular depth estimation networks. To our knowledge, ours is the first work to demonstrate the effectiveness of monocular depth estimation for the task of tracking and detecting occluded objects. Our approach strongly improves by 11.4% over the baseline in ablations and by 5.0% over the state-of-the-art in F1 score.
[ { "created": "Tue, 15 Dec 2020 16:54:45 GMT", "version": "v1" } ]
2020-12-16
[ [ "Khurana", "Tarasha", "" ], [ "Dave", "Achal", "" ], [ "Ramanan", "Deva", "" ] ]
Monocular object detection and tracking have improved drastically in recent years, but rely on a key assumption: that objects are visible to the camera. Many offline tracking approaches reason about occluded objects post-hoc, by linking together tracklets after the object re-appears, making use of reidentification (ReID). However, online tracking in embodied robotic agents (such as a self-driving vehicle) fundamentally requires object permanence, which is the ability to reason about occluded objects before they re-appear. In this work, we re-purpose tracking benchmarks and propose new metrics for the task of detecting invisible objects, focusing on the illustrative case of people. We demonstrate that current detection and tracking systems perform dramatically worse on this task. We introduce two key innovations to recover much of this performance drop. First, we treat occluded object detection in temporal sequences as a short-term forecasting challenge, bringing to bear tools from dynamic sequence prediction. Second, we build dynamic models that explicitly reason in 3D, making use of observations produced by state-of-the-art monocular depth estimation networks. To our knowledge, ours is the first work to demonstrate the effectiveness of monocular depth estimation for the task of tracking and detecting occluded objects. Our approach strongly improves by 11.4% over the baseline in ablations and by 5.0% over the state-of-the-art in F1 score.
2312.02771
Thomas Dalgaty Dr
Thomas Dalgaty, Shogo Yamada, Anca Molnos, Eiji Kawasaki, Thomas Mesquida, Fran\c{c}ois Rummens, Tatsuo Shibata, Yukihiro Urakawa, Yukio Terasaki, Tomoyuki Sasaki, Marc Duranton
Scaling-up Memristor Monte Carlo with magnetic domain-wall physics
Presented at the 1st workshop on Machine Learning with New Compute Paradigms (MLNCP) at NeurIPS 2023 (New Orleans, USA)
null
null
null
cs.ET physics.app-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
By exploiting the intrinsic random nature of nanoscale devices, Memristor Monte Carlo (MMC) is a promising enabler of edge learning systems. However, due to multiple algorithmic and device-level limitations, existing demonstrations have been restricted to very small neural network models and datasets. We discuss these limitations, and describe how they can be overcome, by mapping the stochastic gradient Langevin dynamics (SGLD) algorithm onto the physics of magnetic domain-wall Memristors to scale-up MMC models by five orders of magnitude. We propose the push-pull pulse programming method that realises SGLD in-physics, and use it to train a domain-wall based ResNet18 on the CIFAR-10 dataset. On this task, we observe no performance degradation relative to a floating point model down to an update precision of between 6 and 7-bits, indicating we have made a step towards a large-scale edge learning system leveraging noisy analogue devices.
[ { "created": "Tue, 5 Dec 2023 14:01:28 GMT", "version": "v1" } ]
2023-12-06
[ [ "Dalgaty", "Thomas", "" ], [ "Yamada", "Shogo", "" ], [ "Molnos", "Anca", "" ], [ "Kawasaki", "Eiji", "" ], [ "Mesquida", "Thomas", "" ], [ "Rummens", "François", "" ], [ "Shibata", "Tatsuo", "" ], [ "Urakawa", "Yukihiro", "" ], [ "Terasaki", "Yukio", "" ], [ "Sasaki", "Tomoyuki", "" ], [ "Duranton", "Marc", "" ] ]
By exploiting the intrinsic random nature of nanoscale devices, Memristor Monte Carlo (MMC) is a promising enabler of edge learning systems. However, due to multiple algorithmic and device-level limitations, existing demonstrations have been restricted to very small neural network models and datasets. We discuss these limitations, and describe how they can be overcome, by mapping the stochastic gradient Langevin dynamics (SGLD) algorithm onto the physics of magnetic domain-wall Memristors to scale up MMC models by five orders of magnitude. We propose the push-pull pulse programming method that realises SGLD in-physics, and use it to train a domain-wall based ResNet18 on the CIFAR-10 dataset. On this task, we observe no performance degradation relative to a floating point model down to an update precision of between 6 and 7 bits, indicating we have made a step towards a large-scale edge learning system leveraging noisy analogue devices.
2206.06780
Vivek Parmar
Vivek Parmar, Syed Shakib Sarwar, Ziyun Li, Hsien-Hsin S. Lee, Barbara De Salvo, Manan Suri
Memory-Oriented Design-Space Exploration of Edge-AI Hardware for XR Applications
Accepted as a full paper by the TinyML Research Symposium 2023
null
null
null
cs.AR cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Low-Power Edge-AI capabilities are essential for on-device extended reality (XR) applications to support the vision of Metaverse. In this work, we investigate two representative XR workloads: (i) Hand detection and (ii) Eye segmentation, for hardware design space exploration. For both applications, we train deep neural networks and analyze the impact of quantization and hardware specific bottlenecks. Through simulations, we evaluate a CPU and two systolic inference accelerator implementations. Next, we compare these hardware solutions with advanced technology nodes. The impact of integrating state-of-the-art emerging non-volatile memory technology (STT/SOT/VGSOT MRAM) into the XR-AI inference pipeline is evaluated. We found that significant energy benefits (>=24%) can be achieved for hand detection (IPS=10) and eye segmentation (IPS=0.1) by introducing non-volatile memory in the memory hierarchy for designs at 7nm node while meeting minimum IPS (inference per second). Moreover, we can realize substantial reduction in area (>=30%) owing to the small form factor of MRAM compared to traditional SRAM.
[ { "created": "Wed, 8 Jun 2022 11:18:02 GMT", "version": "v1" }, { "created": "Fri, 17 Feb 2023 07:13:36 GMT", "version": "v2" }, { "created": "Tue, 28 Mar 2023 07:13:06 GMT", "version": "v3" } ]
2023-03-29
[ [ "Parmar", "Vivek", "" ], [ "Sarwar", "Syed Shakib", "" ], [ "Li", "Ziyun", "" ], [ "Lee", "Hsien-Hsin S.", "" ], [ "De Salvo", "Barbara", "" ], [ "Suri", "Manan", "" ] ]
Low-Power Edge-AI capabilities are essential for on-device extended reality (XR) applications to support the vision of Metaverse. In this work, we investigate two representative XR workloads: (i) hand detection and (ii) eye segmentation, for hardware design-space exploration. For both applications, we train deep neural networks and analyze the impact of quantization and hardware-specific bottlenecks. Through simulations, we evaluate a CPU and two systolic inference accelerator implementations. Next, we compare these hardware solutions with advanced technology nodes. The impact of integrating state-of-the-art emerging non-volatile memory technology (STT/SOT/VGSOT MRAM) into the XR-AI inference pipeline is evaluated. We found that significant energy benefits (>=24%) can be achieved for hand detection (IPS=10) and eye segmentation (IPS=0.1) by introducing non-volatile memory in the memory hierarchy for designs at the 7nm node while meeting minimum IPS (inferences per second). Moreover, we can realize a substantial reduction in area (>=30%) owing to the small form factor of MRAM compared to traditional SRAM.
2407.01782
Ali Borji
Ali Borji
Addressing a fundamental limitation in deep vision models: lack of spatial attention
null
null
null
null
cs.CV cs.AI
http://creativecommons.org/licenses/by/4.0/
The primary aim of this manuscript is to underscore a significant limitation in current deep learning models, particularly vision models. Unlike human vision, which efficiently selects only the essential visual areas for further processing, leading to high speed and low energy consumption, deep vision models process the entire image. In this work, we examine this issue from a broader perspective and propose a solution that could pave the way for the next generation of more efficient vision models. Basically, convolution and pooling operations are selectively applied to altered regions, with a change map sent to subsequent layers. This map indicates which computations need to be repeated. The code is available at https://github.com/aliborji/spatial_attention.
[ { "created": "Mon, 1 Jul 2024 20:21:09 GMT", "version": "v1" } ]
2024-07-03
[ [ "Borji", "Ali", "" ] ]
The primary aim of this manuscript is to underscore a significant limitation in current deep learning models, particularly vision models. Unlike human vision, which efficiently selects only the essential visual areas for further processing, leading to high speed and low energy consumption, deep vision models process the entire image. In this work, we examine this issue from a broader perspective and propose a solution that could pave the way for the next generation of more efficient vision models. Basically, convolution and pooling operations are selectively applied to altered regions, with a change map sent to subsequent layers. This map indicates which computations need to be repeated. The code is available at https://github.com/aliborji/spatial_attention.
2401.12535
Seungho Lee
Seungho Lee, Seoungyoon Kang, Hyunjung Shim
Self-Supervised Vision Transformers Are Efficient Segmentation Learners for Imperfect Labels
AAAI2024 Edge Intelligence Workshop (EIW) accepted
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
This study demonstrates a cost-effective approach to semantic segmentation using self-supervised vision transformers (SSVT). By freezing the SSVT backbone and training a lightweight segmentation head, our approach effectively utilizes imperfect labels, thereby improving robustness to label imperfections. Empirical experiments show significant performance improvements over existing methods for various annotation types, including scribble, point-level, and image-level labels. The research highlights the effectiveness of self-supervised vision transformers in dealing with imperfect labels, providing a practical and efficient solution for semantic segmentation while reducing annotation costs. Through extensive experiments, we confirm that our method outperforms baseline models for all types of imperfect labels. Especially under the zero-shot vision-language-model-based label, our model exhibits 11.5\%p performance gain compared to the baseline.
[ { "created": "Tue, 23 Jan 2024 07:24:16 GMT", "version": "v1" } ]
2024-01-24
[ [ "Lee", "Seungho", "" ], [ "Kang", "Seoungyoon", "" ], [ "Shim", "Hyunjung", "" ] ]
This study demonstrates a cost-effective approach to semantic segmentation using self-supervised vision transformers (SSVT). By freezing the SSVT backbone and training a lightweight segmentation head, our approach effectively utilizes imperfect labels, thereby improving robustness to label imperfections. Empirical experiments show significant performance improvements over existing methods for various annotation types, including scribble, point-level, and image-level labels. The research highlights the effectiveness of self-supervised vision transformers in dealing with imperfect labels, providing a practical and efficient solution for semantic segmentation while reducing annotation costs. Through extensive experiments, we confirm that our method outperforms baseline models for all types of imperfect labels. Especially under the zero-shot vision-language-model-based label, our model exhibits 11.5\%p performance gain compared to the baseline.
1507.00136
Zhouye Chen
Zhouye Chen, Adrian Basarab, Denis Kouam\'e
Compressive Deconvolution in Medical Ultrasound Imaging
null
null
10.1109/TMI.2015.2493241
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The interest of compressive sampling in ultrasound imaging has been recently extensively evaluated by several research teams. Following the different application setups, it has been shown that the RF data may be reconstructed from a small number of measurements and/or using a reduced number of ultrasound pulse emissions. Nevertheless, RF image spatial resolution, contrast and signal to noise ratio are affected by the limited bandwidth of the imaging transducer and the physical phenomenon related to US wave propagation. To overcome these limitations, several deconvolution-based image processing techniques have been proposed to enhance the ultrasound images. In this paper, we propose a novel framework, named compressive deconvolution, that reconstructs enhanced RF images from compressed measurements. Exploiting an unified formulation of the direct acquisition model, combining random projections and 2D convolution with a spatially invariant point spread function, the benefit of our approach is the joint data volume reduction and image quality improvement. The proposed optimization method, based on the Alternating Direction Method of Multipliers, is evaluated on both simulated and in vivo data.
[ { "created": "Wed, 1 Jul 2015 07:48:18 GMT", "version": "v1" }, { "created": "Fri, 4 Dec 2015 14:25:44 GMT", "version": "v2" } ]
2015-12-07
[ [ "Chen", "Zhouye", "" ], [ "Basarab", "Adrian", "" ], [ "Kouamé", "Denis", "" ] ]
The interest of compressive sampling in ultrasound imaging has recently been extensively evaluated by several research teams. Following the different application setups, it has been shown that the RF data may be reconstructed from a small number of measurements and/or using a reduced number of ultrasound pulse emissions. Nevertheless, RF image spatial resolution, contrast and signal-to-noise ratio are affected by the limited bandwidth of the imaging transducer and the physical phenomena related to US wave propagation. To overcome these limitations, several deconvolution-based image processing techniques have been proposed to enhance the ultrasound images. In this paper, we propose a novel framework, named compressive deconvolution, that reconstructs enhanced RF images from compressed measurements. Exploiting a unified formulation of the direct acquisition model, combining random projections and 2D convolution with a spatially invariant point spread function, the benefit of our approach is the joint data volume reduction and image quality improvement. The proposed optimization method, based on the Alternating Direction Method of Multipliers, is evaluated on both simulated and in vivo data.
2206.05717
Juncheng Wang
Juncheng Wang, Junyu Gao, Yuan Yuan, Qi Wang
Crowd Localization from Gaussian Mixture Scoped Knowledge and Scoped Teacher
Accepted by IEEE TIP
null
10.1109/TIP.2023.3251727
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Crowd localization is to predict each instance head position in crowd scenarios. Since the distance of instances being to the camera are variant, there exists tremendous gaps among scales of instances within an image, which is called the intrinsic scale shift. The core reason of intrinsic scale shift being one of the most essential issues in crowd localization is that it is ubiquitous in crowd scenes and makes scale distribution chaotic. To this end, the paper concentrates on access to tackle the chaos of the scale distribution incurred by intrinsic scale shift. We propose Gaussian Mixture Scope (GMS) to regularize the chaotic scale distribution. Concretely, the GMS utilizes a Gaussian mixture distribution to adapt to scale distribution and decouples the mixture model into sub-normal distributions to regularize the chaos within the sub-distributions. Then, an alignment is introduced to regularize the chaos among sub-distributions. However, despite that GMS is effective in regularizing the data distribution, it amounts to dislodging the hard samples in training set, which incurs overfitting. We assert that it is blamed on the block of transferring the latent knowledge exploited by GMS from data to model. Therefore, a Scoped Teacher playing a role of bridge in knowledge transform is proposed. What' s more, the consistency regularization is also introduced to implement knowledge transform. To that effect, the further constraints are deployed on Scoped Teacher to derive feature consistence between teacher and student end. With proposed GMS and Scoped Teacher implemented on five mainstream datasets of crowd localization, the extensive experiments demonstrate the superiority of our work. Moreover, comparing with existing crowd locators, our work achieves state-of-the-art via F1-meansure comprehensively on five datasets.
[ { "created": "Sun, 12 Jun 2022 11:07:15 GMT", "version": "v1" }, { "created": "Thu, 23 Feb 2023 10:24:41 GMT", "version": "v2" } ]
2023-03-29
[ [ "Wang", "Juncheng", "" ], [ "Gao", "Junyu", "" ], [ "Yuan", "Yuan", "" ], [ "Wang", "Qi", "" ] ]
Crowd localization aims to predict the head position of each instance in crowd scenarios. Since the distances of instances to the camera vary, there are tremendous gaps among the scales of instances within an image, which is called intrinsic scale shift. Intrinsic scale shift is one of the most essential issues in crowd localization because it is ubiquitous in crowd scenes and makes the scale distribution chaotic. To this end, this paper concentrates on tackling the chaotic scale distribution incurred by intrinsic scale shift. We propose Gaussian Mixture Scope (GMS) to regularize the chaotic scale distribution. Concretely, GMS utilizes a Gaussian mixture distribution to adapt to the scale distribution and decouples the mixture model into sub-normal distributions to regularize the chaos within each sub-distribution. An alignment is then introduced to regularize the chaos among sub-distributions. However, although GMS is effective in regularizing the data distribution, it amounts to dislodging the hard samples in the training set, which incurs overfitting. We attribute this to the blocked transfer of the latent knowledge exploited by GMS from data to model. Therefore, a Scoped Teacher that plays the role of a bridge in knowledge transfer is proposed. Moreover, consistency regularization is introduced to implement the knowledge transfer. To that effect, further constraints are deployed on the Scoped Teacher to derive feature consistency between the teacher and student ends. With the proposed GMS and Scoped Teacher implemented on five mainstream crowd localization datasets, extensive experiments demonstrate the superiority of our work. Moreover, compared with existing crowd locators, our work achieves state-of-the-art F1-measure comprehensively on all five datasets.
1905.04437
Yiwen Zhang
Yiwen Zhang, Yue Tan, Brent Stephens, and Mosharaf Chowdhury
RDMA Performance Isolation With Justitia
null
null
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Despite its increasing popularity, most of RDMA's benefits such as ultra-low latency can be achieved only when running an application in isolation. Using microbenchmarks and real open-source RDMA applications, we identify a series of performance anomalies when multiple applications coexist and show that such anomalies are pervasive across InfiniBand, RoCEv2, and iWARP. They arise due to a fundamental tradeoff between performance isolation and work conservation, which the state-of-the-art RDMA congestion control protocols such as DCQCN cannot resolve. We present Justitia to address these performance anomalies. Justitia is a software-only, host-based, and easy-to-deploy solution that maximizes RNIC utilization while guaranteeing performance isolation via shaping, rate limiting, and pacing at senders. Our evaluation of Justitia on multiple RDMA implementations show that Justitia effectively isolates different types of traffic and significantly improves latency (by up to 56.9x) and throughput (by up to 9.7x) of real-world RDMA-based applications without compromising low CPU usage or modifying the applications.
[ { "created": "Sat, 11 May 2019 03:38:37 GMT", "version": "v1" } ]
2019-05-14
[ [ "Zhang", "Yiwen", "" ], [ "Tan", "Yue", "" ], [ "Stephens", "Brent", "" ], [ "Chowdhury", "Mosharaf", "" ] ]
Despite its increasing popularity, most of RDMA's benefits such as ultra-low latency can be achieved only when running an application in isolation. Using microbenchmarks and real open-source RDMA applications, we identify a series of performance anomalies when multiple applications coexist and show that such anomalies are pervasive across InfiniBand, RoCEv2, and iWARP. They arise due to a fundamental tradeoff between performance isolation and work conservation, which the state-of-the-art RDMA congestion control protocols such as DCQCN cannot resolve. We present Justitia to address these performance anomalies. Justitia is a software-only, host-based, and easy-to-deploy solution that maximizes RNIC utilization while guaranteeing performance isolation via shaping, rate limiting, and pacing at senders. Our evaluation of Justitia on multiple RDMA implementations shows that Justitia effectively isolates different types of traffic and significantly improves latency (by up to 56.9x) and throughput (by up to 9.7x) of real-world RDMA-based applications without compromising low CPU usage or modifying the applications.
1901.04626
Liudmyla Nechepurenko
Liudmyla Nechepurenko, Viktor Voss, and Vyacheslav Gritsenko
Comparing Knowledge-based Reinforcement Learning to Neural Networks in a Strategy Game
7 pages, 6 figures
Hybrid Artificial Intelligent Systems. HAIS 2020. Lecture Notes in Computer Science, vol 12344
10.1007/978-3-030-61705-9_26
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The paper reports on an experiment, in which a Knowledge-Based Reinforcement Learning (KB-RL) method was compared to a Neural Network (NN) approach in solving a classical Artificial Intelligence (AI) task. In contrast to NNs, which require a substantial amount of data to learn a good policy, the KB-RL method seeks to encode human knowledge into the solution, considerably reducing the amount of data needed for a good policy. By means of Reinforcement Learning (RL), KB-RL learns to optimize the model and improves the output of the system. Furthermore, KB-RL offers the advantage of a clear explanation of the taken decisions as well as transparent reasoning behind the solution. The goal of the reported experiment was to examine the performance of the KB-RL method in contrast to the Neural Network and to explore the capabilities of KB-RL to deliver a strong solution for the AI tasks. The results show that, within the designed settings, KB-RL outperformed the NN, and was able to learn a better policy from the available amount of data. These results support the opinion that Artificial Intelligence can benefit from the discovery and study of alternative approaches, potentially extending the frontiers of AI.
[ { "created": "Tue, 15 Jan 2019 01:23:38 GMT", "version": "v1" }, { "created": "Fri, 17 Jan 2020 11:01:33 GMT", "version": "v2" } ]
2020-11-11
[ [ "Nechepurenko", "Liudmyla", "" ], [ "Voss", "Viktor", "" ], [ "Gritsenko", "Vyacheslav", "" ] ]
The paper reports on an experiment in which a Knowledge-Based Reinforcement Learning (KB-RL) method was compared to a Neural Network (NN) approach in solving a classical Artificial Intelligence (AI) task. In contrast to NNs, which require a substantial amount of data to learn a good policy, the KB-RL method seeks to encode human knowledge into the solution, considerably reducing the amount of data needed for a good policy. By means of Reinforcement Learning (RL), KB-RL learns to optimize the model and improves the output of the system. Furthermore, KB-RL offers the advantage of a clear explanation of the decisions taken as well as transparent reasoning behind the solution. The goal of the reported experiment was to examine the performance of the KB-RL method in contrast to the Neural Network and to explore the capability of KB-RL to deliver a strong solution for AI tasks. The results show that, within the designed settings, KB-RL outperformed the NN and was able to learn a better policy from the available amount of data. These results support the opinion that Artificial Intelligence can benefit from the discovery and study of alternative approaches, potentially extending the frontiers of AI.
1604.00990
Hatem Alismail
Hatem Alismail, Brett Browning, Simon Lucey
Direct Visual Odometry using Bit-Planes
null
null
null
null
cs.RO cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Feature descriptors, such as SIFT and ORB, are well-known for their robustness to illumination changes, which has made them popular for feature-based VSLAM\@. However, in degraded imaging conditions such as low light, low texture, blur and specular reflections, feature extraction is often unreliable. In contrast, direct VSLAM methods which estimate the camera pose by minimizing the photometric error using raw pixel intensities are often more robust to low textured environments and blur. Nonetheless, at the core of direct VSLAM is the reliance on a consistent photometric appearance across images, otherwise known as the brightness constancy assumption. Unfortunately, brightness constancy seldom holds in real world applications. In this work, we overcome brightness constancy by incorporating feature descriptors into a direct visual odometry framework. This combination results in an efficient algorithm that combines the strength of both feature-based algorithms and direct methods. Namely, we achieve robustness to arbitrary photometric variations while operating in low-textured and poorly lit environments. Our approach utilizes an efficient binary descriptor, which we call Bit-Planes, and show how it can be used in the gradient-based optimization required by direct methods. Moreover, we show that the squared Euclidean distance between Bit-Planes is equivalent to the Hamming distance. Hence, the descriptor may be used in least squares optimization without sacrificing its photometric invariance. Finally, we present empirical results that demonstrate the robustness of the approach in poorly lit underground environments.
[ { "created": "Mon, 4 Apr 2016 19:02:45 GMT", "version": "v1" } ]
2016-04-05
[ [ "Alismail", "Hatem", "" ], [ "Browning", "Brett", "" ], [ "Lucey", "Simon", "" ] ]
Feature descriptors, such as SIFT and ORB, are well-known for their robustness to illumination changes, which has made them popular for feature-based VSLAM. However, in degraded imaging conditions such as low light, low texture, blur and specular reflections, feature extraction is often unreliable. In contrast, direct VSLAM methods which estimate the camera pose by minimizing the photometric error using raw pixel intensities are often more robust to low textured environments and blur. Nonetheless, at the core of direct VSLAM is the reliance on a consistent photometric appearance across images, otherwise known as the brightness constancy assumption. Unfortunately, brightness constancy seldom holds in real world applications. In this work, we overcome brightness constancy by incorporating feature descriptors into a direct visual odometry framework. This combination results in an efficient algorithm that combines the strength of both feature-based algorithms and direct methods. Namely, we achieve robustness to arbitrary photometric variations while operating in low-textured and poorly lit environments. Our approach utilizes an efficient binary descriptor, which we call Bit-Planes, and show how it can be used in the gradient-based optimization required by direct methods. Moreover, we show that the squared Euclidean distance between Bit-Planes is equivalent to the Hamming distance. Hence, the descriptor may be used in least squares optimization without sacrificing its photometric invariance. Finally, we present empirical results that demonstrate the robustness of the approach in poorly lit underground environments.
2401.13357
Qi Cai
Qi Cai, Xinrui Li, Yuanxin Wu
Linear Relative Pose Estimation Founded on Pose-only Imaging Geometry
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
How to efficiently and accurately handle image matching outliers is a critical issue in two-view relative estimation. The prevailing RANSAC method necessitates that the minimal point pairs be inliers. This paper introduces a linear relative pose estimation algorithm for n $( n \geq 6$) point pairs, which is founded on the recent pose-only imaging geometry to filter out outliers by proper reweighting. The proposed algorithm is able to handle planar degenerate scenes, and enhance robustness and accuracy in the presence of a substantial ratio of outliers. Specifically, we embed the linear global translation (LiGT) constraint into the strategies of iteratively reweighted least-squares (IRLS) and RANSAC so as to realize robust outlier removal. Simulations and real tests of the Strecha dataset show that the proposed algorithm achieves relative rotation accuracy improvement of 2 $\sim$ 10 times in face of as large as 80% outliers.
[ { "created": "Wed, 24 Jan 2024 10:35:34 GMT", "version": "v1" } ]
2024-01-25
[ [ "Cai", "Qi", "" ], [ "Li", "Xinrui", "" ], [ "Wu", "Yuanxin", "" ] ]
How to efficiently and accurately handle image matching outliers is a critical issue in two-view relative pose estimation. The prevailing RANSAC method necessitates that the minimal point pairs be inliers. This paper introduces a linear relative pose estimation algorithm for $n$ ($n \geq 6$) point pairs, which is founded on the recent pose-only imaging geometry to filter out outliers by proper reweighting. The proposed algorithm is able to handle planar degenerate scenes and to enhance robustness and accuracy in the presence of a substantial ratio of outliers. Specifically, we embed the linear global translation (LiGT) constraint into the iteratively reweighted least-squares (IRLS) and RANSAC strategies so as to realize robust outlier removal. Simulations and real tests on the Strecha dataset show that the proposed algorithm achieves a relative rotation accuracy improvement of 2 $\sim$ 10 times in the face of as many as 80% outliers.
2405.15028
Revanth Reddy
Revanth Gangi Reddy, Omar Attia, Yunyao Li, Heng Ji, Saloni Potdar
AGRaME: Any-Granularity Ranking with Multi-Vector Embeddings
null
null
null
null
cs.CL cs.IR
http://creativecommons.org/licenses/by/4.0/
Ranking is a fundamental and popular problem in search. However, existing ranking algorithms usually restrict the granularity of ranking to full passages or require a specific dense index for each desired level of granularity. Such lack of flexibility in granularity negatively affects many applications that can benefit from more granular ranking, such as sentence-level ranking for open-domain question-answering, or proposition-level ranking for attribution. In this work, we introduce the idea of any-granularity ranking, which leverages multi-vector embeddings to rank at varying levels of granularity while maintaining encoding at a single (coarser) level of granularity. We propose a multi-granular contrastive loss for training multi-vector approaches, and validate its utility with both sentences and propositions as ranking units. Finally, we demonstrate the application of proposition-level ranking to post-hoc citation addition in retrieval-augmented generation, surpassing the performance of prompt-driven citation generation.
[ { "created": "Thu, 23 May 2024 20:04:54 GMT", "version": "v1" } ]
2024-05-27
[ [ "Reddy", "Revanth Gangi", "" ], [ "Attia", "Omar", "" ], [ "Li", "Yunyao", "" ], [ "Ji", "Heng", "" ], [ "Potdar", "Saloni", "" ] ]
Ranking is a fundamental and popular problem in search. However, existing ranking algorithms usually restrict the granularity of ranking to full passages or require a specific dense index for each desired level of granularity. Such lack of flexibility in granularity negatively affects many applications that can benefit from more granular ranking, such as sentence-level ranking for open-domain question-answering, or proposition-level ranking for attribution. In this work, we introduce the idea of any-granularity ranking, which leverages multi-vector embeddings to rank at varying levels of granularity while maintaining encoding at a single (coarser) level of granularity. We propose a multi-granular contrastive loss for training multi-vector approaches, and validate its utility with both sentences and propositions as ranking units. Finally, we demonstrate the application of proposition-level ranking to post-hoc citation addition in retrieval-augmented generation, surpassing the performance of prompt-driven citation generation.
1304.6007
Enoch Peserico
Enoch Peserico
Paging with dynamic memory capacity
null
null
null
null
cs.DS cs.OS cs.PF
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study a generalization of the classic paging problem that allows the amount of available memory to vary over time - capturing a fundamental property of many modern computing realities, from cloud computing to multi-core and energy-optimized processors. It turns out that good performance in the "classic" case provides no performance guarantees when memory capacity fluctuates: roughly speaking, moving from static to dynamic capacity can mean the difference between optimality within a factor 2 in space and time, and suboptimality by an arbitrarily large factor. More precisely, adopting the competitive analysis framework, we show that some online paging algorithms, despite having an optimal (h,k)-competitive ratio when capacity remains constant, are not (3,k)-competitive for any arbitrarily large k in the presence of minimal capacity fluctuations. In this light it is surprising that several classic paging algorithms perform remarkably well even if memory capacity changes adversarially - even without taking those changes into explicit account! In particular, we prove that LFD still achieves the minimum number of faults, and that several classic online algorithms such as LRU have a "dynamic" (h,k)-competitive ratio that is the best one can achieve without knowledge of future page requests, even if one had perfect knowledge of future capacity fluctuations (an exact characterization of this ratio shows it is almost, albeit not quite, equal to the "classic" ratio k/(k-h+1)). In other words, with careful management, knowing/predicting future memory resources appears far less crucial to performance than knowing/predicting future data accesses.
[ { "created": "Mon, 22 Apr 2013 16:23:24 GMT", "version": "v1" } ]
2013-04-23
[ [ "Peserico", "Enoch", "" ] ]
We study a generalization of the classic paging problem that allows the amount of available memory to vary over time - capturing a fundamental property of many modern computing realities, from cloud computing to multi-core and energy-optimized processors. It turns out that good performance in the "classic" case provides no performance guarantees when memory capacity fluctuates: roughly speaking, moving from static to dynamic capacity can mean the difference between optimality within a factor 2 in space and time, and suboptimality by an arbitrarily large factor. More precisely, adopting the competitive analysis framework, we show that some online paging algorithms, despite having an optimal (h,k)-competitive ratio when capacity remains constant, are not (3,k)-competitive for any arbitrarily large k in the presence of minimal capacity fluctuations. In this light it is surprising that several classic paging algorithms perform remarkably well even if memory capacity changes adversarially - even without taking those changes into explicit account! In particular, we prove that LFD still achieves the minimum number of faults, and that several classic online algorithms such as LRU have a "dynamic" (h,k)-competitive ratio that is the best one can achieve without knowledge of future page requests, even if one had perfect knowledge of future capacity fluctuations (an exact characterization of this ratio shows it is almost, albeit not quite, equal to the "classic" ratio k/(k-h+1)). In other words, with careful management, knowing/predicting future memory resources appears far less crucial to performance than knowing/predicting future data accesses.
1910.13849
Jaber Kakar
Jaber Kakar, Anton Khristoforov, Seyedhamed Ebadifar, Aydin Sezgin
Uplink-Downlink Tradeoff in Secure Distributed Matrix Multiplication
Amazon EC2 results now include encoding time. Second-Order Encoding Strategy added
null
null
null
cs.IT cs.CR cs.DC cs.IR math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In secure distributed matrix multiplication (SDMM), the multiplication $\mathbf{A}\mathbf{B}$ of two private matrices $\mathbf{A}$ and $\mathbf{B}$ is outsourced by a user to $N$ distributed servers. In $\ell$-SDMM, the goal is to design a joint communication-computation procedure that optimally balances conflicting communication and computation metrics without leaking any information about either $\mathbf{A}$ or $\mathbf{B}$ to any set of $\ell\leq N$ servers. To this end, the user applies coding, with $\tilde{\mathbf{A}}_i$ and $\tilde{\mathbf{B}}_i$ representing encoded versions of $\mathbf{A}$ and $\mathbf{B}$ destined for the $i$-th server. SDMM involves multiple tradeoffs; one such tradeoff is that between uplink (UL) and downlink (DL) costs. To find a good balance between these two metrics, we propose two schemes, termed USCSA and GSCSA, that are based on secure cross subspace alignment (SCSA). We show that there are various scenarios where they outperform existing SDMM schemes from the literature with respect to UL-DL efficiency. Next, we implement schemes from the literature, including USCSA and GSCSA, and test their performance on Amazon EC2. Our numerical results show that USCSA and GSCSA strike a good balance between the time spent on communication and computation in SDMM. This is because they combine the advantages of polynomial codes, namely a low time for the upload of $\left(\tilde{\mathbf{A}}_i,\tilde{\mathbf{B}}_i\right)_{i=1}^{N}$ and the computation of $\mathbf{O}_i=\tilde{\mathbf{A}}_i\tilde{\mathbf{B}}_i$, with those of SCSA, namely a low timing overhead for the download of $\left(\mathbf{O}_i\right)_{i=1}^{N}$ and the decoding of $\mathbf{A}\mathbf{B}$.
[ { "created": "Wed, 30 Oct 2019 13:46:55 GMT", "version": "v1" }, { "created": "Mon, 4 Nov 2019 15:34:14 GMT", "version": "v2" }, { "created": "Mon, 2 Dec 2019 13:22:00 GMT", "version": "v3" }, { "created": "Sat, 2 May 2020 12:45:03 GMT", "version": "v4" } ]
2020-05-05
[ [ "Kakar", "Jaber", "" ], [ "Khristoforov", "Anton", "" ], [ "Ebadifar", "Seyedhamed", "" ], [ "Sezgin", "Aydin", "" ] ]
In secure distributed matrix multiplication (SDMM), the multiplication $\mathbf{A}\mathbf{B}$ of two private matrices $\mathbf{A}$ and $\mathbf{B}$ is outsourced by a user to $N$ distributed servers. In $\ell$-SDMM, the goal is to design a joint communication-computation procedure that optimally balances conflicting communication and computation metrics without leaking any information about either $\mathbf{A}$ or $\mathbf{B}$ to any set of $\ell\leq N$ servers. To this end, the user applies coding, with $\tilde{\mathbf{A}}_i$ and $\tilde{\mathbf{B}}_i$ representing encoded versions of $\mathbf{A}$ and $\mathbf{B}$ destined for the $i$-th server. SDMM involves multiple tradeoffs; one such tradeoff is that between uplink (UL) and downlink (DL) costs. To find a good balance between these two metrics, we propose two schemes, termed USCSA and GSCSA, that are based on secure cross subspace alignment (SCSA). We show that there are various scenarios where they outperform existing SDMM schemes from the literature with respect to UL-DL efficiency. Next, we implement schemes from the literature, including USCSA and GSCSA, and test their performance on Amazon EC2. Our numerical results show that USCSA and GSCSA strike a good balance between the time spent on communication and computation in SDMM. This is because they combine the advantages of polynomial codes, namely a low time for the upload of $\left(\tilde{\mathbf{A}}_i,\tilde{\mathbf{B}}_i\right)_{i=1}^{N}$ and the computation of $\mathbf{O}_i=\tilde{\mathbf{A}}_i\tilde{\mathbf{B}}_i$, with those of SCSA, namely a low timing overhead for the download of $\left(\mathbf{O}_i\right)_{i=1}^{N}$ and the decoding of $\mathbf{A}\mathbf{B}$.
2211.09945
Sawinder Kaur
Sawinder Kaur, Yi Xiao, Asif Salekin
VeriCompress: A Tool to Streamline the Synthesis of Verified Robust Compressed Neural Networks from Scratch
9 pages, 5 tables, 2 figures
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
AI's widespread integration has led to the deployment of neural networks (NNs) on edge and similarly resource-limited platforms for safety-critical scenarios. Yet the fragility of NNs raises concerns about reliable inference. Moreover, constrained platforms demand compact networks. This study introduces VeriCompress, a tool that automates the search and training of compressed models with robustness guarantees. These models are well suited for safety-critical applications and adhere to predefined architecture and size limitations, making them deployable on resource-restricted platforms. The method trains models 2-3 times faster than state-of-the-art approaches, surpassing relevant baseline approaches by average accuracy and robustness gains of 15.1 and 9.8 percentage points, respectively. When deployed on a resource-restricted generic platform, these models require 5-8 times less memory and 2-4 times less inference time than models used in the verified robustness literature. Our comprehensive evaluation across various model architectures and datasets, including MNIST, CIFAR, SVHN, and a relevant pedestrian detection dataset, showcases VeriCompress's capacity to identify compressed verified robust models with reduced computation overhead compared to current standards. This underscores its potential as a valuable tool for end users, such as developers of safety-critical applications on edge or Internet of Things platforms, empowering them to create suitable models for safety-critical, resource-constrained platforms in their respective domains.
[ { "created": "Thu, 17 Nov 2022 23:42:10 GMT", "version": "v1" }, { "created": "Fri, 2 Dec 2022 05:45:57 GMT", "version": "v2" }, { "created": "Fri, 27 Jan 2023 18:37:18 GMT", "version": "v3" }, { "created": "Mon, 30 Jan 2023 08:40:26 GMT", "version": "v4" }, { "created": "Tue, 28 Mar 2023 15:18:08 GMT", "version": "v5" }, { "created": "Thu, 24 Aug 2023 14:35:54 GMT", "version": "v6" }, { "created": "Tue, 21 Nov 2023 18:03:06 GMT", "version": "v7" } ]
2023-11-22
[ [ "Kaur", "Sawinder", "" ], [ "Xiao", "Yi", "" ], [ "Salekin", "Asif", "" ] ]
AI's widespread integration has led to the deployment of neural networks (NNs) on edge and similarly resource-limited platforms for safety-critical scenarios. Yet the fragility of NNs raises concerns about reliable inference. Moreover, constrained platforms demand compact networks. This study introduces VeriCompress, a tool that automates the search and training of compressed models with robustness guarantees. These models are well suited for safety-critical applications and adhere to predefined architecture and size limitations, making them deployable on resource-restricted platforms. The method trains models 2-3 times faster than state-of-the-art approaches, surpassing relevant baseline approaches by average accuracy and robustness gains of 15.1 and 9.8 percentage points, respectively. When deployed on a resource-restricted generic platform, these models require 5-8 times less memory and 2-4 times less inference time than models used in the verified robustness literature. Our comprehensive evaluation across various model architectures and datasets, including MNIST, CIFAR, SVHN, and a relevant pedestrian detection dataset, showcases VeriCompress's capacity to identify compressed verified robust models with reduced computation overhead compared to current standards. This underscores its potential as a valuable tool for end users, such as developers of safety-critical applications on edge or Internet of Things platforms, empowering them to create suitable models for safety-critical, resource-constrained platforms in their respective domains.
2003.01912
Ethan Fetaya
Ethan Fetaya, Yonatan Lifshitz, Elad Aaron and Shai Gordin
Restoration of Fragmentary Babylonian Texts Using Recurrent Neural Networks
null
null
10.1073/pnas.2003794117
null
cs.CL cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The main source of information regarding ancient Mesopotamian history and culture is clay cuneiform tablets. Despite being an invaluable resource, many tablets are fragmented, leading to missing information. Currently, these missing parts are completed manually by experts. In this work, we investigate the possibility of assisting scholars and even automatically completing the breaks in ancient Akkadian texts from Achaemenid-period Babylonia by modelling the language with recurrent neural networks.
[ { "created": "Wed, 4 Mar 2020 06:36:50 GMT", "version": "v1" } ]
2022-06-08
[ [ "Fetaya", "Ethan", "" ], [ "Lifshitz", "Yonatan", "" ], [ "Aaron", "Elad", "" ], [ "Gordin", "Shai", "" ] ]
The main source of information regarding ancient Mesopotamian history and culture is clay cuneiform tablets. Despite being an invaluable resource, many tablets are fragmented, leading to missing information. Currently, these missing parts are completed manually by experts. In this work, we investigate the possibility of assisting scholars and even automatically completing the breaks in ancient Akkadian texts from Achaemenid-period Babylonia by modelling the language with recurrent neural networks.
2012.12291
Kapil Katyal
Kapil Katyal, Yuxiang Gao, Jared Markowitz, Sara Pohland, Corban Rivera, I-Jeng Wang, Chien-Ming Huang
Learning a Group-Aware Policy for Robot Navigation
8 pages, 4 figures
null
null
null
cs.RO cs.HC cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Human-aware robot navigation promises a range of applications in which mobile robots bring versatile assistance to people in common human environments. While prior research has mostly focused on modeling pedestrians as independent, intentional individuals, people move in groups; consequently, it is imperative for mobile robots to respect human groups when navigating around people. This paper explores learning group-aware navigation policies based on dynamic group formation using deep reinforcement learning. Through simulation experiments, we show that group-aware policies, compared to baseline policies that neglect human groups, achieve greater robot navigation performance (e.g., fewer collisions), minimize violation of social norms and discomfort, and reduce the robot's movement impact on pedestrians. Our results contribute to the development of social navigation and the integration of mobile robots into human environments.
[ { "created": "Tue, 22 Dec 2020 19:04:40 GMT", "version": "v1" }, { "created": "Wed, 17 Nov 2021 21:56:48 GMT", "version": "v2" }, { "created": "Fri, 29 Jul 2022 19:53:12 GMT", "version": "v3" } ]
2022-08-02
[ [ "Katyal", "Kapil", "" ], [ "Gao", "Yuxiang", "" ], [ "Markowitz", "Jared", "" ], [ "Pohland", "Sara", "" ], [ "Rivera", "Corban", "" ], [ "Wang", "I-Jeng", "" ], [ "Huang", "Chien-Ming", "" ] ]
Human-aware robot navigation promises a range of applications in which mobile robots bring versatile assistance to people in common human environments. While prior research has mostly focused on modeling pedestrians as independent, intentional individuals, people move in groups; consequently, it is imperative for mobile robots to respect human groups when navigating around people. This paper explores learning group-aware navigation policies based on dynamic group formation using deep reinforcement learning. Through simulation experiments, we show that group-aware policies, compared to baseline policies that neglect human groups, achieve greater robot navigation performance (e.g., fewer collisions), minimize violation of social norms and discomfort, and reduce the robot's movement impact on pedestrians. Our results contribute to the development of social navigation and the integration of mobile robots into human environments.
2208.06631
Nicolas Kruchten
Nicolas Kruchten, Jon Mease, Dominik Moritz
VegaFusion: Automatic Server-Side Scaling for Interactive Vega Visualizations
null
null
null
null
cs.HC
http://creativecommons.org/licenses/by/4.0/
The Vega grammar has been broadly adopted by a growing ecosystem of browser-based visualization tools. However, the reference Vega renderer does not scale well to large datasets (e.g., millions of rows or hundreds of megabytes) because it requires the entire dataset to be loaded into browser memory. We introduce VegaFusion, which brings automatic server-side scaling to the Vega ecosystem. VegaFusion accepts generic Vega specifications and partitions the required computation between the client and an out-of-browser, natively-compiled server-side process. Large datasets can be processed server-side to avoid loading them into the browser and to take advantage of multi-threading, more powerful server hardware and caching. We demonstrate how VegaFusion can be integrated into the existing Vega ecosystem, and show that VegaFusion greatly outperforms the reference implementation. We demonstrate these benefits with VegaFusion running on the same machine as the client as well as on a remote machine.
[ { "created": "Sat, 13 Aug 2022 11:38:06 GMT", "version": "v1" } ]
2022-08-16
[ [ "Kruchten", "Nicolas", "" ], [ "Mease", "Jon", "" ], [ "Moritz", "Dominik", "" ] ]
The Vega grammar has been broadly adopted by a growing ecosystem of browser-based visualization tools. However, the reference Vega renderer does not scale well to large datasets (e.g., millions of rows or hundreds of megabytes) because it requires the entire dataset to be loaded into browser memory. We introduce VegaFusion, which brings automatic server-side scaling to the Vega ecosystem. VegaFusion accepts generic Vega specifications and partitions the required computation between the client and an out-of-browser, natively-compiled server-side process. Large datasets can be processed server-side to avoid loading them into the browser and to take advantage of multi-threading, more powerful server hardware and caching. We demonstrate how VegaFusion can be integrated into the existing Vega ecosystem, and show that VegaFusion greatly outperforms the reference implementation. We demonstrate these benefits with VegaFusion running on the same machine as the client as well as on a remote machine.
2007.03181
Xinyuan Liu
Xinyuan Liu, Jihua Zhu, Qinghai Zheng, Zhongyu Li, Ruixin Liu and Jun Wang
Bidirectional Loss Function for Label Enhancement and Distribution Learning
null
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Label distribution learning (LDL) is an interpretable and general learning paradigm that has been applied in many real-world applications. In contrast to the simple logical vector in single-label learning (SLL) and multi-label learning (MLL), LDL assigns labels with a description degree to each instance. In practice, two challenges exist in LDL: how to address the dimensional gap problem during the learning process of LDL, and how to exactly recover label distributions from existing logical labels, i.e., Label Enhancement (LE). Most existing LDL and LE algorithms ignore the fact that the dimension of the input matrix is much higher than that of the output one, which typically leads to dimensional reduction owing to the unidirectional projection; the valuable information hidden in the feature space is lost during the mapping process. To this end, this study considers a bidirectional projection function that can be applied to LE and LDL problems simultaneously. More specifically, this novel loss function not only considers the mapping errors generated from the projection of the input space into the output one but also accounts for the reconstruction errors generated from the projection of the output space back to the input one. This loss function aims to potentially reconstruct the input data from the output data and is therefore expected to yield more accurate results. Finally, experiments on several real-world datasets are carried out to demonstrate the superiority of the proposed method for both LE and LDL.
[ { "created": "Tue, 7 Jul 2020 03:02:54 GMT", "version": "v1" } ]
2020-07-08
[ [ "Liu", "Xinyuan", "" ], [ "Zhu", "Jihua", "" ], [ "Zheng", "Qinghai", "" ], [ "Li", "Zhongyu", "" ], [ "Liu", "Ruixin", "" ], [ "Wang", "Jun", "" ] ]
Label distribution learning (LDL) is an interpretable and general learning paradigm that has been applied in many real-world applications. In contrast to the simple logical vector in single-label learning (SLL) and multi-label learning (MLL), LDL assigns labels with a description degree to each instance. In practice, two challenges exist in LDL: how to address the dimensional gap problem during the learning process of LDL, and how to exactly recover label distributions from existing logical labels, i.e., Label Enhancement (LE). Most existing LDL and LE algorithms ignore the fact that the dimension of the input matrix is much higher than that of the output one, which typically leads to dimensional reduction owing to the unidirectional projection; the valuable information hidden in the feature space is lost during the mapping process. To this end, this study considers a bidirectional projection function that can be applied to LE and LDL problems simultaneously. More specifically, this novel loss function not only considers the mapping errors generated from the projection of the input space into the output one but also accounts for the reconstruction errors generated from the projection of the output space back to the input one. This loss function aims to potentially reconstruct the input data from the output data and is therefore expected to yield more accurate results. Finally, experiments on several real-world datasets are carried out to demonstrate the superiority of the proposed method for both LE and LDL.
1102.0451
Boris \v{S}kori\'c
Antonino Simone and Boris Skoric
Asymptotically false-positive-maximizing attack on non-binary Tardos codes
null
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We use a method recently introduced by Simone and Skoric to study accusation probabilities for non-binary Tardos fingerprinting codes. We generalize the pre-computation steps in this approach to include a broad class of collusion attack strategies. We analytically derive properties of a special attack that asymptotically maximizes false accusation probabilities. We present numerical results on sufficient code lengths for this attack, and explain the abrupt transitions that occur in these results.
[ { "created": "Wed, 2 Feb 2011 14:49:58 GMT", "version": "v1" } ]
2011-02-03
[ [ "Simone", "Antonino", "" ], [ "Skoric", "Boris", "" ] ]
We use a method recently introduced by Simone and Skoric to study accusation probabilities for non-binary Tardos fingerprinting codes. We generalize the pre-computation steps in this approach to include a broad class of collusion attack strategies. We analytically derive properties of a special attack that asymptotically maximizes false accusation probabilities. We present numerical results on sufficient code lengths for this attack, and explain the abrupt transitions that occur in these results.
1909.07053
Yuantao Fan
Yuantao Fan and S{\l}awomir Nowaczyk and Thorsteinn R\"ognvaldsson
Transfer learning for Remaining Useful Life Prediction Based on Consensus Self-Organizing Models
null
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The traditional paradigm for developing machine prognostics usually relies on generalization from data acquired in experiments under controlled conditions prior to deployment of the equipment. Detecting or predicting failures and estimating machine health in this way assumes that future field data will have a very similar distribution to the experiment data. However, many complex machines operate under dynamic environmental conditions and are used in many different ways. This makes collecting comprehensive data very challenging, and the assumption that pre-deployment data and post-deployment data follow very similar distributions is unlikely to hold. Transfer Learning (TL) refers to methods for transferring knowledge learned in one setting (the source domain) to another setting (the target domain). In this work, we present a TL method for predicting Remaining Useful Life (RUL) of equipment, under the assumption that labels are available only for the source domain and not the target domain. This setting corresponds to generalizing from a limited number of run-to-failure experiments performed prior to deployment into making prognostics with data coming from deployed equipment that is being used under multiple new operating conditions and experiencing previously unseen faults. We employ a deviation detection method, Consensus Self-Organizing Models (COSMO), to create transferable features for building the RUL regression model. These features capture how different target equipment is in comparison to its peers. The efficiency of the proposed TL method is demonstrated using the NASA Turbofan Engine Degradation Simulation Data Set. Models using the COSMO transferable features show better performance than other methods on predicting RUL when the target domain is more complex than the source domain.
[ { "created": "Mon, 16 Sep 2019 08:31:08 GMT", "version": "v1" }, { "created": "Mon, 23 Sep 2019 02:45:27 GMT", "version": "v2" }, { "created": "Mon, 30 Sep 2019 18:00:37 GMT", "version": "v3" } ]
2019-10-02
[ [ "Fan", "Yuantao", "" ], [ "Nowaczyk", "Sławomir", "" ], [ "Rögnvaldsson", "Thorsteinn", "" ] ]
The traditional paradigm for developing machine prognostics usually relies on generalization from data acquired in experiments under controlled conditions prior to deployment of the equipment. Detecting or predicting failures and estimating machine health in this way assumes that future field data will have a very similar distribution to the experiment data. However, many complex machines operate under dynamic environmental conditions and are used in many different ways. This makes collecting comprehensive data very challenging, and the assumption that pre-deployment data and post-deployment data follow very similar distributions is unlikely to hold. Transfer Learning (TL) refers to methods for transferring knowledge learned in one setting (the source domain) to another setting (the target domain). In this work, we present a TL method for predicting Remaining Useful Life (RUL) of equipment, under the assumption that labels are available only for the source domain and not the target domain. This setting corresponds to generalizing from a limited number of run-to-failure experiments performed prior to deployment into making prognostics with data coming from deployed equipment that is being used under multiple new operating conditions and experiencing previously unseen faults. We employ a deviation detection method, Consensus Self-Organizing Models (COSMO), to create transferable features for building the RUL regression model. These features capture how different target equipment is in comparison to its peers. The efficiency of the proposed TL method is demonstrated using the NASA Turbofan Engine Degradation Simulation Data Set. Models using the COSMO transferable features show better performance than other methods on predicting RUL when the target domain is more complex than the source domain.
2206.08882
Rui Song
Rui Song, Anupama Hegde, Numan Senel, Alois Knoll, Andreas Festag
Edge-Aided Sensor Data Sharing in Vehicular Communication Networks
Accepted for IEEE 95th Vehicular Technology Conference (VTC2022-Spring)
null
null
null
cs.MA cs.CV eess.SP
http://creativecommons.org/licenses/by/4.0/
Sensor data sharing in vehicular networks can significantly improve the range and accuracy of environmental perception for connected automated vehicles. Different concepts and schemes for the dissemination and fusion of sensor data have been developed. Common to these schemes is that measurement errors of the sensors impair the perception quality and can result in road traffic accidents. Specifically, when the measurement error from the sensors (also referred to as measurement noise) is unknown and time-varying, the performance of the data fusion process is restricted, which represents a major challenge in the calibration of sensors. In this paper, we consider sensor data sharing and fusion in a vehicular network with both vehicle-to-infrastructure and vehicle-to-vehicle communication. We propose a method, named Bidirectional Feedback Noise Estimation (BiFNoE), in which an edge server collects and caches sensor measurement data from vehicles. The edge estimates the noise and the targets alternately in double dynamic sliding time windows and enhances the distributed cooperative environment sensing at each vehicle with low communication costs. We evaluate the proposed algorithm and data dissemination strategy in an application scenario by simulation and show that the perception accuracy is on average improved by around 80% with only 12 kbps uplink and 28 kbps downlink bandwidth.
[ { "created": "Fri, 17 Jun 2022 16:30:56 GMT", "version": "v1" } ]
2022-06-20
[ [ "Song", "Rui", "" ], [ "Hegde", "Anupama", "" ], [ "Senel", "Numan", "" ], [ "Knoll", "Alois", "" ], [ "Festag", "Andreas", "" ] ]
Sensor data sharing in vehicular networks can significantly improve the range and accuracy of environmental perception for connected automated vehicles. Different concepts and schemes for the dissemination and fusion of sensor data have been developed. Common to these schemes is that measurement errors of the sensors impair the perception quality and can result in road traffic accidents. Specifically, when the measurement error from the sensors (also referred to as measurement noise) is unknown and time-varying, the performance of the data fusion process is restricted, which represents a major challenge in the calibration of sensors. In this paper, we consider sensor data sharing and fusion in a vehicular network with both vehicle-to-infrastructure and vehicle-to-vehicle communication. We propose a method, named Bidirectional Feedback Noise Estimation (BiFNoE), in which an edge server collects and caches sensor measurement data from vehicles. The edge estimates the noise and the targets alternately in double dynamic sliding time windows and enhances the distributed cooperative environment sensing at each vehicle with low communication costs. We evaluate the proposed algorithm and data dissemination strategy in an application scenario by simulation and show that the perception accuracy is on average improved by around 80% with only 12 kbps uplink and 28 kbps downlink bandwidth.
1804.09321
Xi Rao
Xi Rao and Zhenxing Ke
Hierarchical RNN for Information Extraction from Lawsuit Documents
IMECS2018
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Every lawsuit document contains information about the party's claim, the court's analysis, the decision, and more, and all of this information is helpful for understanding the case better and predicting the judge's decision on similar cases in the future. However, extracting this information from the document is difficult because the language is complicated and sentences vary in length. We treat this problem as a sequence labeling task, and this paper presents the first research to extract relevant information from civil lawsuit documents in China with a hierarchical RNN framework.
[ { "created": "Wed, 25 Apr 2018 02:18:51 GMT", "version": "v1" } ]
2018-04-26
[ [ "Rao", "Xi", "" ], [ "Ke", "Zhenxing", "" ] ]
Every lawsuit document contains information about the party's claim, the court's analysis, the decision, and more, and all of this information is helpful for understanding the case better and predicting the judge's decision on similar cases in the future. However, extracting this information from the document is difficult because the language is complicated and sentences vary in length. We treat this problem as a sequence labeling task, and this paper presents the first research to extract relevant information from civil lawsuit documents in China with a hierarchical RNN framework.
2205.01663
Daniel M. Ziegler
Daniel M. Ziegler, Seraphina Nix, Lawrence Chan, Tim Bauman, Peter Schmidt-Nielsen, Tao Lin, Adam Scherlis, Noa Nabeshima, Ben Weinstein-Raun, Daniel de Haas, Buck Shlegeris, Nate Thomas
Adversarial Training for High-Stakes Reliability
30 pages, 7 figures, NeurIPS camera-ready
null
null
null
cs.LG cs.AI cs.CL
http://creativecommons.org/licenses/by/4.0/
In the future, powerful AI systems may be deployed in high-stakes settings, where a single failure could be catastrophic. One technique for improving AI safety in high-stakes settings is adversarial training, which uses an adversary to generate examples to train on in order to achieve better worst-case performance. In this work, we used a safe language generation task (``avoid injuries'') as a testbed for achieving high reliability through adversarial training. We created a series of adversarial training techniques -- including a tool that assists human adversaries -- to find and eliminate failures in a classifier that filters text completions suggested by a generator. In our task, we determined that we can set very conservative classifier thresholds without significantly impacting the quality of the filtered outputs. We found that adversarial training increased robustness to the adversarial attacks that we trained on -- doubling the time for our contractors to find adversarial examples both with our tool (from 13 to 26 minutes) and without (from 20 to 44 minutes) -- without affecting in-distribution performance. We hope to see further work in the high-stakes reliability setting, including more powerful tools for enhancing human adversaries and better ways to measure high levels of reliability, until we can confidently rule out the possibility of catastrophic deployment-time failures of powerful models.
[ { "created": "Tue, 3 May 2022 17:50:06 GMT", "version": "v1" }, { "created": "Wed, 4 May 2022 17:58:20 GMT", "version": "v2" }, { "created": "Thu, 15 Sep 2022 17:36:48 GMT", "version": "v3" }, { "created": "Fri, 7 Oct 2022 01:30:53 GMT", "version": "v4" }, { "created": "Thu, 10 Nov 2022 01:02:29 GMT", "version": "v5" } ]
2022-11-11
[ [ "Ziegler", "Daniel M.", "" ], [ "Nix", "Seraphina", "" ], [ "Chan", "Lawrence", "" ], [ "Bauman", "Tim", "" ], [ "Schmidt-Nielsen", "Peter", "" ], [ "Lin", "Tao", "" ], [ "Scherlis", "Adam", "" ], [ "Nabeshima", "Noa", "" ], [ "Weinstein-Raun", "Ben", "" ], [ "de Haas", "Daniel", "" ], [ "Shlegeris", "Buck", "" ], [ "Thomas", "Nate", "" ] ]
In the future, powerful AI systems may be deployed in high-stakes settings, where a single failure could be catastrophic. One technique for improving AI safety in high-stakes settings is adversarial training, which uses an adversary to generate examples to train on in order to achieve better worst-case performance. In this work, we used a safe language generation task (``avoid injuries'') as a testbed for achieving high reliability through adversarial training. We created a series of adversarial training techniques -- including a tool that assists human adversaries -- to find and eliminate failures in a classifier that filters text completions suggested by a generator. In our task, we determined that we can set very conservative classifier thresholds without significantly impacting the quality of the filtered outputs. We found that adversarial training increased robustness to the adversarial attacks that we trained on -- doubling the time for our contractors to find adversarial examples both with our tool (from 13 to 26 minutes) and without (from 20 to 44 minutes) -- without affecting in-distribution performance. We hope to see further work in the high-stakes reliability setting, including more powerful tools for enhancing human adversaries and better ways to measure high levels of reliability, until we can confidently rule out the possibility of catastrophic deployment-time failures of powerful models.
2005.00545
Ines Chami
Ines Chami, Adva Wolf, Da-Cheng Juan, Frederic Sala, Sujith Ravi and Christopher R\'e
Low-Dimensional Hyperbolic Knowledge Graph Embeddings
null
null
null
null
cs.LG cs.AI cs.CL stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Knowledge graph (KG) embeddings learn low-dimensional representations of entities and relations to predict missing facts. KGs often exhibit hierarchical and logical patterns which must be preserved in the embedding space. For hierarchical data, hyperbolic embedding methods have shown promise for high-fidelity and parsimonious representations. However, existing hyperbolic embedding methods do not account for the rich logical patterns in KGs. In this work, we introduce a class of hyperbolic KG embedding models that simultaneously capture hierarchical and logical patterns. Our approach combines hyperbolic reflections and rotations with attention to model complex relational patterns. Experimental results on standard KG benchmarks show that our method improves over previous Euclidean- and hyperbolic-based efforts by up to 6.1% in mean reciprocal rank (MRR) in low dimensions. Furthermore, we observe that different geometric transformations capture different types of relations while attention-based transformations generalize to multiple relations. In high dimensions, our approach yields new state-of-the-art MRRs of 49.6% on WN18RR and 57.7% on YAGO3-10.
[ { "created": "Fri, 1 May 2020 18:00:02 GMT", "version": "v1" } ]
2020-05-05
[ [ "Chami", "Ines", "" ], [ "Wolf", "Adva", "" ], [ "Juan", "Da-Cheng", "" ], [ "Sala", "Frederic", "" ], [ "Ravi", "Sujith", "" ], [ "Ré", "Christopher", "" ] ]
Knowledge graph (KG) embeddings learn low-dimensional representations of entities and relations to predict missing facts. KGs often exhibit hierarchical and logical patterns which must be preserved in the embedding space. For hierarchical data, hyperbolic embedding methods have shown promise for high-fidelity and parsimonious representations. However, existing hyperbolic embedding methods do not account for the rich logical patterns in KGs. In this work, we introduce a class of hyperbolic KG embedding models that simultaneously capture hierarchical and logical patterns. Our approach combines hyperbolic reflections and rotations with attention to model complex relational patterns. Experimental results on standard KG benchmarks show that our method improves over previous Euclidean- and hyperbolic-based efforts by up to 6.1% in mean reciprocal rank (MRR) in low dimensions. Furthermore, we observe that different geometric transformations capture different types of relations while attention-based transformations generalize to multiple relations. In high dimensions, our approach yields new state-of-the-art MRRs of 49.6% on WN18RR and 57.7% on YAGO3-10.
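The hyperbolic geometry that the low-dimensional gains above rest on can be illustrated with the Poincaré ball distance. The helper below is an illustrative sketch of that metric only, not the paper's embedding model or its reflection/rotation/attention machinery:

```python
import math

def poincare_distance(x, y):
    """Geodesic distance between two points strictly inside the unit
    (Poincare) ball. Distances blow up near the boundary, which is what
    lets hyperbolic space embed tree-like hierarchies with low distortion."""
    sq = lambda v: sum(vi * vi for vi in v)
    diff = [xi - yi for xi, yi in zip(x, y)]
    arg = 1.0 + 2.0 * sq(diff) / ((1.0 - sq(x)) * (1.0 - sq(y)))
    return math.acosh(arg)
```

For example, moving a point from radius 0.5 to radius 0.9 more than doubles its hyperbolic distance from the origin, so a small ball of parameters can hold exponentially growing hierarchies.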
2209.11178
Yilun Xu
Yilun Xu, Ziming Liu, Max Tegmark, Tommi Jaakkola
Poisson Flow Generative Models
Accepted by NeurIPS 2022
null
null
null
cs.LG cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a new "Poisson flow" generative model (PFGM) that maps a uniform distribution on a high-dimensional hemisphere into any data distribution. We interpret the data points as electrical charges on the $z=0$ hyperplane in a space augmented with an additional dimension $z$, generating a high-dimensional electric field (the gradient of the solution to the Poisson equation). We prove that if these charges flow upward along electric field lines, their initial distribution in the $z=0$ plane transforms into a distribution on the hemisphere of radius $r$ that becomes uniform in the $r \to\infty$ limit. To learn the bijective transformation, we estimate the normalized field in the augmented space. For sampling, we devise a backward ODE that is anchored by the physically meaningful additional dimension: the samples hit the unaugmented data manifold when $z$ reaches zero. Experimentally, PFGM achieves current state-of-the-art performance among the normalizing flow models on CIFAR-10, with an Inception score of $9.68$ and a FID score of $2.35$. It also performs on par with the state-of-the-art SDE approaches while offering $10\times$ to $20\times$ acceleration on image generation tasks. Additionally, PFGM appears more tolerant of estimation errors on a weaker network architecture and robust to the step size in the Euler method. The code is available at https://github.com/Newbeeer/poisson_flow .
[ { "created": "Thu, 22 Sep 2022 17:26:58 GMT", "version": "v1" }, { "created": "Thu, 13 Oct 2022 17:11:46 GMT", "version": "v2" }, { "created": "Fri, 14 Oct 2022 17:49:01 GMT", "version": "v3" }, { "created": "Thu, 20 Oct 2022 00:29:38 GMT", "version": "v4" } ]
2022-10-21
[ [ "Xu", "Yilun", "" ], [ "Liu", "Ziming", "" ], [ "Tegmark", "Max", "" ], [ "Jaakkola", "Tommi", "" ] ]
We propose a new "Poisson flow" generative model (PFGM) that maps a uniform distribution on a high-dimensional hemisphere into any data distribution. We interpret the data points as electrical charges on the $z=0$ hyperplane in a space augmented with an additional dimension $z$, generating a high-dimensional electric field (the gradient of the solution to the Poisson equation). We prove that if these charges flow upward along electric field lines, their initial distribution in the $z=0$ plane transforms into a distribution on the hemisphere of radius $r$ that becomes uniform in the $r \to\infty$ limit. To learn the bijective transformation, we estimate the normalized field in the augmented space. For sampling, we devise a backward ODE that is anchored by the physically meaningful additional dimension: the samples hit the unaugmented data manifold when $z$ reaches zero. Experimentally, PFGM achieves current state-of-the-art performance among the normalizing flow models on CIFAR-10, with an Inception score of $9.68$ and a FID score of $2.35$. It also performs on par with the state-of-the-art SDE approaches while offering $10\times$ to $20\times$ acceleration on image generation tasks. Additionally, PFGM appears more tolerant of estimation errors on a weaker network architecture and robust to the step size in the Euler method. The code is available at https://github.com/Newbeeer/poisson_flow .
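The electric-field picture above can be sketched numerically: treat a handful of 1-D data points as charges on the $z=0$ line, compute the un-normalized empirical field in the augmented $(x, z)$ space, and follow the field lines upward with forward Euler steps. This is a toy illustration of the augmented-space idea only; the actual PFGM learns a normalized field with a neural network and samples with a backward ODE. The charge locations, start point, and step size are all illustrative:

```python
import math

def empirical_field(p, charges):
    """Un-normalized empirical field at point p in the augmented space,
    summing (p - x) / ||p - x||^d over the charges (d = dimension)."""
    d = len(p)
    field = [0.0] * d
    for x in charges:
        diff = [pi - xi for pi, xi in zip(p, x)]
        r = math.sqrt(sum(c * c for c in diff)) or 1e-12
        for k in range(d):
            field[k] += diff[k] / r ** d
    return field

# Data points live on the z = 0 line; augment each with a z coordinate.
charges = [(-1.0, 0.0), (0.5, 0.0), (2.0, 0.0)]
p = [0.4, 0.05]          # start just above one of the charges
for _ in range(200):     # forward Euler along the (normalized) field lines
    f = empirical_field(p, charges)
    norm = math.sqrt(sum(c * c for c in f))
    p = [pi + 0.05 * fi / norm for pi, fi in zip(p, f)]
# Every charge sits at z = 0, so the z component of the field is strictly
# positive above the plane: the point drifts monotonically upward.
```

Running this, the final `p` sits far above the data line, consistent with the claim that charges flowing along field lines end up on a large hemisphere.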
1903.08605
Rui Gao
Rui Gao and Filip Tronarp and Simo S\"arkk\"a
Iterated Extended Kalman Smoother-based Variable Splitting for $L_1$-Regularized State Estimation
16 pages, 9 figures
null
10.1109/TSP.2019.2935868
null
cs.IT math.IT stat.ME
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we propose a new framework for solving state estimation problems with an additional sparsity-promoting $L_1$-regularizer term. We first formulate such problems as minimization of the sum of linear or nonlinear quadratic error terms and an extra regularizer, and then present novel algorithms which solve the linear and nonlinear cases. The methods are based on a combination of the iterated extended Kalman smoother and variable splitting techniques such as alternating direction method of multipliers (ADMM). We present a general algorithmic framework for variable splitting methods, where the iterative steps involving minimization of the nonlinear quadratic terms can be computed efficiently by iterated smoothing. Due to the use of state estimation algorithms, the proposed framework has a low per-iteration time complexity, which makes it suitable for solving a large-scale or high-dimensional state estimation problem. We also provide convergence results for the proposed algorithms. The experiments show the promising performance and speed-ups provided by the methods.
[ { "created": "Wed, 20 Mar 2019 16:38:22 GMT", "version": "v1" }, { "created": "Thu, 27 Jun 2019 11:12:17 GMT", "version": "v2" }, { "created": "Fri, 2 Aug 2019 16:50:05 GMT", "version": "v3" } ]
2019-10-02
[ [ "Gao", "Rui", "" ], [ "Tronarp", "Filip", "" ], [ "Särkkä", "Simo", "" ] ]
In this paper, we propose a new framework for solving state estimation problems with an additional sparsity-promoting $L_1$-regularizer term. We first formulate such problems as minimization of the sum of linear or nonlinear quadratic error terms and an extra regularizer, and then present novel algorithms which solve the linear and nonlinear cases. The methods are based on a combination of the iterated extended Kalman smoother and variable splitting techniques such as alternating direction method of multipliers (ADMM). We present a general algorithmic framework for variable splitting methods, where the iterative steps involving minimization of the nonlinear quadratic terms can be computed efficiently by iterated smoothing. Due to the use of state estimation algorithms, the proposed framework has a low per-iteration time complexity, which makes it suitable for solving a large-scale or high-dimensional state estimation problem. We also provide convergence results for the proposed algorithms. The experiments show the promising performance and speed-ups provided by the methods.
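The variable-splitting mechanism described above can be seen in miniature on a scalar lasso problem, $\min_x (x-b)^2 + \lambda|x|$: split the variable into $(x, z)$, alternate a quadratic $x$-update, a soft-threshold $z$-update, and a dual update. This toy ADMM loop is a sketch of the splitting alone; the paper's algorithms replace the quadratic solve with an iterated extended Kalman smoother over a state-space model:

```python
def soft_threshold(v, kappa):
    """Proximal operator of kappa * |.|."""
    return max(v - kappa, 0.0) - max(-v - kappa, 0.0)

def scalar_lasso_admm(b, lam, rho=1.0, iters=300):
    """ADMM for min_x (x - b)^2 + lam * |x| via the split f(x) + g(z)
    with f(x) = (x - b)^2, g(z) = lam * |z|, subject to x = z."""
    x = z = u = 0.0
    for _ in range(iters):
        x = (2.0 * b + rho * (z - u)) / (2.0 + rho)  # quadratic solve
        z = soft_threshold(x + u, lam / rho)          # prox of lam * |.|
        u += x - z                                    # scaled dual update
    return z
```

The scalar problem has the closed-form solution $\mathrm{sign}(b)\max(|b| - \lambda/2,\, 0)$, so the iterate can be checked directly, e.g. `scalar_lasso_admm(3.0, 2.0)` converges to `2.0`.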
1601.01648
Viorica Sofronie-Stokkermans
Werner Damm and Matthias Horbach and Viorica Sofronie-Stokkermans
Decidability of Verification of Safety Properties of Spatial Families of Linear Hybrid Automata
50 pages, AVACS Technical Report No. 111 (SFB/TR 14 AVACS)
null
null
null
cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider systems composed of an unbounded number of uniformly designed linear hybrid automata, whose dynamic behavior is determined by their relation to neighboring systems. We present a class of such systems and a class of safety properties whose verification can be reduced to the verification of (small) families of neighboring systems of bounded size, and identify situations in which such verification problems are decidable or fixed-parameter tractable, respectively. We illustrate the approach with an example from coordinated vehicle guidance, and describe an implementation which allows us to perform such verification tasks automatically.
[ { "created": "Thu, 7 Jan 2016 19:48:39 GMT", "version": "v1" } ]
2016-01-08
[ [ "Damm", "Werner", "" ], [ "Horbach", "Matthias", "" ], [ "Sofronie-Stokkermans", "Viorica", "" ] ]
We consider systems composed of an unbounded number of uniformly designed linear hybrid automata, whose dynamic behavior is determined by their relation to neighboring systems. We present a class of such systems and a class of safety properties whose verification can be reduced to the verification of (small) families of neighboring systems of bounded size, and identify situations in which such verification problems are decidable or fixed-parameter tractable, respectively. We illustrate the approach with an example from coordinated vehicle guidance, and describe an implementation which allows us to perform such verification tasks automatically.
1305.0907
Mostafa Dahshan
Mostafa H. Dahshan
Maximum-Bandwidth Node-Disjoint Paths
null
International Journal of Advanced Computer Science and Applications - IJACSA, Volume 3, Issue 3, March 2012, pp 48-56
null
null
cs.NI math.CO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents a new method for finding the node-disjoint paths with maximum combined bandwidth in communication networks. This is an NP-complete problem which can be solved optimally in exponential time using integer linear programming (ILP). The presented method uses a maximum-cost variant of Dijkstra's algorithm and a virtual-node representation to obtain the maximum-bandwidth node-disjoint paths. Through several simulations, we compare the performance of our method to a modern heuristic technique and to the ILP solution. We show that our proposed method runs in polynomial time and produces results that are almost identical to those of ILP, in a significantly lower execution time.
[ { "created": "Sat, 4 May 2013 10:16:51 GMT", "version": "v1" } ]
2013-05-07
[ [ "Dahshan", "Mostafa H.", "" ] ]
This paper presents a new method for finding the node-disjoint paths with maximum combined bandwidth in communication networks. This is an NP-complete problem which can be solved optimally in exponential time using integer linear programming (ILP). The presented method uses a maximum-cost variant of Dijkstra's algorithm and a virtual-node representation to obtain the maximum-bandwidth node-disjoint paths. Through several simulations, we compare the performance of our method to a modern heuristic technique and to the ILP solution. We show that our proposed method runs in polynomial time and produces results that are almost identical to those of ILP, in a significantly lower execution time.
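The maximum-cost Dijkstra variant mentioned above replaces the usual sum-and-minimize rule with a min-and-maximize rule: the value of a path is its bottleneck bandwidth, and the priority queue pops the widest frontier node first. Below is a sketch for a single widest path (node names and bandwidths are illustrative; the paper combines such a building block with a virtual-node construction to handle the disjoint-paths case):

```python
import heapq

def widest_path(adj, src, dst):
    """Maximum bottleneck bandwidth of any path from src to dst.
    adj: {node: [(neighbor, bandwidth), ...]}. Returns 0 if unreachable.
    A max-heap (negated keys) replaces Dijkstra's usual min-heap."""
    best = {src: float("inf")}
    heap = [(-best[src], src)]
    while heap:
        neg_bw, node = heapq.heappop(heap)
        bw = -neg_bw
        if bw < best.get(node, 0):
            continue  # stale heap entry
        if node == dst:
            return bw
        for nxt, cap in adj.get(node, []):
            cand = min(bw, cap)  # bottleneck along the extended path
            if cand > best.get(nxt, 0):
                best[nxt] = cand
                heapq.heappush(heap, (-cand, nxt))
    return 0

network = {"A": [("B", 5), ("C", 2)], "B": [("C", 3)], "C": []}
# A -> B -> C has bottleneck min(5, 3) = 3, beating the direct edge of 2.
```

The greedy argument mirrors Dijkstra's: when a node is popped with bandwidth `bw`, no other path can reach it with a wider bottleneck.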
2310.00867
Duc Hoang
Duc N.M Hoang, Minsik Cho, Thomas Merth, Mohammad Rastegari, Zhangyang Wang
Do Compressed LLMs Forget Knowledge? An Experimental Study with Practical Implications
null
null
null
null
cs.CL cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Compressing Large Language Models (LLMs) often leads to reduced performance, especially for knowledge-intensive tasks. In this work, we dive into how compression damages LLMs' inherent knowledge and the possible remedies. We start by proposing two conjectures on the nature of the damage: one is certain knowledge being forgotten (or erased) after LLM compression, hence necessitating the compressed model to (re)learn from data with additional parameters; the other presumes that knowledge is internally displaced and hence one requires merely "inference re-direction" with input-side augmentation such as prompting, to recover the knowledge-related performance. Extensive experiments are then designed to (in)validate the two conjectures. We observe the promise of prompting in comparison to model tuning; we further unlock prompting's potential by introducing a variant called Inference-time Dynamic Prompting (IDP), that can effectively increase prompt diversity without incurring any inference overhead. Our experiments consistently suggest that compared to the classical re-training alternatives such as LoRA, prompting with IDP leads to better or comparable post-compression performance recovery, while saving the extra parameter size by 21x and reducing inference latency by 60%. Our experiments hence strongly endorse the conjecture of "knowledge displaced" over "knowledge forgotten", and shed light on a new efficient mechanism to restore compressed LLM performance. We additionally visualize and analyze the different attention and activation patterns between prompted and re-trained models, demonstrating they achieve performance recovery in two different regimes.
[ { "created": "Mon, 2 Oct 2023 03:12:06 GMT", "version": "v1" }, { "created": "Sat, 14 Oct 2023 05:12:54 GMT", "version": "v2" }, { "created": "Fri, 16 Feb 2024 18:39:45 GMT", "version": "v3" } ]
2024-02-19
[ [ "Hoang", "Duc N. M", "" ], [ "Cho", "Minsik", "" ], [ "Merth", "Thomas", "" ], [ "Rastegari", "Mohammad", "" ], [ "Wang", "Zhangyang", "" ] ]
Compressing Large Language Models (LLMs) often leads to reduced performance, especially for knowledge-intensive tasks. In this work, we dive into how compression damages LLMs' inherent knowledge and the possible remedies. We start by proposing two conjectures on the nature of the damage: one is certain knowledge being forgotten (or erased) after LLM compression, hence necessitating the compressed model to (re)learn from data with additional parameters; the other presumes that knowledge is internally displaced and hence one requires merely "inference re-direction" with input-side augmentation such as prompting, to recover the knowledge-related performance. Extensive experiments are then designed to (in)validate the two conjectures. We observe the promise of prompting in comparison to model tuning; we further unlock prompting's potential by introducing a variant called Inference-time Dynamic Prompting (IDP), that can effectively increase prompt diversity without incurring any inference overhead. Our experiments consistently suggest that compared to the classical re-training alternatives such as LoRA, prompting with IDP leads to better or comparable post-compression performance recovery, while saving the extra parameter size by 21x and reducing inference latency by 60%. Our experiments hence strongly endorse the conjecture of "knowledge displaced" over "knowledge forgotten", and shed light on a new efficient mechanism to restore compressed LLM performance. We additionally visualize and analyze the different attention and activation patterns between prompted and re-trained models, demonstrating they achieve performance recovery in two different regimes.
1711.06232
Jia-Hong Huang
Jia-Hong Huang, Cuong Duc Dao, Modar Alfadly, Bernard Ghanem
A Novel Framework for Robustness Analysis of Visual QA Models
Accepted by the Thirty-Third AAAI Conference on Artificial Intelligence, (AAAI-19), as an oral paper
null
null
null
cs.CV cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep neural networks have been playing an essential role in many computer vision tasks including Visual Question Answering (VQA). Until recently, the study of their accuracy was the main focus of research, but now there is a trend toward assessing the robustness of these models against adversarial attacks by evaluating their tolerance to varying noise levels. In VQA, adversarial attacks can target the image and/or the proposed main question, and yet there is a lack of proper analysis of the latter. In this work, we propose a flexible framework that focuses on the language part of VQA that uses semantically relevant questions, dubbed basic questions, acting as controllable noise to evaluate the robustness of VQA models. We hypothesize that the level of noise is positively correlated to the similarity of a basic question to the main question. Hence, to apply noise on any given main question, we rank a pool of basic questions based on their similarity by casting this ranking task as a LASSO optimization problem. Then, we propose a novel robustness measure, R_score, and two large-scale basic question datasets (BQDs) in order to standardize robustness analysis for VQA models.
[ { "created": "Thu, 16 Nov 2017 18:27:49 GMT", "version": "v1" }, { "created": "Sun, 19 Nov 2017 05:47:07 GMT", "version": "v2" }, { "created": "Tue, 25 Dec 2018 04:08:27 GMT", "version": "v3" } ]
2018-12-27
[ [ "Huang", "Jia-Hong", "" ], [ "Dao", "Cuong Duc", "" ], [ "Alfadly", "Modar", "" ], [ "Ghanem", "Bernard", "" ] ]
Deep neural networks have been playing an essential role in many computer vision tasks including Visual Question Answering (VQA). Until recently, the study of their accuracy was the main focus of research, but now there is a trend toward assessing the robustness of these models against adversarial attacks by evaluating their tolerance to varying noise levels. In VQA, adversarial attacks can target the image and/or the proposed main question, and yet there is a lack of proper analysis of the latter. In this work, we propose a flexible framework that focuses on the language part of VQA that uses semantically relevant questions, dubbed basic questions, acting as controllable noise to evaluate the robustness of VQA models. We hypothesize that the level of noise is positively correlated to the similarity of a basic question to the main question. Hence, to apply noise on any given main question, we rank a pool of basic questions based on their similarity by casting this ranking task as a LASSO optimization problem. Then, we propose a novel robustness measure, R_score, and two large-scale basic question datasets (BQDs) in order to standardize robustness analysis for VQA models.
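Casting the ranking as a LASSO problem means finding a sparse weight vector over the pool of basic questions that best reconstructs the main question's feature vector; the weights then induce the similarity ranking. A minimal coordinate-descent sketch follows (the feature vectors, the λ value, and the assumption of nonzero columns are illustrative, not the paper's setup):

```python
def lasso_rank(B, y, lam, sweeps=100):
    """Coordinate descent for min_w 0.5 * ||y - B w||^2 + lam * ||w||_1,
    where column j of B is the feature vector of basic question j and y
    is the main question's feature vector (columns assumed nonzero).
    Returns the weights and the question indices sorted by descending
    weight, i.e. most similar first."""
    n, p = len(B), len(B[0])
    w = [0.0] * p
    col_sq = [sum(B[i][j] ** 2 for i in range(n)) for j in range(p)]
    for _ in range(sweeps):
        for j in range(p):
            # partial residual with coordinate j held out
            r = [y[i] - sum(B[i][k] * w[k] for k in range(p) if k != j)
                 for i in range(n)]
            rho = sum(B[i][j] * r[i] for i in range(n))
            # soft-threshold update for coordinate j
            w[j] = (max(rho - lam, 0.0) - max(-rho - lam, 0.0)) / col_sq[j]
    order = sorted(range(p), key=lambda j: -w[j])
    return w, order

# Toy pool: two orthogonal "basic questions"; the first matches y strongly.
weights, ranking = lasso_rank([[1.0, 0.0], [0.0, 1.0]], [3.0, 0.5], lam=1.0)
```

On this toy input the closed-form answer is `w = [2.0, 0.0]`: the L1 penalty zeros out the weakly related question, so only truly similar basic questions survive as "noise" candidates.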
1603.01080
Hossein Shokri Ghadikolaei
Federico Boccardi and Hossein Shokri-Ghadikolaei and Gabor Fodor and Elza Erkip and Carlo Fischione and Marios Kountouris and Petar Popovski and Michele Zorzi
Spectrum Pooling in MmWave Networks: Opportunities, Challenges, and Enablers
null
null
null
null
cs.IT cs.NI cs.SY math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Motivated by the intrinsic characteristics of mmWave technologies, we discuss the possibility of an authorization regime that allows spectrum sharing between multiple operators, also referred to as spectrum pooling. In particular, considering user rate as the performance measure, we assess the benefit of coordination among the networks of different operators, study the impact of beamforming both at the base stations and at the user terminals, and analyze the pooling performance at different frequency carriers. We also discuss the enabling spectrum mechanisms, architectures, and protocols required to make spectrum pooling work in real networks. Our initial results show that, from a technical perspective, spectrum pooling at mmWave has the potential for a more efficient spectrum use than a traditional exclusive spectrum allocation to a single operator. However, further studies are needed in order to reach a thorough understanding of this matter, and we hope that this paper will help stimulate further research in this area.
[ { "created": "Thu, 3 Mar 2016 12:59:45 GMT", "version": "v1" } ]
2016-03-04
[ [ "Boccardi", "Federico", "" ], [ "Shokri-Ghadikolaei", "Hossein", "" ], [ "Fodor", "Gabor", "" ], [ "Erkip", "Elza", "" ], [ "Fischione", "Carlo", "" ], [ "Kountouris", "Marios", "" ], [ "Popovski", "Petar", "" ], [ "Zorzi", "Michele", "" ] ]
Motivated by the intrinsic characteristics of mmWave technologies, we discuss the possibility of an authorization regime that allows spectrum sharing between multiple operators, also referred to as spectrum pooling. In particular, considering user rate as the performance measure, we assess the benefit of coordination among the networks of different operators, study the impact of beamforming both at the base stations and at the user terminals, and analyze the pooling performance at different frequency carriers. We also discuss the enabling spectrum mechanisms, architectures, and protocols required to make spectrum pooling work in real networks. Our initial results show that, from a technical perspective, spectrum pooling at mmWave has the potential for a more efficient spectrum use than a traditional exclusive spectrum allocation to a single operator. However, further studies are needed in order to reach a thorough understanding of this matter, and we hope that this paper will help stimulate further research in this area.
1810.12670
Ciriaco Andrea D'Angelo
Giovanni Abramo, Ciriaco Andrea D'Angelo
Accounting for gender research performance differences in ranking universities
null
Abramo, G., D'Angelo, C. A. (2015). Accounting for gender research performance differences in ranking universities. Current Science, 109(10), 1783-1789
10.18520/v109/i10/1783-1789
null
cs.DL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The literature on gender differences in research performance indicates a quite evident gap in favor of men over women. Beyond understanding the factors that could be at the basis of this phenomenon, it is worthwhile to understand whether it would be appropriate to conduct population-level evaluation in a manner that distinguishes by gender. In fact, if some factor structurally penalizes the performance of women researchers compared to men, then a comparative evaluation of organizations' performance that does not take gender into account will advantage those that employ more men, under parity in the capacities of their staffs. In this work we measure the differences in the performance and rank of research institutions as observed when gender is taken into account compared to when it is ignored. The study population consists of all Italian universities, with performance measured in the hard sciences for the period 2006-2010.
[ { "created": "Tue, 30 Oct 2018 11:32:25 GMT", "version": "v1" } ]
2018-10-31
[ [ "Abramo", "Giovanni", "" ], [ "D'Angelo", "Ciriaco Andrea", "" ] ]
The literature on gender differences in research performance indicates a quite evident gap in favor of men over women. Beyond understanding the factors that could be at the basis of this phenomenon, it is worthwhile to understand whether it would be appropriate to conduct population-level evaluation in a manner that distinguishes by gender. In fact, if some factor structurally penalizes the performance of women researchers compared to men, then a comparative evaluation of organizations' performance that does not take gender into account will advantage those that employ more men, under parity in the capacities of their staffs. In this work we measure the differences in the performance and rank of research institutions as observed when gender is taken into account compared to when it is ignored. The study population consists of all Italian universities, with performance measured in the hard sciences for the period 2006-2010.
2302.14453
Jo\~ao Henrique Inacio de Souza
Jo\~ao Henrique Inacio de Souza, Jos\'e Carlos Marinello Filho, Taufik Abr\~ao, Cristiano Panazio
Energy Efficiency and Throughput of Random Access Protocols for RIS-Aided IoT Networks
To appear in Proceedings of the IEEE 8th World Forum on Internet of Things
null
10.1109/WF-IoT54382.2022.10152283
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Green Internet of Things (IoT) aims to enable a sustainable smart world by making energy efficiency (EE) the main performance indicator for IoT hardware and software. With respect to network design, this implies developing energy-efficient communication protocols and network architectures adapted to the ubiquity of IoT machine-type devices (MTDs) and the sporadic traffic generated by them, while keeping power consumption at the MTD side low. In this sense, reconfigurable intelligent surfaces (RISs) have shown the capacity to significantly improve network coverage using mostly passive reflecting elements, drastically reducing the power expenditure. In this paper, we develop a realistic power consumption model and an expression for the overall system EE for RIS-aided IoT networks that adopt a two time-scale random access (RA) protocol to handle the uplink transmissions. Specifically, during each time slot of the RA protocol, the RIS covers a specific area of interest in the communication cell with a predefined set of phase-shift configurations, changing the channel qualities of the contending MTDs. Numerical results comparing the RA protocol performance reveal that access policies that exploit information about the channel qualities are suitable for green IoT networks, simultaneously attaining competitive EE and throughput combined with low power consumption at the MTD side.
[ { "created": "Tue, 28 Feb 2023 09:58:13 GMT", "version": "v1" } ]
2023-10-12
[ [ "de Souza", "João Henrique Inacio", "" ], [ "Filho", "José Carlos Marinello", "" ], [ "Abrão", "Taufik", "" ], [ "Panazio", "Cristiano", "" ] ]
Green Internet of Things (IoT) aims to enable a sustainable smart world by making energy efficiency (EE) the main performance indicator for IoT hardware and software. With respect to network design, this implies developing energy-efficient communication protocols and network architectures adapted to the ubiquity of IoT machine-type devices (MTDs) and the sporadic traffic generated by them, while keeping power consumption at the MTD side low. In this sense, reconfigurable intelligent surfaces (RISs) have shown the capacity to significantly improve network coverage using mostly passive reflecting elements, drastically reducing the power expenditure. In this paper, we develop a realistic power consumption model and an expression for the overall system EE for RIS-aided IoT networks that adopt a two time-scale random access (RA) protocol to handle the uplink transmissions. Specifically, during each time slot of the RA protocol, the RIS covers a specific area of interest in the communication cell with a predefined set of phase-shift configurations, changing the channel qualities of the contending MTDs. Numerical results comparing the RA protocol performance reveal that access policies that exploit information about the channel qualities are suitable for green IoT networks, simultaneously attaining competitive EE and throughput combined with low power consumption at the MTD side.
1904.01561
Kevin Yang
Kevin Yang, Kyle Swanson, Wengong Jin, Connor Coley, Philipp Eiden, Hua Gao, Angel Guzman-Perez, Timothy Hopper, Brian Kelley, Miriam Mathea, Andrew Palmer, Volker Settels, Tommi Jaakkola, Klavs Jensen, Regina Barzilay
Analyzing Learned Molecular Representations for Property Prediction
null
Journal of chemical information and modeling 59.8 (2019): 3370-3388
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Advancements in neural machinery have led to a wide range of algorithmic solutions for molecular property prediction. Two classes of models in particular have yielded promising results: neural networks applied to computed molecular fingerprints or expert-crafted descriptors, and graph convolutional neural networks that construct a learned molecular representation by operating on the graph structure of the molecule. However, recent literature has yet to clearly determine which of these two methods is superior when generalizing to new chemical space. Furthermore, prior research has rarely examined these new models in industrial research settings in comparison to the models already in use. In this paper, we benchmark models extensively on 19 public and 16 proprietary industrial datasets spanning a wide variety of chemical endpoints. In addition, we introduce a graph convolutional model that consistently matches or outperforms models using fixed molecular descriptors, as well as previous graph neural architectures, on both public and proprietary datasets. Our empirical findings indicate that while approaches based on these representations have yet to reach the level of experimental reproducibility, our proposed model nevertheless offers significant improvements over models currently used in industrial workflows.
[ { "created": "Tue, 2 Apr 2019 17:35:27 GMT", "version": "v1" }, { "created": "Tue, 18 Jun 2019 18:13:11 GMT", "version": "v2" }, { "created": "Fri, 12 Jul 2019 23:11:43 GMT", "version": "v3" }, { "created": "Tue, 30 Jul 2019 17:36:39 GMT", "version": "v4" }, { "created": "Wed, 20 Nov 2019 19:51:07 GMT", "version": "v5" } ]
2019-11-22
[ [ "Yang", "Kevin", "" ], [ "Swanson", "Kyle", "" ], [ "Jin", "Wengong", "" ], [ "Coley", "Connor", "" ], [ "Eiden", "Philipp", "" ], [ "Gao", "Hua", "" ], [ "Guzman-Perez", "Angel", "" ], [ "Hopper", "Timothy", "" ], [ "Kelley", "Brian", "" ], [ "Mathea", "Miriam", "" ], [ "Palmer", "Andrew", "" ], [ "Settels", "Volker", "" ], [ "Jaakkola", "Tommi", "" ], [ "Jensen", "Klavs", "" ], [ "Barzilay", "Regina", "" ] ]
Advancements in neural machinery have led to a wide range of algorithmic solutions for molecular property prediction. Two classes of models in particular have yielded promising results: neural networks applied to computed molecular fingerprints or expert-crafted descriptors, and graph convolutional neural networks that construct a learned molecular representation by operating on the graph structure of the molecule. However, recent literature has yet to clearly determine which of these two methods is superior when generalizing to new chemical space. Furthermore, prior research has rarely examined these new models in industrial research settings in comparison to the models already in use. In this paper, we benchmark models extensively on 19 public and 16 proprietary industrial datasets spanning a wide variety of chemical endpoints. In addition, we introduce a graph convolutional model that consistently matches or outperforms models using fixed molecular descriptors, as well as previous graph neural architectures, on both public and proprietary datasets. Our empirical findings indicate that while approaches based on these representations have yet to reach the level of experimental reproducibility, our proposed model nevertheless offers significant improvements over models currently used in industrial workflows.
1804.02573
Sankalp Arora
Sankalp Arora, Sanjiban Choudhury and Sebastian Scherer
Hindsight is Only 50/50: Unsuitability of MDP based Approximate POMDP Solvers for Multi-resolution Information Gathering
6 pages, 1 figure
null
null
null
cs.AI
http://creativecommons.org/publicdomain/zero/1.0/
Partially Observable Markov Decision Processes (POMDPs) offer an elegant framework for modeling sequential decision making in uncertain environments. Solving POMDPs online is an active area of research, and given the size of real-world problems, approximate solvers are used. Recently, a few approaches have been suggested for solving POMDPs by using MDP solvers in conjunction with imitation learning. MDP-based POMDP solvers work well in some cases while failing catastrophically in others. The main failure point of such solvers is the lack of motivation for MDP solvers to gain information, since under their assumption the environment is either already known as well as it can be, or the uncertainty will disappear after the next step. However, for POMDP problems, gaining information can lead to efficient solutions. In this paper we derive a set of conditions under which MDP-based POMDP solvers are provably suboptimal. We then use the well-known tiger problem to demonstrate this suboptimality. We show that multi-resolution, budgeted information gathering cannot be addressed using MDP-based POMDP solvers. This contribution helps identify the properties of a POMDP problem for which MDP-based POMDP solvers are inappropriate, enabling better design choices.
[ { "created": "Sat, 7 Apr 2018 16:27:33 GMT", "version": "v1" } ]
2018-04-10
[ [ "Arora", "Sankalp", "" ], [ "Choudhury", "Sanjiban", "" ], [ "Scherer", "Sebastian", "" ] ]
Partially Observable Markov Decision Processes (POMDPs) offer an elegant framework for modeling sequential decision making in uncertain environments. Solving POMDPs online is an active area of research, and given the size of real-world problems, approximate solvers are used. Recently, a few approaches have been suggested for solving POMDPs by using MDP solvers in conjunction with imitation learning. MDP-based POMDP solvers work well in some cases while failing catastrophically in others. The main failure point of such solvers is the lack of motivation for MDP solvers to gain information, since under their assumption the environment is either already known as well as it can be, or the uncertainty will disappear after the next step. However, for POMDP problems, gaining information can lead to efficient solutions. In this paper we derive a set of conditions under which MDP-based POMDP solvers are provably suboptimal. We then use the well-known tiger problem to demonstrate this suboptimality. We show that multi-resolution, budgeted information gathering cannot be addressed using MDP-based POMDP solvers. This contribution helps identify the properties of a POMDP problem for which MDP-based POMDP solvers are inappropriate, enabling better design choices.
2107.07662
EPTCS
Gilles Dowek (Inria and ENS Paris-Saclay)
Interacting Safely with an Unsafe Environment
In Proceedings LFMTP 2021, arXiv:2107.07376
EPTCS 337, 2021, pp. 30-38
10.4204/EPTCS.337.3
null
cs.LO
http://creativecommons.org/licenses/by/4.0/
We give a presentation of Pure type systems where contexts need not be well-formed and show that this presentation is equivalent to the usual one. The main motivation for this presentation is that, when we extend Pure type systems with computation rules, like in the logical framework Dedukti, we want to declare the constants before the computation rules that are needed to check the well-typedness of their type.
[ { "created": "Fri, 16 Jul 2021 01:44:04 GMT", "version": "v1" } ]
2021-07-19
[ [ "Dowek", "Gilles", "", "Inria and ENS Paris-Saclay" ] ]
We give a presentation of Pure type systems where contexts need not be well-formed and show that this presentation is equivalent to the usual one. The main motivation for this presentation is that, when we extend Pure type systems with computation rules, like in the logical framework Dedukti, we want to declare the constants before the computation rules that are needed to check the well-typedness of their type.
2111.12167
Hanhan Zhou
Hanhan Zhou, Tian Lan, Guru Venkataramani
PT-VTON: an Image-Based Virtual Try-On Network with Progressive Pose Attention Transfer
Short Version with 4 pages
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
The virtual try-on system has attracted great attention due to its potential to give customers a realistic, personalized product presentation in virtualized settings. In this paper, we present PT-VTON, a novel pose-transfer-based framework for cloth transfer that enables virtual try-on with arbitrary poses. PT-VTON can be applied in the fashion industry with minimal modification of existing systems while satisfying overall visual fashionability and detailed fabric appearance requirements. It enables efficient clothes transfer between model and user images with arbitrary poses and body shapes. We implement a prototype of PT-VTON and demonstrate that our system can match or surpass many other approaches in the face of drastic pose variation by preserving detailed human and fabric appearance characteristics. PT-VTON is shown to outperform alternative approaches on both machine-based quantitative metrics and qualitative results.
[ { "created": "Tue, 23 Nov 2021 21:51:08 GMT", "version": "v1" } ]
2021-11-25
[ [ "Zhou", "Hanhan", "" ], [ "Lan", "Tian", "" ], [ "Venkataramani", "Guru", "" ] ]
The virtual try-on system has attracted great attention due to its potential to give customers a realistic, personalized product presentation in virtualized settings. In this paper, we present PT-VTON, a novel pose-transfer-based framework for cloth transfer that enables virtual try-on with arbitrary poses. PT-VTON can be applied in the fashion industry with minimal modification of existing systems while satisfying overall visual fashionability and detailed fabric appearance requirements. It enables efficient clothes transfer between model and user images with arbitrary poses and body shapes. We implement a prototype of PT-VTON and demonstrate that our system can match or surpass many other approaches in the face of drastic pose variation by preserving detailed human and fabric appearance characteristics. PT-VTON is shown to outperform alternative approaches on both machine-based quantitative metrics and qualitative results.
1804.02792
Zeyu Chen
Jiaxuan Zhuo, Zeyu Chen, Jianhuang Lai, Guangcong Wang
Occluded Person Re-identification
6 pages, 7 figures, IEEE International Conference of Multimedia and Expo 2018
null
null
null
cs.CV cs.AI cs.MM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Person re-identification (re-id) suffers from a serious occlusion problem when applied to crowded public places. In this paper, we propose to retrieve a full-body person image by using a person image with occlusions. This differs significantly from the conventional person re-id problem, where it is assumed that person images are detected without any occlusion. We thus call this new problem occluded person re-identification. To address this new problem, we propose a novel Attention Framework of Person Body (AFPB) based on deep learning, consisting of 1) an Occlusion Simulator (OS), which automatically generates artificial occlusions for full-body person images, and 2) multi-task losses that force the neural network not only to discriminate a person's identity but also to determine whether a sample is from the occluded data distribution or the full-body data distribution. Experiments on a new occluded person re-id dataset and three existing benchmarks modified to include full-body and occluded person images show the superiority of the proposed method.
[ { "created": "Mon, 9 Apr 2018 01:56:53 GMT", "version": "v1" }, { "created": "Sun, 15 Apr 2018 02:00:47 GMT", "version": "v2" }, { "created": "Fri, 20 Apr 2018 14:22:34 GMT", "version": "v3" } ]
2018-04-23
[ [ "Zhuo", "Jiaxuan", "" ], [ "Chen", "Zeyu", "" ], [ "Lai", "Jianhuang", "" ], [ "Wang", "Guangcong", "" ] ]
Person re-identification (re-id) suffers from a serious occlusion problem when applied to crowded public places. In this paper, we propose to retrieve a full-body person image by using a person image with occlusions. This differs significantly from the conventional person re-id problem, where it is assumed that person images are detected without any occlusion. We thus call this new problem occluded person re-identification. To address this new problem, we propose a novel Attention Framework of Person Body (AFPB) based on deep learning, consisting of 1) an Occlusion Simulator (OS), which automatically generates artificial occlusions for full-body person images, and 2) multi-task losses that force the neural network not only to discriminate a person's identity but also to determine whether a sample is from the occluded data distribution or the full-body data distribution. Experiments on a new occluded person re-id dataset and three existing benchmarks modified to include full-body and occluded person images show the superiority of the proposed method.
2403.00841
Jingxiao Chen
Jingxiao Chen, Weiji Xie, Weinan Zhang, Yong Yu, Ying Wen
Offline Fictitious Self-Play for Competitive Games
null
null
null
null
cs.MA cs.AI cs.GT cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Offline Reinforcement Learning (RL) has received significant interest due to its ability to improve policies using previously collected datasets without online interactions. Despite its success in the single-agent setting, offline multi-agent RL remains a challenge, especially in competitive games. First, without knowledge of the game structure, it is impossible to interact with the opponents and apply self-play, a major learning paradigm for competitive games. Second, real-world datasets cannot cover the entire state and action space of the game, creating barriers to identifying a Nash equilibrium (NE). To address these issues, this paper introduces Off-FSP, the first practical model-free offline RL algorithm for competitive games. We start by simulating interactions with various opponents by adjusting the weights of the fixed dataset with importance sampling. This technique allows us to learn best responses to different opponents and employ the offline self-play learning framework. In this framework, we further implement Fictitious Self-Play (FSP) to approximate an NE. On partially covered real-world datasets, our method shows the potential to approach an NE by incorporating any single-agent offline RL method. Experimental results in Leduc Hold'em Poker show that our method significantly improves performance compared with state-of-the-art baselines.
[ { "created": "Thu, 29 Feb 2024 11:36:48 GMT", "version": "v1" } ]
2024-03-05
[ [ "Chen", "Jingxiao", "" ], [ "Xie", "Weiji", "" ], [ "Zhang", "Weinan", "" ], [ "Yu", "Yong", "" ], [ "Wen", "Ying", "" ] ]
Offline Reinforcement Learning (RL) has received significant interest due to its ability to improve policies using previously collected datasets without online interactions. Despite its success in the single-agent setting, offline multi-agent RL remains a challenge, especially in competitive games. First, without knowledge of the game structure, it is impossible to interact with the opponents and apply self-play, a major learning paradigm for competitive games. Second, real-world datasets cannot cover the entire state and action space of the game, creating barriers to identifying a Nash equilibrium (NE). To address these issues, this paper introduces Off-FSP, the first practical model-free offline RL algorithm for competitive games. We start by simulating interactions with various opponents by adjusting the weights of the fixed dataset with importance sampling. This technique allows us to learn best responses to different opponents and employ the offline self-play learning framework. In this framework, we further implement Fictitious Self-Play (FSP) to approximate an NE. On partially covered real-world datasets, our method shows the potential to approach an NE by incorporating any single-agent offline RL method. Experimental results in Leduc Hold'em Poker show that our method significantly improves performance compared with state-of-the-art baselines.
1908.04422
Rui Chen
Rui Chen, Songfang Han, Jing Xu, Hao Su
Point-Based Multi-View Stereo Network
Accepted as ICCV 2019 oral presentation
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce Point-MVSNet, a novel point-based deep framework for multi-view stereo (MVS). Distinct from existing cost volume approaches, our method directly processes the target scene as point clouds. More specifically, our method predicts the depth in a coarse-to-fine manner. We first generate a coarse depth map, convert it into a point cloud and refine the point cloud iteratively by estimating the residual between the depth of the current iteration and that of the ground truth. Our network leverages 3D geometry priors and 2D texture information jointly and effectively by fusing them into a feature-augmented point cloud, and processes the point cloud to estimate the 3D flow for each point. This point-based architecture allows higher accuracy, greater computational efficiency, and more flexibility than cost-volume-based counterparts. Experimental results show that our approach achieves a significant improvement in reconstruction quality compared with state-of-the-art methods on the DTU and Tanks and Temples datasets. Our source code and trained models are available at https://github.com/callmeray/PointMVSNet .
[ { "created": "Mon, 12 Aug 2019 22:21:52 GMT", "version": "v1" } ]
2019-08-14
[ [ "Chen", "Rui", "" ], [ "Han", "Songfang", "" ], [ "Xu", "Jing", "" ], [ "Su", "Hao", "" ] ]
We introduce Point-MVSNet, a novel point-based deep framework for multi-view stereo (MVS). Distinct from existing cost volume approaches, our method directly processes the target scene as point clouds. More specifically, our method predicts the depth in a coarse-to-fine manner. We first generate a coarse depth map, convert it into a point cloud and refine the point cloud iteratively by estimating the residual between the depth of the current iteration and that of the ground truth. Our network leverages 3D geometry priors and 2D texture information jointly and effectively by fusing them into a feature-augmented point cloud, and processes the point cloud to estimate the 3D flow for each point. This point-based architecture allows higher accuracy, greater computational efficiency, and more flexibility than cost-volume-based counterparts. Experimental results show that our approach achieves a significant improvement in reconstruction quality compared with state-of-the-art methods on the DTU and Tanks and Temples datasets. Our source code and trained models are available at https://github.com/callmeray/PointMVSNet .
2205.02270
Tian-Sheuan Chang
Kuo-Wei Chang, Tian-Sheuan Chang
VWA: Hardware Efficient Vectorwise Accelerator for Convolutional Neural Network
11 pages, 21 figures
in IEEE Transactions on Circuits and Systems I: Regular Papers, vol. 67, no. 1, pp. 145-154, Jan. 2020
10.1109/TCSI.2019.2942529
null
cs.AR
http://creativecommons.org/licenses/by-nc-sa/4.0/
Hardware accelerators for convolutional neural networks (CNNs) enable real-time applications of artificial intelligence technology. However, most existing designs suffer from low hardware utilization or high area cost due to complex dataflow. This paper proposes a hardware-efficient vectorwise CNN accelerator that adopts a 3$\times$3-filter-optimized systolic array using 1-D broadcast dataflow to generate partial sums. This enables easy reconfiguration for different kinds of kernels with interleaved or elementwise input dataflow. This simple and regular dataflow results in low area cost while attaining high hardware utilization. The presented design achieves 99\%, 97\%, 93.7\%, and 94\% hardware utilization for VGG-16, ResNet-34, GoogLeNet, and MobileNet, respectively. Hardware implementation in TSMC 40nm technology takes a 266.9K NAND gate count and 191KB of SRAM to support 168GOPS throughput, and consumes only 154.98mW when running at a 500MHz operating frequency, offering superior area and power efficiency over other designs.
[ { "created": "Mon, 2 May 2022 09:55:58 GMT", "version": "v1" } ]
2022-05-06
[ [ "Chang", "Kuo-Wei", "" ], [ "Chang", "Tian-Sheuan", "" ] ]
Hardware accelerators for convolutional neural networks (CNNs) enable real-time applications of artificial intelligence technology. However, most existing designs suffer from low hardware utilization or high area cost due to complex dataflow. This paper proposes a hardware-efficient vectorwise CNN accelerator that adopts a 3$\times$3-filter-optimized systolic array using 1-D broadcast dataflow to generate partial sums. This enables easy reconfiguration for different kinds of kernels with interleaved or elementwise input dataflow. This simple and regular dataflow results in low area cost while attaining high hardware utilization. The presented design achieves 99\%, 97\%, 93.7\%, and 94\% hardware utilization for VGG-16, ResNet-34, GoogLeNet, and MobileNet, respectively. Hardware implementation in TSMC 40nm technology takes a 266.9K NAND gate count and 191KB of SRAM to support 168GOPS throughput, and consumes only 154.98mW when running at a 500MHz operating frequency, offering superior area and power efficiency over other designs.
2302.09584
Xiangyu Zhou
Xiangyu Zhou, Qianru Wei, Yuhui Zhang
DGP-Net: Dense Graph Prototype Network for Few-Shot SAR Target Recognition
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The inevitable feature deviation of synthetic aperture radar (SAR) images due to the special imaging principle (depression angle variation) leads to poor recognition accuracy, especially in few-shot learning (FSL). To deal with this problem, we propose a dense graph prototype network (DGP-Net) that eliminates the feature deviation by learning potential features and classifies by learning the feature distribution. The role of the prototype in this model is to address the large distances between same-class samples caused by the contingency of single sampling in FSL, and to enhance the robustness of the model. Experimental results on the MSTAR dataset show that DGP-Net yields good classification results for SAR images with different depression angles and higher recognition accuracy than typical FSL methods.
[ { "created": "Sun, 19 Feb 2023 14:33:28 GMT", "version": "v1" } ]
2023-02-21
[ [ "Zhou", "Xiangyu", "" ], [ "Wei", "Qianru", "" ], [ "Zhang", "Yuhui", "" ] ]
The inevitable feature deviation of synthetic aperture radar (SAR) images due to the special imaging principle (depression angle variation) leads to poor recognition accuracy, especially in few-shot learning (FSL). To deal with this problem, we propose a dense graph prototype network (DGP-Net) that eliminates the feature deviation by learning potential features and classifies by learning the feature distribution. The role of the prototype in this model is to address the large distances between same-class samples caused by the contingency of single sampling in FSL, and to enhance the robustness of the model. Experimental results on the MSTAR dataset show that DGP-Net yields good classification results for SAR images with different depression angles and higher recognition accuracy than typical FSL methods.
2111.03751
Peng Gao
Peng Gao, Brian Reily, Rui Guo, Hongsheng Lu, Qingzhao Zhu and Hao Zhang
Asynchronous Collaborative Localization by Integrating Spatiotemporal Graph Learning with Model-Based Estimation
null
null
null
null
cs.RO cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Collaborative localization is an essential capability for a team of robots, such as connected vehicles, to collaboratively estimate object locations from multiple perspectives through reliable cooperation. To enable collaborative localization, four key challenges must be addressed, including modeling complex relationships between observed objects, fusing observations from an arbitrary number of collaborating robots, quantifying localization uncertainty, and addressing the latency of robot communications. In this paper, we introduce a novel approach that integrates uncertainty-aware spatiotemporal graph learning and model-based state estimation for a team of robots to collaboratively localize objects. Specifically, we introduce a new uncertainty-aware graph learning model that learns spatiotemporal graphs to represent historical motions of the objects observed by each robot over time and provides uncertainties in object localization. Moreover, we propose a novel method for integrated learning and model-based state estimation, which fuses asynchronous observations obtained from an arbitrary number of robots for collaborative localization. We evaluate our approach in two collaborative object localization scenarios in simulations and on real robots. Experimental results show that our approach outperforms previous methods and achieves state-of-the-art performance on asynchronous collaborative localization.
[ { "created": "Fri, 5 Nov 2021 22:48:13 GMT", "version": "v1" } ]
2021-11-09
[ [ "Gao", "Peng", "" ], [ "Reily", "Brian", "" ], [ "Guo", "Rui", "" ], [ "Lu", "Hongsheng", "" ], [ "Zhu", "Qingzhao", "" ], [ "Zhang", "Hao", "" ] ]
Collaborative localization is an essential capability for a team of robots, such as connected vehicles, to collaboratively estimate object locations from multiple perspectives through reliable cooperation. To enable collaborative localization, four key challenges must be addressed, including modeling complex relationships between observed objects, fusing observations from an arbitrary number of collaborating robots, quantifying localization uncertainty, and addressing the latency of robot communications. In this paper, we introduce a novel approach that integrates uncertainty-aware spatiotemporal graph learning and model-based state estimation for a team of robots to collaboratively localize objects. Specifically, we introduce a new uncertainty-aware graph learning model that learns spatiotemporal graphs to represent historical motions of the objects observed by each robot over time and provides uncertainties in object localization. Moreover, we propose a novel method for integrated learning and model-based state estimation, which fuses asynchronous observations obtained from an arbitrary number of robots for collaborative localization. We evaluate our approach in two collaborative object localization scenarios in simulations and on real robots. Experimental results show that our approach outperforms previous methods and achieves state-of-the-art performance on asynchronous collaborative localization.
2206.03544
Roman Beliy
Ganit Kupershmidt, Roman Beliy, Guy Gaziv, Michal Irani
A Penny for Your (visual) Thoughts: Self-Supervised Reconstruction of Natural Movies from Brain Activity
null
null
null
null
cs.CV cs.AI cs.LG
http://creativecommons.org/licenses/by-nc-sa/4.0/
Reconstructing natural videos from fMRI brain recordings is very challenging for two main reasons: (i) as fMRI data acquisition is difficult, we only have a limited amount of supervised samples, which is not enough to cover the huge space of natural videos; and (ii) the temporal resolution of fMRI recordings is much lower than the frame rate of natural videos. In this paper, we propose a self-supervised approach for natural-movie reconstruction. By employing cycle-consistency over encoding-decoding of natural videos, we can (i) exploit the full frame rate of the training videos rather than being limited to clips that correspond to fMRI recordings, and (ii) exploit massive amounts of external natural videos which the subjects never saw inside the fMRI machine. These enable increasing the applicable training data by several orders of magnitude, introducing natural video priors into the decoding network, as well as temporal coherence. Our approach significantly outperforms competing methods, since those train only on the limited supervised data. We further introduce a new and simple temporal prior of natural videos which, when folded into our fMRI decoder, further allows us to reconstruct videos at a higher frame rate (HFR), up to 8x the original fMRI sample rate.
[ { "created": "Tue, 7 Jun 2022 19:27:22 GMT", "version": "v1" }, { "created": "Thu, 9 Jun 2022 01:16:19 GMT", "version": "v2" }, { "created": "Fri, 10 Jun 2022 22:15:21 GMT", "version": "v3" } ]
2022-06-14
[ [ "Kupershmidt", "Ganit", "" ], [ "Beliy", "Roman", "" ], [ "Gaziv", "Guy", "" ], [ "Irani", "Michal", "" ] ]
Reconstructing natural videos from fMRI brain recordings is very challenging for two main reasons: (i) as fMRI data acquisition is difficult, we only have a limited amount of supervised samples, which is not enough to cover the huge space of natural videos; and (ii) the temporal resolution of fMRI recordings is much lower than the frame rate of natural videos. In this paper, we propose a self-supervised approach for natural-movie reconstruction. By employing cycle-consistency over encoding-decoding of natural videos, we can (i) exploit the full frame rate of the training videos rather than being limited to clips that correspond to fMRI recordings, and (ii) exploit massive amounts of external natural videos which the subjects never saw inside the fMRI machine. These enable increasing the applicable training data by several orders of magnitude, introducing natural video priors into the decoding network, as well as temporal coherence. Our approach significantly outperforms competing methods, since those train only on the limited supervised data. We further introduce a new and simple temporal prior of natural videos which, when folded into our fMRI decoder, further allows us to reconstruct videos at a higher frame rate (HFR), up to 8x the original fMRI sample rate.
2310.01937
Ziqi Xu
Ziqi Xu, Debo Cheng, Jiuyong Li, Jixue Liu, Lin Liu, Kui Yu
Causal Inference with Conditional Front-Door Adjustment and Identifiable Variational Autoencoder
null
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
An essential and challenging problem in causal inference is causal effect estimation from observational data. The problem becomes more difficult in the presence of unobserved confounding variables. The front-door adjustment is a practical approach for dealing with unobserved confounding variables. However, the restrictions of the standard front-door adjustment are difficult to satisfy in practice. In this paper, we relax some of these restrictions by proposing the concept of conditional front-door (CFD) adjustment and develop a theorem that guarantees the causal effect identifiability of CFD adjustment. Furthermore, as a CFD variable is often not given in practice, it is desirable to learn it from data. By leveraging the ability of deep generative models, we propose CFDiVAE to learn the representation of the CFD adjustment variable directly from data with the identifiable Variational AutoEncoder, and we formally prove the model's identifiability. Extensive experiments on synthetic datasets validate the effectiveness of CFDiVAE and its superiority over existing methods. The experiments also show that the performance of CFDiVAE is less sensitive to the causal strength of unobserved confounding variables. We further apply CFDiVAE to a real-world dataset to demonstrate its potential application.
[ { "created": "Tue, 3 Oct 2023 10:24:44 GMT", "version": "v1" } ]
2023-10-04
[ [ "Xu", "Ziqi", "" ], [ "Cheng", "Debo", "" ], [ "Li", "Jiuyong", "" ], [ "Liu", "Jixue", "" ], [ "Liu", "Lin", "" ], [ "Yu", "Kui", "" ] ]
An essential and challenging problem in causal inference is causal effect estimation from observational data. The problem becomes more difficult in the presence of unobserved confounding variables. The front-door adjustment is a practical approach for dealing with unobserved confounding variables. However, the restrictions of the standard front-door adjustment are difficult to satisfy in practice. In this paper, we relax some of these restrictions by proposing the concept of conditional front-door (CFD) adjustment and develop a theorem that guarantees the causal effect identifiability of CFD adjustment. Furthermore, as a CFD variable is often not given in practice, it is desirable to learn it from data. By leveraging the ability of deep generative models, we propose CFDiVAE to learn the representation of the CFD adjustment variable directly from data with the identifiable Variational AutoEncoder, and we formally prove the model's identifiability. Extensive experiments on synthetic datasets validate the effectiveness of CFDiVAE and its superiority over existing methods. The experiments also show that the performance of CFDiVAE is less sensitive to the causal strength of unobserved confounding variables. We further apply CFDiVAE to a real-world dataset to demonstrate its potential application.
1511.07480
Radu Curticapean
Radu Curticapean
Parity Separation: A Scientifically Proven Method for Permanent Weight Loss
14 pages
null
null
null
cs.CC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Given an edge-weighted graph G, let PerfMatch(G) denote the weighted sum over all perfect matchings M in G, weighting each matching M by the product of weights of edges in M. If G is unweighted, this plainly counts the perfect matchings of G. In this paper, we introduce parity separation, a new method for reducing PerfMatch to unweighted instances: For graphs G with edge-weights -1 and 1, we construct two unweighted graphs G1 and G2 such that PerfMatch(G) = PerfMatch(G1) - PerfMatch(G2). This yields a novel weight removal technique for counting perfect matchings, in addition to those known from classical #P-hardness proofs. We derive the following applications: 1. An alternative #P-completeness proof for counting unweighted perfect matchings. 2. C=P-completeness for deciding whether two given unweighted graphs have the same number of perfect matchings. To the best of our knowledge, this is the first C=P-completeness result for the "equality-testing version" of any natural counting problem that is not already #P-hard under parsimonious reductions. 3. An alternative tight lower bound for counting unweighted perfect matchings under the counting exponential-time hypothesis #ETH. Our technique is based upon matchgates and the Holant framework. To make our #P-hardness proof self-contained, we also apply matchgates for an alternative #P-hardness proof of PerfMatch on graphs with edge-weights -1 and 1.
[ { "created": "Mon, 23 Nov 2015 21:59:00 GMT", "version": "v1" } ]
2015-11-25
[ [ "Curticapean", "Radu", "" ] ]
Given an edge-weighted graph G, let PerfMatch(G) denote the weighted sum over all perfect matchings M in G, weighting each matching M by the product of weights of edges in M. If G is unweighted, this plainly counts the perfect matchings of G. In this paper, we introduce parity separation, a new method for reducing PerfMatch to unweighted instances: For graphs G with edge-weights -1 and 1, we construct two unweighted graphs G1 and G2 such that PerfMatch(G) = PerfMatch(G1) - PerfMatch(G2). This yields a novel weight removal technique for counting perfect matchings, in addition to those known from classical #P-hardness proofs. We derive the following applications: 1. An alternative #P-completeness proof for counting unweighted perfect matchings. 2. C=P-completeness for deciding whether two given unweighted graphs have the same number of perfect matchings. To the best of our knowledge, this is the first C=P-completeness result for the "equality-testing version" of any natural counting problem that is not already #P-hard under parsimonious reductions. 3. An alternative tight lower bound for counting unweighted perfect matchings under the counting exponential-time hypothesis #ETH. Our technique is based upon matchgates and the Holant framework. To make our #P-hardness proof self-contained, we also apply matchgates for an alternative #P-hardness proof of PerfMatch on graphs with edge-weights -1 and 1.
2401.17451
Chen-Feng Liu
Chen-Feng Liu, Nirmal D. Wickramasinghe, Himal A. Suraweera, Mehdi Bennis, Merouane Debbah
URLLC-Aware Proactive UAV Placement in Internet of Vehicles
Accepted in the IEEE Transactions on Intelligent Transportation Systems
null
10.1109/TITS.2024.3352971
null
cs.NI eess.SP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Unmanned aerial vehicles (UAVs) are envisioned to provide diverse services from the air. The service quality may rely on the wireless performance, which is affected by the UAV's position. In this paper, we focus on the UAV placement problem in the Internet of Vehicles, where the UAV is deployed to monitor the road traffic and sends the monitored videos to vehicles. The studied problem is formulated as video resolution maximization by optimizing over the UAV's position. Moreover, we take into account the maximal transmission delay and impose a probabilistic constraint. To solve the formulated problem, we first leverage the techniques in extreme value theory (EVT) and Gaussian process regression (GPR) to characterize the influence of the UAV's position on the delay performance. Based on this characterization, we subsequently propose a proactive resolution selection and UAV placement approach, which adaptively places the UAV according to the geographic distribution of vehicles. Numerical results justify the joint usage of EVT and GPR for maximal delay characterization. Through investigating the maximal transmission delay, the proposed approach nearly achieves the optimal performance when vehicles are evenly distributed, and reduces the 999-th 1000-quantile by 10% and 19% over two baselines when vehicles are unevenly distributed.
[ { "created": "Tue, 30 Jan 2024 21:28:31 GMT", "version": "v1" } ]
2024-02-01
[ [ "Liu", "Chen-Feng", "" ], [ "Wickramasinghe", "Nirmal D.", "" ], [ "Suraweera", "Himal A.", "" ], [ "Bennis", "Mehdi", "" ], [ "Debbah", "Merouane", "" ] ]
Unmanned aerial vehicles (UAVs) are envisioned to provide diverse services from the air. The service quality may rely on the wireless performance, which is affected by the UAV's position. In this paper, we focus on the UAV placement problem in the Internet of Vehicles, where the UAV is deployed to monitor the road traffic and sends the monitored videos to vehicles. The studied problem is formulated as video resolution maximization by optimizing over the UAV's position. Moreover, we take into account the maximal transmission delay and impose a probabilistic constraint. To solve the formulated problem, we first leverage the techniques in extreme value theory (EVT) and Gaussian process regression (GPR) to characterize the influence of the UAV's position on the delay performance. Based on this characterization, we subsequently propose a proactive resolution selection and UAV placement approach, which adaptively places the UAV according to the geographic distribution of vehicles. Numerical results justify the joint usage of EVT and GPR for maximal delay characterization. Through investigating the maximal transmission delay, the proposed approach nearly achieves the optimal performance when vehicles are evenly distributed, and reduces the 999-th 1000-quantile by 10% and 19% over two baselines when vehicles are unevenly distributed.
2205.06376
Heinrich van Deventer Mr
Heinrich van Deventer, Pieter Janse van Rensburg, Anna Bosman
KASAM: Spline Additive Models for Function Approximation
null
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
Neural networks have been criticised for their inability to perform continual learning due to catastrophic forgetting and rapid unlearning of a past concept when a new concept is introduced. Catastrophic forgetting can be alleviated by specifically designed models and training techniques. This paper outlines a novel Spline Additive Model (SAM). SAM exhibits intrinsic memory retention with sufficient expressive power for many practical tasks, but is not a universal function approximator. SAM is extended with the Kolmogorov-Arnold representation theorem to a novel universal function approximator, called the Kolmogorov-Arnold Spline Additive Model - KASAM. The memory retention, expressive power and limitations of SAM and KASAM are illustrated analytically and empirically. SAM exhibited robust but imperfect memory retention, with small regions of overlapping interference in sequential learning tasks. KASAM exhibited greater susceptibility to catastrophic forgetting. KASAM in combination with pseudo-rehearsal training techniques exhibited superior performance in regression tasks and memory retention.
[ { "created": "Thu, 12 May 2022 21:50:04 GMT", "version": "v1" } ]
2022-05-16
[ [ "van Deventer", "Heinrich", "" ], [ "van Rensburg", "Pieter Janse", "" ], [ "Bosman", "Anna", "" ] ]
Neural networks have been criticised for their inability to perform continual learning due to catastrophic forgetting and rapid unlearning of a past concept when a new concept is introduced. Catastrophic forgetting can be alleviated by specifically designed models and training techniques. This paper outlines a novel Spline Additive Model (SAM). SAM exhibits intrinsic memory retention with sufficient expressive power for many practical tasks, but is not a universal function approximator. SAM is extended with the Kolmogorov-Arnold representation theorem to a novel universal function approximator, called the Kolmogorov-Arnold Spline Additive Model - KASAM. The memory retention, expressive power and limitations of SAM and KASAM are illustrated analytically and empirically. SAM exhibited robust but imperfect memory retention, with small regions of overlapping interference in sequential learning tasks. KASAM exhibited greater susceptibility to catastrophic forgetting. KASAM in combination with pseudo-rehearsal training techniques exhibited superior performance in regression tasks and memory retention.
2108.07827
Tharindu Adikari
Tharindu B. Adikari, Stark C. Draper
Compressing gradients by exploiting temporal correlation in momentum-SGD
This paper was presented in part at the 11th International Symposium on Topics in Coding (ISTC), Montreal, QC, Canada, August 2021, and the paper has been accepted for publication in the IEEE Journal on Selected Areas in Information Theory (JSAIT) Volume 2, Issue 3 (2021), https://ieeexplore.ieee.org/document/9511618
IEEE Journal on Selected Areas in Information Theory (JSAIT) Volume 2, Issue 3 (2021)
10.1109/JSAIT.2021.3103494
null
cs.LG cs.AI cs.DC
http://creativecommons.org/licenses/by-nc-nd/4.0/
An increasing bottleneck in decentralized optimization is communication. Bigger models and growing datasets mean that decentralization of computation is important and that the amount of information exchanged is quickly growing. While compression techniques have been introduced to cope with the latter, none has considered leveraging the temporal correlations that exist in consecutive vector updates. An important example is distributed momentum-SGD, where temporal correlation is enhanced by the low-pass-filtering effect of applying momentum. In this paper we design and analyze compression methods that exploit temporal correlation in systems both with and without error-feedback. Experiments with the ImageNet dataset demonstrate that our proposed methods offer significant reduction in the rate of communication with only a negligible increase in computation complexity. We further analyze the convergence of SGD when compression is applied with error-feedback. In the literature, convergence guarantees are developed only for compressors that provide error-bounds point-wise, i.e., for each input to the compressor. In contrast, many important codes (e.g. rate-distortion codes) provide error-bounds only in expectation and thus provide a more general guarantee. In this paper we prove the convergence of SGD under an expected error assumption by establishing a bound for the minimum gradient norm.
[ { "created": "Tue, 17 Aug 2021 18:04:06 GMT", "version": "v1" } ]
2021-08-19
[ [ "Adikari", "Tharindu B.", "" ], [ "Draper", "Stark C.", "" ] ]
An increasing bottleneck in decentralized optimization is communication. Bigger models and growing datasets mean that decentralization of computation is important and that the amount of information exchanged is quickly growing. While compression techniques have been introduced to cope with the latter, none has considered leveraging the temporal correlations that exist in consecutive vector updates. An important example is distributed momentum-SGD, where temporal correlation is enhanced by the low-pass-filtering effect of applying momentum. In this paper we design and analyze compression methods that exploit temporal correlation in systems both with and without error-feedback. Experiments with the ImageNet dataset demonstrate that our proposed methods offer significant reduction in the rate of communication with only a negligible increase in computation complexity. We further analyze the convergence of SGD when compression is applied with error-feedback. In the literature, convergence guarantees are developed only for compressors that provide error-bounds point-wise, i.e., for each input to the compressor. In contrast, many important codes (e.g. rate-distortion codes) provide error-bounds only in expectation and thus provide a more general guarantee. In this paper we prove the convergence of SGD under an expected error assumption by establishing a bound for the minimum gradient norm.
1812.07613
Daniel Hernandez Garcia
Pablo G. Esteban, Daniel Hern\'andez Garc\'ia, Hee Rin Lee, Pauline Chevalier, Paul Baxter, Cindy L. Bethel, Jainendra Shukla, Joan Oliver, Dom\`enec Puig, Jason R. Wilson, Linda Tickle-Degnen, Madeleine Bartlett, Tony Belpaeme, Serge Thill, Kim Baraka, Francisco S. Melo, Manuela Veloso, David Becerra, Maja Matari\'c, Eduard Fosch-Villaronga, Jordi Albo-Canals, Gloria Beraldo, Emanuele Menegatti, Valentina De Tommasi, Roberto Mancin, Franca Benini, Zachary Henkel, Kenna Baugus, David C. May, Lucile Dupuy, Wendy A. Rogers, Ronit Feingold Polak, Shelly Levy-Tzedek, Dagoberto Cruz-Sandoval, Jesus Favela, Michelle J. Johnson, Mayumi Mohan, Rochelle Mendonca
Proceedings of the Workshop on Social Robots in Therapy: Focusing on Autonomy and Ethical Challenges
25 pages, editors for the proceedings: Pablo G. Esteban, Daniel Hern\'andez Garc\'ia, Hee Rin Lee, Pauline Chevalier, Paul Baxter, Cindy Bethel
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Robot-Assisted Therapy (RAT) has successfully been used in HRI research by including social robots in health-care interventions, by virtue of their ability to engage human users in both social and emotional dimensions. Research projects on this topic exist all over the globe, in the USA, Europe, and Asia. All of these projects share the ambitious goal of increasing the well-being of a vulnerable population. Typical work in RAT is performed using remote-controlled robots, a technique called Wizard-of-Oz (WoZ). The robot is usually controlled, unbeknownst to the patient, by a human operator. However, WoZ has been demonstrated not to be a sustainable technique in the long term. Providing the robots with autonomy (while remaining under the supervision of the therapist) has the potential to lighten the therapist's burden, not only in the therapeutic session itself but also in longer-term diagnostic tasks. Therefore, there is a need for exploring several degrees of autonomy in social robots used in therapy. Increasing the autonomy of robots might also bring about a new set of challenges. In particular, there will be a need to answer new ethical questions regarding the use of robots with a vulnerable population, as well as a need to ensure ethically-compliant robot behaviours. Therefore, in this workshop we want to gather findings and explore which degree of autonomy might help to improve health-care interventions and how we can overcome the ethical challenges inherent to it.
[ { "created": "Tue, 18 Dec 2018 19:30:04 GMT", "version": "v1" } ]
2018-12-20
[ [ "Esteban", "Pablo G.", "" ], [ "García", "Daniel Hernández", "" ], [ "Lee", "Hee Rin", "" ], [ "Chevalier", "Pauline", "" ], [ "Baxter", "Paul", "" ], [ "Bethel", "Cindy L.", "" ], [ "Shukla", "Jainendra", "" ], [ "Oliver", "Joan", "" ], [ "Puig", "Domènec", "" ], [ "Wilson", "Jason R.", "" ], [ "Tickle-Degnen", "Linda", "" ], [ "Bartlett", "Madeleine", "" ], [ "Belpaeme", "Tony", "" ], [ "Thill", "Serge", "" ], [ "Baraka", "Kim", "" ], [ "Melo", "Francisco S.", "" ], [ "Veloso", "Manuela", "" ], [ "Becerra", "David", "" ], [ "Matarić", "Maja", "" ], [ "Fosch-Villaronga", "Eduard", "" ], [ "Albo-Canals", "Jordi", "" ], [ "Beraldo", "Gloria", "" ], [ "Menegatti", "Emanuele", "" ], [ "De Tommasi", "Valentina", "" ], [ "Mancin", "Roberto", "" ], [ "Benini", "Franca", "" ], [ "Henkel", "Zachary", "" ], [ "Baugus", "Kenna", "" ], [ "May", "David C.", "" ], [ "Dupuy", "Lucile", "" ], [ "Rogers", "Wendy A.", "" ], [ "Polak", "Ronit Feingold", "" ], [ "Levy-Tzedek", "Shelly", "" ], [ "Cruz-Sandoval", "Dagoberto", "" ], [ "Favela", "Jesus", "" ], [ "Johnson", "Michelle J.", "" ], [ "Mohan", "Mayumi", "" ], [ "Mendonca", "Rochelle", "" ] ]
Robot-Assisted Therapy (RAT) has successfully been used in HRI research by including social robots in health-care interventions, by virtue of their ability to engage human users in both social and emotional dimensions. Research projects on this topic exist all over the globe, in the USA, Europe, and Asia. All of these projects share the ambitious goal of increasing the well-being of a vulnerable population. Typical work in RAT is performed using remote-controlled robots, a technique called Wizard-of-Oz (WoZ). The robot is usually controlled, unbeknownst to the patient, by a human operator. However, WoZ has been demonstrated not to be a sustainable technique in the long term. Providing the robots with autonomy (while remaining under the supervision of the therapist) has the potential to lighten the therapist's burden, not only in the therapeutic session itself but also in longer-term diagnostic tasks. Therefore, there is a need for exploring several degrees of autonomy in social robots used in therapy. Increasing the autonomy of robots might also bring about a new set of challenges. In particular, there will be a need to answer new ethical questions regarding the use of robots with a vulnerable population, as well as a need to ensure ethically-compliant robot behaviours. Therefore, in this workshop we want to gather findings and explore which degree of autonomy might help to improve health-care interventions and how we can overcome the ethical challenges inherent to it.
2402.08006
Laura Santos
Laura Santos, Bernardo Carvalho, Catarina Barata, Jos\'e Santos-Victor
Extending 3D body pose estimation for robotic-assistive therapies of autistic children
null
null
null
null
cs.RO cs.CV cs.HC
http://creativecommons.org/licenses/by/4.0/
Robotic-assistive therapy has demonstrated very encouraging results for children with autism. Accurate estimation of the child's pose is essential both for human-robot interaction and for therapy assessment purposes. Non-intrusive methods are the sole viable option since these children are sensitive to touch. While depth cameras have been used extensively, existing methods face two major limitations: (i) they are usually trained with adult-only data and do not correctly estimate a child's pose, and (ii) they fail in scenarios with a high number of occlusions. Therefore, our goal was to develop a 3D pose estimator for children, by adapting an existing state-of-the-art 3D body modelling method and incorporating a linear regression model to fine-tune one of its inputs, thereby correcting the pose of children's 3D meshes. In controlled settings, our method has an error below $0.3m$, which is considered acceptable for this kind of application and lower than current state-of-the-art methods. In real-world settings, the proposed model performs similarly to a Kinect depth camera and manages to successfully estimate the 3D body poses in a much higher number of frames.
[ { "created": "Mon, 12 Feb 2024 19:11:03 GMT", "version": "v1" } ]
2024-02-14
[ [ "Santos", "Laura", "" ], [ "Carvalho", "Bernardo", "" ], [ "Barata", "Catarina", "" ], [ "Santos-Victor", "José", "" ] ]
Robotic-assistive therapy has demonstrated very encouraging results for children with autism. Accurate estimation of the child's pose is essential both for human-robot interaction and for therapy assessment purposes. Non-intrusive methods are the sole viable option since these children are sensitive to touch. While depth cameras have been used extensively, existing methods face two major limitations: (i) they are usually trained with adult-only data and do not correctly estimate a child's pose, and (ii) they fail in scenarios with a high number of occlusions. Therefore, our goal was to develop a 3D pose estimator for children, by adapting an existing state-of-the-art 3D body modelling method and incorporating a linear regression model to fine-tune one of its inputs, thereby correcting the pose of children's 3D meshes. In controlled settings, our method has an error below $0.3m$, which is considered acceptable for this kind of application and lower than current state-of-the-art methods. In real-world settings, the proposed model performs similarly to a Kinect depth camera and manages to successfully estimate the 3D body poses in a much higher number of frames.
2103.12021
Paria Rashidinejad
Paria Rashidinejad, Banghua Zhu, Cong Ma, Jiantao Jiao, Stuart Russell
Bridging Offline Reinforcement Learning and Imitation Learning: A Tale of Pessimism
null
Published at NeurIPS 2021 and IEEE Transactions on Information Theory
null
null
cs.LG cs.AI math.OC math.ST stat.ML stat.TH
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Offline (or batch) reinforcement learning (RL) algorithms seek to learn an optimal policy from a fixed dataset without active data collection. Based on the composition of the offline dataset, two main categories of methods are used: imitation learning, which is suitable for expert datasets, and vanilla offline RL, which often requires uniform coverage datasets. From a practical standpoint, datasets often deviate from these two extremes and the exact data composition is usually unknown a priori. To bridge this gap, we present a new offline RL framework that smoothly interpolates between the two extremes of data composition, hence unifying imitation learning and vanilla offline RL. The new framework is centered around a weak version of the concentrability coefficient that measures the deviation from the behavior policy to the expert policy alone. Under this new framework, we further investigate the question of algorithm design: can one develop an algorithm that achieves a minimax optimal rate and also adapts to unknown data composition? To address this question, we consider a lower confidence bound (LCB) algorithm developed based on pessimism in the face of uncertainty in offline RL. We study finite-sample properties of LCB as well as information-theoretic limits in multi-armed bandits, contextual bandits, and Markov decision processes (MDPs). Our analysis reveals surprising facts about optimality rates. In particular, in all three settings, LCB achieves a faster rate of $1/N$ for nearly-expert datasets compared to the usual rate of $1/\sqrt{N}$ in offline RL, where $N$ is the number of samples in the batch dataset. In the case of contextual bandits with at least two contexts, we prove that LCB is adaptively optimal for the entire data composition range, achieving a smooth transition from imitation learning to offline RL. We further show that LCB is almost adaptively optimal in MDPs.
[ { "created": "Mon, 22 Mar 2021 17:27:08 GMT", "version": "v1" }, { "created": "Mon, 3 Jul 2023 04:47:42 GMT", "version": "v2" } ]
2023-07-04
[ [ "Rashidinejad", "Paria", "" ], [ "Zhu", "Banghua", "" ], [ "Ma", "Cong", "" ], [ "Jiao", "Jiantao", "" ], [ "Russell", "Stuart", "" ] ]
Offline (or batch) reinforcement learning (RL) algorithms seek to learn an optimal policy from a fixed dataset without active data collection. Based on the composition of the offline dataset, two main categories of methods are used: imitation learning, which is suitable for expert datasets, and vanilla offline RL, which often requires uniform coverage datasets. From a practical standpoint, datasets often deviate from these two extremes and the exact data composition is usually unknown a priori. To bridge this gap, we present a new offline RL framework that smoothly interpolates between the two extremes of data composition, hence unifying imitation learning and vanilla offline RL. The new framework is centered around a weak version of the concentrability coefficient that measures the deviation from the behavior policy to the expert policy alone. Under this new framework, we further investigate the question of algorithm design: can one develop an algorithm that achieves a minimax optimal rate and also adapts to unknown data composition? To address this question, we consider a lower confidence bound (LCB) algorithm developed based on pessimism in the face of uncertainty in offline RL. We study finite-sample properties of LCB as well as information-theoretic limits in multi-armed bandits, contextual bandits, and Markov decision processes (MDPs). Our analysis reveals surprising facts about optimality rates. In particular, in all three settings, LCB achieves a faster rate of $1/N$ for nearly-expert datasets compared to the usual rate of $1/\sqrt{N}$ in offline RL, where $N$ is the number of samples in the batch dataset. In the case of contextual bandits with at least two contexts, we prove that LCB is adaptively optimal for the entire data composition range, achieving a smooth transition from imitation learning to offline RL. We further show that LCB is almost adaptively optimal in MDPs.
2109.02471
Bencheng Yan
Bencheng Yan, Pengjie Wang, Jinquan Liu, Wei Lin, Kuang-Chih Lee, Jian Xu and Bo Zheng
Binary Code based Hash Embedding for Web-scale Applications
CIKM 2021, 5 pages; The first two authors contributed equally to this work
null
null
null
cs.IR cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Nowadays, deep learning models are widely adopted in web-scale applications such as recommender systems and online advertising. In these applications, embedding learning of categorical features is crucial to the success of deep learning models. In these models, a standard method is that each categorical feature value is assigned a unique embedding vector which can be learned and optimized. Although this method can well capture the characteristics of the categorical features and promise good performance, it can incur a huge memory cost to store the embedding table, especially for web-scale applications. Such a huge memory cost significantly holds back the effectiveness and usability of embedding-based deep recommendation models (EDRMs). In this paper, we propose a binary code based hash embedding method which allows the size of the embedding table to be reduced by an arbitrary factor without compromising too much performance. Experimental evaluation results show that one can still achieve 99\% performance even if the embedding table size is reduced to 1000$\times$ smaller than the original one with our proposed method.
[ { "created": "Tue, 24 Aug 2021 11:51:15 GMT", "version": "v1" } ]
2021-09-07
[ [ "Yan", "Bencheng", "" ], [ "Wang", "Pengjie", "" ], [ "Liu", "Jinquan", "" ], [ "Lin", "Wei", "" ], [ "Lee", "Kuang-Chih", "" ], [ "Xu", "Jian", "" ], [ "Zheng", "Bo", "" ] ]
Nowadays, deep learning models are widely adopted in web-scale applications such as recommender systems and online advertising. In these applications, embedding learning of categorical features is crucial to the success of deep learning models. In these models, a standard method is that each categorical feature value is assigned a unique embedding vector which can be learned and optimized. Although this method can well capture the characteristics of the categorical features and promise good performance, it can incur a huge memory cost to store the embedding table, especially for web-scale applications. Such a huge memory cost significantly holds back the effectiveness and usability of embedding-based deep recommendation models (EDRMs). In this paper, we propose a binary code based hash embedding method which allows the size of the embedding table to be reduced by an arbitrary factor without compromising too much performance. Experimental evaluation results show that one can still achieve 99\% performance even if the embedding table size is reduced to 1000$\times$ smaller than the original one with our proposed method.
2306.07802
Abhishek Gupta
Abhishek Gupta and C.M. Markan
Empirical Measurement of Aesthetic Experience of Music
null
null
null
null
cs.HC q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Chills or goosebumps, also called frisson, are a phenomenon often associated with an aesthetic experience, e.g., music or some other ecstatic experience. The temporal and spatial cause of frisson in the brain has been one of the biggest mysteries of human nature. Accumulating evidence suggests that aesthetic, namely subjective, affective, and evaluative processes are at play while listening to music; hence, it is an important subjective stimulus for systematic investigation. Advances in neuroimaging and cognitive neuroscience have given impetus to neuro-aesthetics, a novel approach to music providing a phenomenological brain-based framework for the aesthetic experience of music, with the potential to open the scope for future research. In this paper, we present an affordable, wearable, easy-to-carry device to measure phenomenological goosebump intensity on the skin in real time using IoT devices (Raspberry Pi 3, Model B). To test the device, subjects were asked to provide a list of songs that elicit goosebumps. Wireless earphones were provided, allowing participants to walk around and dance while listening to their music. (Some subjects moved during sessions.) Results indicate that goosebumps were reliably detected by the device after visual inspection of the videos/music. The effective measurement, when interfaced with neurophysiological devices such as electroencephalography (EEG), can help interpret biomarkers of ecstatic emotions. The second part of the study focuses on identifying the primary brain regions involved in the goosebump experience during musical stimulation.
[ { "created": "Tue, 13 Jun 2023 14:24:26 GMT", "version": "v1" } ]
2023-06-14
[ [ "Gupta", "Abhishek", "" ], [ "Markan", "C. M.", "" ] ]
Chills or goosebumps, also called frisson, are a phenomenon often associated with an aesthetic experience, e.g., music or some other ecstatic experience. The temporal and spatial cause of frisson in the brain has been one of the biggest mysteries of human nature. Accumulating evidence suggests that aesthetic, namely subjective, affective, and evaluative processes are at play while listening to music; hence, it is an important subjective stimulus for systematic investigation. Advances in neuroimaging and cognitive neuroscience have given impetus to neuro-aesthetics, a novel approach to music providing a phenomenological brain-based framework for the aesthetic experience of music, with the potential to open the scope for future research. In this paper, we present an affordable, wearable, easy-to-carry device to measure phenomenological goosebump intensity on the skin in real time using IoT devices (Raspberry Pi 3, Model B). To test the device, subjects were asked to provide a list of songs that elicit goosebumps. Wireless earphones were provided, allowing participants to walk around and dance while listening to their music. (Some subjects moved during sessions.) Results indicate that goosebumps were reliably detected by the device after visual inspection of the videos/music. The effective measurement, when interfaced with neurophysiological devices such as electroencephalography (EEG), can help interpret biomarkers of ecstatic emotions. The second part of the study focuses on identifying the primary brain regions involved in the goosebump experience during musical stimulation.
2207.04507
Nicole Wein
Aaron Bernstein, Nicole Wein
Closing the Gap Between Directed Hopsets and Shortcut Sets
Abstract shortened to meet arXiv requirements, v2: fixed a typo, v3: implemented reviewer comments
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
For an n-vertex directed graph $G = (V,E)$, a $\beta$-\emph{shortcut set} $H$ is a set of additional edges $H \subseteq V \times V$ such that $G \cup H$ has the same transitive closure as $G$, and for every pair $u,v \in V$, there is a $uv$-path in $G \cup H$ with at most $\beta$ edges. A natural generalization of shortcut sets to distances is a $(\beta,\epsilon)$-\emph{hopset} $H \subseteq V \times V$, where the requirement is that $H$ and $G \cup H$ have the same shortest-path distances, and for every $u,v \in V$, there is a $(1+\epsilon)$-approximate shortest path in $G \cup H$ with at most $\beta$ edges. There is a large literature on the tradeoff between the size of a shortcut set / hopset and the value of $\beta$. We highlight the most natural point on this tradeoff: what is the minimum value of $\beta$, such that for any graph $G$, there exists a $\beta$-shortcut set (or a $(\beta,\epsilon)$-hopset) with $O(n)$ edges? Not only is this a natural structural question in its own right, but shortcut sets / hopsets form the core of many distributed, parallel, and dynamic algorithms for reachability / shortest paths. Until very recently the best known upper bound was a folklore construction showing $\beta = O(n^{1/2})$, but in a breakthrough result Kogan and Parter [SODA 2022] improve this to $\beta = \tilde{O}(n^{1/3})$ for shortcut sets and $\tilde{O}(n^{2/5})$ for hopsets. Our result is to close the gap between shortcut sets and hopsets. That is, we show that for any graph $G$ and any fixed $\epsilon$ there is a $(\tilde{O}(n^{1/3}),\epsilon)$ hopset with $O(n)$ edges. More generally, we achieve a smooth tradeoff between hopset size and $\beta$ which exactly matches the tradeoff of Kogan and Parter for shortcut sets (up to polylog factors). Using a very recent black-box reduction of Kogan and Parter, our new hopset implies improved bounds for approximate distance preservers.
[ { "created": "Sun, 10 Jul 2022 17:14:01 GMT", "version": "v1" }, { "created": "Tue, 19 Jul 2022 14:25:14 GMT", "version": "v2" }, { "created": "Mon, 31 Oct 2022 03:33:15 GMT", "version": "v3" }, { "created": "Mon, 18 Mar 2024 21:53:47 GMT", "version": "v4" } ]
2024-03-20
[ [ "Bernstein", "Aaron", "" ], [ "Wein", "Nicole", "" ] ]
For an n-vertex directed graph $G = (V,E)$, a $\beta$-\emph{shortcut set} $H$ is a set of additional edges $H \subseteq V \times V$ such that $G \cup H$ has the same transitive closure as $G$, and for every pair $u,v \in V$, there is a $uv$-path in $G \cup H$ with at most $\beta$ edges. A natural generalization of shortcut sets to distances is a $(\beta,\epsilon)$-\emph{hopset} $H \subseteq V \times V$, where the requirement is that $H$ and $G \cup H$ have the same shortest-path distances, and for every $u,v \in V$, there is a $(1+\epsilon)$-approximate shortest path in $G \cup H$ with at most $\beta$ edges. There is a large literature on the tradeoff between the size of a shortcut set / hopset and the value of $\beta$. We highlight the most natural point on this tradeoff: what is the minimum value of $\beta$, such that for any graph $G$, there exists a $\beta$-shortcut set (or a $(\beta,\epsilon)$-hopset) with $O(n)$ edges? Not only is this a natural structural question in its own right, but shortcut sets / hopsets form the core of many distributed, parallel, and dynamic algorithms for reachability / shortest paths. Until very recently the best known upper bound was a folklore construction showing $\beta = O(n^{1/2})$, but in a breakthrough result Kogan and Parter [SODA 2022] improved this to $\beta = \tilde{O}(n^{1/3})$ for shortcut sets and $\tilde{O}(n^{2/5})$ for hopsets. Our result is to close the gap between shortcut sets and hopsets. That is, we show that for any graph $G$ and any fixed $\epsilon$ there is a $(\tilde{O}(n^{1/3}),\epsilon)$-hopset with $O(n)$ edges. More generally, we achieve a smooth tradeoff between hopset size and $\beta$ which exactly matches the tradeoff of Kogan and Parter for shortcut sets (up to polylog factors). Using a very recent black-box reduction of Kogan and Parter, our new hopset implies improved bounds for approximate distance preservers.
2004.11940
Mattia Zeni Dr.
Mattia Zeni, Ivano Bison, Britta Gauckler, Fernando Reis, Fausto Giunchiglia
Improving time use measurement with personal big data collection -- the experience of the European Big Data Hackathon 2019
null
null
null
null
cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This article assesses the experience with i-Log at the European Big Data Hackathon 2019, a satellite event of the New Techniques and Technologies for Statistics (NTTS) conference, organised by Eurostat. i-Log is a system that allows the capture of personal big data from smartphones' internal sensors to be used for time use measurement. It allows the collection of heterogeneous types of data, enabling new possibilities for sociological urban field studies. Sensor data such as those related to the location or the movements of the user can be used to investigate and gain insights into the time diaries' answers and assess their overall quality. The key idea is that the users' answers are used to train machine-learning algorithms, allowing the system to learn from the user's habits and to generate new time diaries' answers. In turn, these new labels can be used to assess the quality of existing ones, or to fill the gaps when the user does not provide an answer. The aim of this paper is to introduce the pilot study, the i-Log system and the methodological evidence that arose during the survey.
[ { "created": "Fri, 24 Apr 2020 18:40:08 GMT", "version": "v1" } ]
2020-04-28
[ [ "Zeni", "Mattia", "" ], [ "Bison", "Ivano", "" ], [ "Gauckler", "Britta", "" ], [ "Reis", "Fernando", "" ], [ "Giunchiglia", "Fausto", "" ] ]
This article assesses the experience with i-Log at the European Big Data Hackathon 2019, a satellite event of the New Techniques and Technologies for Statistics (NTTS) conference, organised by Eurostat. i-Log is a system that allows the capture of personal big data from smartphones' internal sensors to be used for time use measurement. It allows the collection of heterogeneous types of data, enabling new possibilities for sociological urban field studies. Sensor data such as those related to the location or the movements of the user can be used to investigate and gain insights into the time diaries' answers and assess their overall quality. The key idea is that the users' answers are used to train machine-learning algorithms, allowing the system to learn from the user's habits and to generate new time diaries' answers. In turn, these new labels can be used to assess the quality of existing ones, or to fill the gaps when the user does not provide an answer. The aim of this paper is to introduce the pilot study, the i-Log system and the methodological evidence that arose during the survey.
1704.03373
Yu Liu
Yu Liu, Junjie Yan, Wanli Ouyang
Quality Aware Network for Set to Set Recognition
Accepted at CVPR 2017
null
null
null
cs.CV cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper targets the problem of set to set recognition, which learns the metric between two image sets. Images in each set belong to the same identity. Since images in a set can be complementary, they hopefully lead to higher accuracy in practical applications. However, the quality of each sample cannot be guaranteed, and samples with poor quality will hurt the metric. In this paper, the quality aware network (QAN) is proposed to confront this problem, where the quality of each sample can be automatically learned although such information is not explicitly provided in the training stage. The network has two branches, where the first branch extracts appearance feature embedding for each sample and the other branch predicts a quality score for each sample. Features and quality scores of all samples in a set are then aggregated to generate the final feature embedding. We show that the two branches can be trained in an end-to-end manner given only the set-level identity annotation. Analysis on gradient spread of this mechanism indicates that the quality learned by the network is beneficial to set-to-set recognition and simplifies the distribution that the network needs to fit. Experiments on both face verification and person re-identification show the advantages of the proposed QAN. The source code and network structure can be downloaded at https://github.com/sciencefans/Quality-Aware-Network.
[ { "created": "Tue, 11 Apr 2017 15:47:41 GMT", "version": "v1" } ]
2017-04-12
[ [ "Liu", "Yu", "" ], [ "Yan", "Junjie", "" ], [ "Ouyang", "Wanli", "" ] ]
This paper targets the problem of set to set recognition, which learns the metric between two image sets. Images in each set belong to the same identity. Since images in a set can be complementary, they hopefully lead to higher accuracy in practical applications. However, the quality of each sample cannot be guaranteed, and samples with poor quality will hurt the metric. In this paper, the quality aware network (QAN) is proposed to confront this problem, where the quality of each sample can be automatically learned although such information is not explicitly provided in the training stage. The network has two branches, where the first branch extracts appearance feature embedding for each sample and the other branch predicts a quality score for each sample. Features and quality scores of all samples in a set are then aggregated to generate the final feature embedding. We show that the two branches can be trained in an end-to-end manner given only the set-level identity annotation. Analysis on gradient spread of this mechanism indicates that the quality learned by the network is beneficial to set-to-set recognition and simplifies the distribution that the network needs to fit. Experiments on both face verification and person re-identification show the advantages of the proposed QAN. The source code and network structure can be downloaded at https://github.com/sciencefans/Quality-Aware-Network.
2010.11614
Jos\'e Mar\'ia Mart\'inez-Otzeta
Unai Zabala, Igor Rodriguez, Jos\'e Mar\'ia Mart\'inez-Otzeta, Itziar Irigoien, Elena Lazkano
Quantitative analysis of robot gesticulation behavior
null
Auton Robot (2021)
10.1007/s10514-020-09958-1
null
cs.RO cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Social robot capabilities, such as talking gestures, are best produced using data-driven approaches to avoid being repetitive and to show trustworthiness. However, there is a lack of robust quantitative methods that allow such methods to be compared beyond visual evaluation. In this paper a quantitative analysis is performed that compares two Generative Adversarial Network-based gesture generation approaches. The aim is to measure characteristics such as fidelity to the original training data, but at the same time keep track of the degree of originality of the produced gestures. Principal Coordinate Analysis and Procrustes statistics are performed and a new Fr\'echet Gesture Distance is proposed by adapting the Fr\'echet Inception Distance to gestures. These three techniques are taken together to assess the fidelity/originality of the generated gestures.
[ { "created": "Thu, 22 Oct 2020 11:17:18 GMT", "version": "v1" } ]
2021-01-06
[ [ "Zabala", "Unai", "" ], [ "Rodriguez", "Igor", "" ], [ "Martínez-Otzeta", "José María", "" ], [ "Irigoien", "Itziar", "" ], [ "Lazkano", "Elena", "" ] ]
Social robot capabilities, such as talking gestures, are best produced using data-driven approaches to avoid being repetitive and to show trustworthiness. However, there is a lack of robust quantitative methods that allow such methods to be compared beyond visual evaluation. In this paper a quantitative analysis is performed that compares two Generative Adversarial Network-based gesture generation approaches. The aim is to measure characteristics such as fidelity to the original training data, but at the same time keep track of the degree of originality of the produced gestures. Principal Coordinate Analysis and Procrustes statistics are performed and a new Fr\'echet Gesture Distance is proposed by adapting the Fr\'echet Inception Distance to gestures. These three techniques are taken together to assess the fidelity/originality of the generated gestures.
1703.09211
Dongdong Chen
Dongdong Chen and Jing Liao and Lu Yuan and Nenghai Yu and Gang Hua
Coherent Online Video Style Transfer
Corrected typos
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Training a feed-forward network for fast neural style transfer of images has proven to be successful. However, the naive extension to process video frame by frame is prone to producing flickering results. We propose the first end-to-end network for online video style transfer, which generates temporally coherent stylized video sequences in near real-time. Two key ideas include an efficient network by incorporating short-term coherence, and propagating short-term coherence to long-term, which ensures the consistency over longer periods of time. Our network can incorporate different image stylization networks. We show that the proposed method clearly outperforms the per-frame baseline both qualitatively and quantitatively. Moreover, it can achieve visually comparable coherence to optimization-based video style transfer, but is three orders of magnitude faster in runtime.
[ { "created": "Mon, 27 Mar 2017 17:52:55 GMT", "version": "v1" }, { "created": "Tue, 28 Mar 2017 09:04:15 GMT", "version": "v2" } ]
2017-03-29
[ [ "Chen", "Dongdong", "" ], [ "Liao", "Jing", "" ], [ "Yuan", "Lu", "" ], [ "Yu", "Nenghai", "" ], [ "Hua", "Gang", "" ] ]
Training a feed-forward network for fast neural style transfer of images has proven to be successful. However, the naive extension to process video frame by frame is prone to producing flickering results. We propose the first end-to-end network for online video style transfer, which generates temporally coherent stylized video sequences in near real-time. Two key ideas include an efficient network by incorporating short-term coherence, and propagating short-term coherence to long-term, which ensures the consistency over longer periods of time. Our network can incorporate different image stylization networks. We show that the proposed method clearly outperforms the per-frame baseline both qualitatively and quantitatively. Moreover, it can achieve visually comparable coherence to optimization-based video style transfer, but is three orders of magnitude faster in runtime.
1209.1750
Daniel Grier
Daniel Grier
Deciding the Winner of an Arbitrary Finite Poset Game is PSPACE-Complete
null
ICALP 2013, Part I, LNCS 7965, 2013, pp 497-503
10.1007/978-3-642-39206-1_42
null
cs.CC cs.GT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A poset game is a two-player game played over a partially ordered set (poset) in which the players alternate choosing an element of the poset, removing it and all elements greater than it. The first player unable to select an element of the poset loses. Polynomial time algorithms exist for certain restricted classes of poset games, such as the game of Nim. However, until recently the complexity of arbitrary finite poset games was only known to exist somewhere between NC^1 and PSPACE. We resolve this discrepancy by showing that deciding the winner of an arbitrary finite poset game is PSPACE-complete. To this end, we give an explicit reduction from Node Kayles, a PSPACE-complete game in which players vie to choose an independent set in a graph.
[ { "created": "Sat, 8 Sep 2012 20:19:26 GMT", "version": "v1" }, { "created": "Mon, 18 Mar 2013 00:17:47 GMT", "version": "v2" } ]
2015-03-20
[ [ "Grier", "Daniel", "" ] ]
A poset game is a two-player game played over a partially ordered set (poset) in which the players alternate choosing an element of the poset, removing it and all elements greater than it. The first player unable to select an element of the poset loses. Polynomial time algorithms exist for certain restricted classes of poset games, such as the game of Nim. However, until recently the complexity of arbitrary finite poset games was only known to exist somewhere between NC^1 and PSPACE. We resolve this discrepancy by showing that deciding the winner of an arbitrary finite poset game is PSPACE-complete. To this end, we give an explicit reduction from Node Kayles, a PSPACE-complete game in which players vie to choose an independent set in a graph.
2403.05406
Muyao Wang
Muyao Wang, Wenchao Chen, Bo Chen
Considering Nonstationary within Multivariate Time Series with Variational Hierarchical Transformer for Forecasting
accepted by AAAI2024
null
null
null
cs.LG cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The forecasting of Multivariate Time Series (MTS) has long been an important but challenging task. Due to the non-stationary problem across long-distance time steps, previous studies primarily adopt stationarization methods to attenuate the non-stationary problem of the original series for better predictability. However, existing methods always adopt the stationarized series, which ignores the inherent non-stationarity, and has difficulty in modeling MTS with complex distributions due to the lack of stochasticity. To tackle these problems, we first develop a powerful hierarchical probabilistic generative module to consider the non-stationarity and stochastic characteristics within MTS, and then combine it with a transformer for a well-defined variational generative dynamic model named Hierarchical Time series Variational Transformer (HTV-Trans), which recovers the intrinsic non-stationary information into temporal dependencies. Being a powerful probabilistic model, HTV-Trans is utilized to learn expressive representations of MTS and applied to forecasting tasks. Extensive experiments on diverse datasets show the efficiency of HTV-Trans on MTS forecasting tasks.
[ { "created": "Fri, 8 Mar 2024 16:04:36 GMT", "version": "v1" } ]
2024-03-11
[ [ "Wang", "Muyao", "" ], [ "Chen", "Wenchao", "" ], [ "Chen", "Bo", "" ] ]
The forecasting of Multivariate Time Series (MTS) has long been an important but challenging task. Due to the non-stationary problem across long-distance time steps, previous studies primarily adopt stationarization methods to attenuate the non-stationary problem of the original series for better predictability. However, existing methods always adopt the stationarized series, which ignores the inherent non-stationarity, and has difficulty in modeling MTS with complex distributions due to the lack of stochasticity. To tackle these problems, we first develop a powerful hierarchical probabilistic generative module to consider the non-stationarity and stochastic characteristics within MTS, and then combine it with a transformer for a well-defined variational generative dynamic model named Hierarchical Time series Variational Transformer (HTV-Trans), which recovers the intrinsic non-stationary information into temporal dependencies. Being a powerful probabilistic model, HTV-Trans is utilized to learn expressive representations of MTS and applied to forecasting tasks. Extensive experiments on diverse datasets show the efficiency of HTV-Trans on MTS forecasting tasks.
2102.07436
Anthony Bellotti
Anthony Bellotti
Approximation to Object Conditional Validity with Inductive Conformal Predictors
23 pages, 1 figure
null
null
null
cs.LG
http://creativecommons.org/licenses/by-nc-nd/4.0/
Conformal predictors are machine learning algorithms that output prediction sets that have a guarantee of marginal validity for finite samples with minimal distributional assumptions. This is a property that makes conformal predictors useful for machine learning tasks where we require reliable predictions. It would also be desirable to achieve conditional validity in the same setting, in the sense that the prediction intervals remain valid regardless of conditioning on any particular property of the object of the prediction. Unfortunately, it has been shown that such conditional validity is impossible to guarantee for non-trivial prediction problems for finite samples. In this article, instead of trying to achieve a strong conditional validity result, the weaker goal of achieving an approximation to conditional validity is considered. A new algorithm is introduced to do this by iteratively adjusting a conformity measure to deviations from object conditional validity measured in the training data. Along with some theoretical results, experimental results are provided for three data sets that demonstrate (1) that in real-world machine learning tasks, lack of conditional validity is a measurable problem and (2) that the proposed algorithm is effective at alleviating this problem.
[ { "created": "Mon, 15 Feb 2021 10:14:44 GMT", "version": "v1" }, { "created": "Tue, 2 Mar 2021 00:05:58 GMT", "version": "v2" } ]
2021-03-03
[ [ "Bellotti", "Anthony", "" ] ]
Conformal predictors are machine learning algorithms that output prediction sets that have a guarantee of marginal validity for finite samples with minimal distributional assumptions. This is a property that makes conformal predictors useful for machine learning tasks where we require reliable predictions. It would also be desirable to achieve conditional validity in the same setting, in the sense that the prediction intervals remain valid regardless of conditioning on any particular property of the object of the prediction. Unfortunately, it has been shown that such conditional validity is impossible to guarantee for non-trivial prediction problems for finite samples. In this article, instead of trying to achieve a strong conditional validity result, the weaker goal of achieving an approximation to conditional validity is considered. A new algorithm is introduced to do this by iteratively adjusting a conformity measure to deviations from object conditional validity measured in the training data. Along with some theoretical results, experimental results are provided for three data sets that demonstrate (1) that in real-world machine learning tasks, lack of conditional validity is a measurable problem and (2) that the proposed algorithm is effective at alleviating this problem.
2301.13748
Sebastian Mair
Sebastian Mair and Jens Sj\"olund
Archetypal Analysis++: Rethinking the Initialization Strategy
27 pages, 17 figures, accepted at the Transactions on Machine Learning Research
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Archetypal analysis is a matrix factorization method with convexity constraints. Due to local minima, a good initialization is essential, but frequently used initialization methods yield either sub-optimal starting points or are prone to get stuck in poor local minima. In this paper, we propose archetypal analysis++ (AA++), a probabilistic initialization strategy for archetypal analysis that sequentially samples points based on their influence on the objective function, similar to $k$-means++. In fact, we argue that $k$-means++ already approximates the proposed initialization method. Furthermore, we suggest adapting an efficient Monte Carlo approximation of $k$-means++ to AA++. In an extensive empirical evaluation of 15 real-world data sets of varying sizes and dimensionalities and considering two pre-processing strategies, we show that AA++ almost always outperforms all baselines, including the most frequently used ones.
[ { "created": "Tue, 31 Jan 2023 16:33:22 GMT", "version": "v1" }, { "created": "Thu, 25 May 2023 15:29:56 GMT", "version": "v2" }, { "created": "Thu, 22 Feb 2024 22:13:29 GMT", "version": "v3" }, { "created": "Mon, 13 May 2024 14:06:52 GMT", "version": "v4" } ]
2024-05-14
[ [ "Mair", "Sebastian", "" ], [ "Sjölund", "Jens", "" ] ]
Archetypal analysis is a matrix factorization method with convexity constraints. Due to local minima, a good initialization is essential, but frequently used initialization methods yield either sub-optimal starting points or are prone to get stuck in poor local minima. In this paper, we propose archetypal analysis++ (AA++), a probabilistic initialization strategy for archetypal analysis that sequentially samples points based on their influence on the objective function, similar to $k$-means++. In fact, we argue that $k$-means++ already approximates the proposed initialization method. Furthermore, we suggest adapting an efficient Monte Carlo approximation of $k$-means++ to AA++. In an extensive empirical evaluation of 15 real-world data sets of varying sizes and dimensionalities and considering two pre-processing strategies, we show that AA++ almost always outperforms all baselines, including the most frequently used ones.
0801.1210
Daniel Lombra\~na Gonz\'alez
Daniel Lombrana Gonzalez, Francisco Fernandez de Vega, L. Trujillo, G. Olague, F. Chavez de la O, M. Cardenas, L. Araujo, P. Castillo, K. Sharman
Increasing GP Computing Power via Volunteer Computing
First draft, preparing for PPSN 2008
null
null
null
cs.DC
null
This paper describes how it is possible to increase GP computing power via Volunteer Computing (VC) using the BOINC framework. Two experiments using well-known GP tools -Lil-gp & ECJ- are performed in order to demonstrate the benefit of using VC in terms of computing power and speed-up. Finally we present an extension of the model where any GP tool or framework can be used inside BOINC regardless of its programming language, complexity or required operating system.
[ { "created": "Tue, 8 Jan 2008 11:36:35 GMT", "version": "v1" } ]
2008-01-09
[ [ "Gonzalez", "Daniel Lombrana", "" ], [ "de Vega", "Francisco Fernandez", "" ], [ "Trujillo", "L.", "" ], [ "Olague", "G.", "" ], [ "de la O", "F. Chavez", "" ], [ "Cardenas", "M.", "" ], [ "Araujo", "L.", "" ], [ "Castillo", "P.", "" ], [ "Sharman", "K.", "" ] ]
This paper describes how it is possible to increase GP computing power via Volunteer Computing (VC) using the BOINC framework. Two experiments using well-known GP tools -Lil-gp & ECJ- are performed in order to demonstrate the benefit of using VC in terms of computing power and speed-up. Finally we present an extension of the model where any GP tool or framework can be used inside BOINC regardless of its programming language, complexity or required operating system.
2403.11367
Peng Jiang
Peng Jiang, Gaurav Pandey and Srikanth Saripalli
3DGS-ReLoc: 3D Gaussian Splatting for Map Representation and Visual ReLocalization
8 pages, 7 figures
null
null
null
cs.CV cs.GR cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents a novel system designed for 3D mapping and visual relocalization using 3D Gaussian Splatting. Our proposed method uses LiDAR and camera data to create accurate and visually plausible representations of the environment. By leveraging LiDAR data to initiate the training of the 3D Gaussian Splatting map, our system constructs maps that are both detailed and geometrically accurate. To mitigate excessive GPU memory usage and facilitate rapid spatial queries, we employ a combination of a 2D voxel map and a KD-tree. This preparation makes our method well-suited for visual localization tasks, enabling efficient identification of correspondences between the query image and the rendered image from the Gaussian Splatting map via normalized cross-correlation (NCC). Additionally, we refine the camera pose of the query image using feature-based matching and the Perspective-n-Point (PnP) technique. The effectiveness, adaptability, and precision of our system are demonstrated through extensive evaluation on the KITTI360 dataset.
[ { "created": "Sun, 17 Mar 2024 23:06:12 GMT", "version": "v1" } ]
2024-03-19
[ [ "Jiang", "Peng", "" ], [ "Pandey", "Gaurav", "" ], [ "Saripalli", "Srikanth", "" ] ]
This paper presents a novel system designed for 3D mapping and visual relocalization using 3D Gaussian Splatting. Our proposed method uses LiDAR and camera data to create accurate and visually plausible representations of the environment. By leveraging LiDAR data to initiate the training of the 3D Gaussian Splatting map, our system constructs maps that are both detailed and geometrically accurate. To mitigate excessive GPU memory usage and facilitate rapid spatial queries, we employ a combination of a 2D voxel map and a KD-tree. This preparation makes our method well-suited for visual localization tasks, enabling efficient identification of correspondences between the query image and the rendered image from the Gaussian Splatting map via normalized cross-correlation (NCC). Additionally, we refine the camera pose of the query image using feature-based matching and the Perspective-n-Point (PnP) technique. The effectiveness, adaptability, and precision of our system are demonstrated through extensive evaluation on the KITTI360 dataset.
1207.1395
Vladimir Kolmogorov
Vladimir Kolmogorov, Martin Wainwright
On the optimality of tree-reweighted max-product message-passing
Appears in Proceedings of the Twenty-First Conference on Uncertainty in Artificial Intelligence (UAI2005)
null
null
UAI-P-2005-PG-316-323
cs.AI cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Tree-reweighted max-product (TRW) message passing is a modified form of the ordinary max-product algorithm for attempting to find minimal energy configurations in Markov random fields with cycles. For a TRW fixed point satisfying the strong tree agreement condition, the algorithm outputs a configuration that is provably optimal. In this paper, we focus on the case of binary variables with pairwise couplings, and establish stronger properties of TRW fixed points that satisfy only the milder condition of weak tree agreement (WTA). First, we demonstrate how it is possible to identify part of the optimal solution, i.e., a provably optimal solution for a subset of nodes, without knowing a complete solution. Second, we show that for submodular functions, a WTA fixed point always yields a globally optimal solution. We establish that for binary variables, any WTA fixed point always achieves the global maximum of the linear programming relaxation underlying the TRW method.
[ { "created": "Wed, 4 Jul 2012 16:16:43 GMT", "version": "v1" } ]
2012-07-09
[ [ "Kolmogorov", "Vladimir", "" ], [ "Wainwright", "Martin", "" ] ]
Tree-reweighted max-product (TRW) message passing is a modified form of the ordinary max-product algorithm for attempting to find minimal energy configurations in Markov random fields with cycles. For a TRW fixed point satisfying the strong tree agreement condition, the algorithm outputs a configuration that is provably optimal. In this paper, we focus on the case of binary variables with pairwise couplings, and establish stronger properties of TRW fixed points that satisfy only the milder condition of weak tree agreement (WTA). First, we demonstrate how it is possible to identify part of the optimal solution, i.e., a provably optimal solution for a subset of nodes, without knowing a complete solution. Second, we show that for submodular functions, a WTA fixed point always yields a globally optimal solution. We establish that for binary variables, any WTA fixed point always achieves the global maximum of the linear programming relaxation underlying the TRW method.
2309.03326
Rehana Mahfuz
Rehana Mahfuz, Yinyi Guo, Arvind Krishna Sridhar, Erik Visser
Detecting False Alarms and Misses in Audio Captions
null
null
null
null
cs.MM
http://creativecommons.org/licenses/by/4.0/
Metrics to evaluate audio captions simply provide a score without much explanation regarding what may be wrong in case the score is low. Manual human intervention is needed to find any shortcomings of the caption. In this work, we introduce a metric which automatically identifies the shortcomings of an audio caption by detecting the misses and false alarms in a candidate caption with respect to a reference caption, and reports the recall, precision and F-score. Such a metric is very useful in profiling the deficiencies of an audio captioning model, which is a milestone towards improving the quality of audio captions.
[ { "created": "Wed, 6 Sep 2023 19:17:46 GMT", "version": "v1" } ]
2023-09-08
[ [ "Mahfuz", "Rehana", "" ], [ "Guo", "Yinyi", "" ], [ "Sridhar", "Arvind Krishna", "" ], [ "Visser", "Erik", "" ] ]
Metrics to evaluate audio captions simply provide a score without much explanation regarding what may be wrong in case the score is low. Manual human intervention is needed to find any shortcomings of the caption. In this work, we introduce a metric which automatically identifies the shortcomings of an audio caption by detecting the misses and false alarms in a candidate caption with respect to a reference caption, and reports the recall, precision and F-score. Such a metric is very useful in profiling the deficiencies of an audio captioning model, which is a milestone towards improving the quality of audio captions.
cs/0603098
Siddharth Ray
Siddharth Ray, Muriel Medard and Lizhong Zheng
A SIMO Fiber Aided Wireless Network Architecture
null
null
null
null
cs.IT math.IT
null
The concept of a fiber aided wireless network architecture (FAWNA) is introduced in [Ray et al., Allerton Conference 2005], which allows high-speed mobile connectivity by leveraging the speed of optical networks. In this paper, we consider a single-input, multiple-output (SIMO) FAWNA, which consists of a SIMO wireless channel and an optical fiber channel, connected through wireless-optical interfaces. We propose a scheme where the received wireless signal at each interface is quantized and sent over the fiber. Though our architecture is similar to that of the classical CEO problem, our problem is different from it. We show that the capacity of our scheme approaches the capacity of the architecture, exponentially with fiber capacity. We also show that for a given fiber capacity, there is an optimal operating wireless bandwidth and an optimal number of wireless-optical interfaces. The wireless-optical interfaces of our scheme have low complexity and do not require knowledge of the transmitter code book. They are also extendable to FAWNAs with a large number of transmitters and interfaces, and offer adaptability to variable rates, changing channel conditions and node positions.
[ { "created": "Sat, 25 Mar 2006 00:46:44 GMT", "version": "v1" } ]
2007-07-13
[ [ "Ray", "Siddharth", "" ], [ "Medard", "Muriel", "" ], [ "Zheng", "Lizhong", "" ] ]
The concept of a fiber aided wireless network architecture (FAWNA) is introduced in [Ray et al., Allerton Conference 2005], which allows high-speed mobile connectivity by leveraging the speed of optical networks. In this paper, we consider a single-input, multiple-output (SIMO) FAWNA, which consists of a SIMO wireless channel and an optical fiber channel, connected through wireless-optical interfaces. We propose a scheme where the received wireless signal at each interface is quantized and sent over the fiber. Though our architecture is similar to that of the classical CEO problem, our problem is different from it. We show that the capacity of our scheme approaches the capacity of the architecture, exponentially with fiber capacity. We also show that for a given fiber capacity, there is an optimal operating wireless bandwidth and an optimal number of wireless-optical interfaces. The wireless-optical interfaces of our scheme have low complexity and do not require knowledge of the transmitter code book. They are also extendable to FAWNAs with a large number of transmitters and interfaces, and offer adaptability to variable rates, changing channel conditions and node positions.
2305.10985
Elisa Bassignana
Elisa Bassignana, Filip Ginter, Sampo Pyysalo, Rob van der Goot, and Barbara Plank
Multi-CrossRE A Multi-Lingual Multi-Domain Dataset for Relation Extraction
Accepted at NoDaLiDa 2023
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Most research in Relation Extraction (RE) involves the English language, mainly due to the lack of multi-lingual resources. We propose Multi-CrossRE, the broadest multi-lingual dataset for RE, including 26 languages in addition to English, and covering six text domains. Multi-CrossRE is a machine-translated version of CrossRE (Bassignana and Plank, 2022), with a sub-portion including more than 200 sentences in seven diverse languages checked by native speakers. We run a baseline model over the 26 new datasets and--as a sanity check--over the 26 back-translations to English. Results on the back-translated data are consistent with the ones on the original English CrossRE, indicating high quality of the translation and the resulting dataset.
[ { "created": "Thu, 18 May 2023 14:01:33 GMT", "version": "v1" } ]
2023-05-19
[ [ "Bassignana", "Elisa", "" ], [ "Ginter", "Filip", "" ], [ "Pyysalo", "Sampo", "" ], [ "van der Goot", "Rob", "" ], [ "Plank", "Barbara", "" ] ]
Most research in Relation Extraction (RE) involves the English language, mainly due to the lack of multi-lingual resources. We propose Multi-CrossRE, the broadest multi-lingual dataset for RE, including 26 languages in addition to English, and covering six text domains. Multi-CrossRE is a machine-translated version of CrossRE (Bassignana and Plank, 2022), with a sub-portion including more than 200 sentences in seven diverse languages checked by native speakers. We run a baseline model over the 26 new datasets and--as a sanity check--over the 26 back-translations to English. Results on the back-translated data are consistent with the ones on the original English CrossRE, indicating high quality of the translation and the resulting dataset.
2312.03540
Chi-En Tai
Olivia Markham and Yuhao Chen and Chi-en Amy Tai and Alexander Wong
FoodFusion: A Latent Diffusion Model for Realistic Food Image Generation
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Current state-of-the-art image generation models such as Latent Diffusion Models (LDMs) have demonstrated the capacity to produce visually striking food-related images. However, these generated images often exhibit an artistic or surreal quality that diverges from the authenticity of real-world food representations. This inadequacy renders them impractical for applications requiring realistic food imagery, such as training models for image-based dietary assessment. To address these limitations, we introduce FoodFusion, a Latent Diffusion model engineered specifically for the faithful synthesis of realistic food images from textual descriptions. The development of the FoodFusion model involves harnessing an extensive array of open-source food datasets, resulting in over 300,000 curated image-caption pairs. Additionally, we propose and employ two distinct data cleaning methodologies to ensure that the resulting image-text pairs maintain both realism and accuracy. The FoodFusion model, thus trained, demonstrates a remarkable ability to generate food images that exhibit a significant improvement in terms of both realism and diversity over the publicly available image generation models. We openly share the dataset and fine-tuned models to support advancements in this critical field of food image synthesis at https://bit.ly/genai4good.
[ { "created": "Wed, 6 Dec 2023 15:07:12 GMT", "version": "v1" } ]
2023-12-07
[ [ "Markham", "Olivia", "" ], [ "Chen", "Yuhao", "" ], [ "Tai", "Chi-en Amy", "" ], [ "Wong", "Alexander", "" ] ]
Current state-of-the-art image generation models such as Latent Diffusion Models (LDMs) have demonstrated the capacity to produce visually striking food-related images. However, these generated images often exhibit an artistic or surreal quality that diverges from the authenticity of real-world food representations. This inadequacy renders them impractical for applications requiring realistic food imagery, such as training models for image-based dietary assessment. To address these limitations, we introduce FoodFusion, a Latent Diffusion model engineered specifically for the faithful synthesis of realistic food images from textual descriptions. The development of the FoodFusion model involves harnessing an extensive array of open-source food datasets, resulting in over 300,000 curated image-caption pairs. Additionally, we propose and employ two distinct data cleaning methodologies to ensure that the resulting image-text pairs maintain both realism and accuracy. The FoodFusion model, thus trained, demonstrates a remarkable ability to generate food images that exhibit a significant improvement in terms of both realism and diversity over the publicly available image generation models. We openly share the dataset and fine-tuned models to support advancements in this critical field of food image synthesis at https://bit.ly/genai4good.
2403.10569
Hung Cao
Atah Nuh Mih, Alireza Rahimi, Asfia Kawnine, Francis Palma, Monica Wachowicz, Rickey Dubay, Hung Cao
Achieving Pareto Optimality using Efficient Parameter Reduction for DNNs in Resource-Constrained Edge Environment
arXiv admin note: text overlap with arXiv:2401.05355
null
null
null
cs.LG cs.AI cs.CV
http://creativecommons.org/licenses/by-nc-nd/4.0/
This paper proposes an optimization of an existing Deep Neural Network (DNN) that improves its hardware utilization and facilitates on-device training for resource-constrained edge environments. We implement efficient parameter reduction strategies on Xception that shrink the model size without sacrificing accuracy, thus decreasing memory utilization during training. We evaluate our model in two experiments: Caltech-101 image classification and PCB defect detection and compare its performance against the original Xception and lightweight models, EfficientNetV2B1 and MobileNetV2. The results of the Caltech-101 image classification show that our model has a better test accuracy (76.21%) than Xception (75.89%), uses less memory on average (847.9MB) than Xception (874.6MB), and has faster training and inference times. The lightweight models overfit with EfficientNetV2B1 having a 30.52% test accuracy and MobileNetV2 having a 58.11% test accuracy. Both lightweight models have better memory usage than our model and Xception. On the PCB defect detection, our model has the best test accuracy (90.30%), compared to Xception (88.10%), EfficientNetV2B1 (55.25%), and MobileNetV2 (50.50%). MobileNetV2 has the least average memory usage (849.4MB), followed by our model (865.8MB), then EfficientNetV2B1 (874.8MB), and Xception has the highest (893.6MB). We further experiment with pre-trained weights and observe that memory usage decreases thereby showing the benefits of transfer learning. A Pareto analysis of the models' performance shows that our optimized model architecture satisfies accuracy and low memory utilization objectives.
[ { "created": "Thu, 14 Mar 2024 19:40:58 GMT", "version": "v1" } ]
2024-03-19
[ [ "Mih", "Atah Nuh", "" ], [ "Rahimi", "Alireza", "" ], [ "Kawnine", "Asfia", "" ], [ "Palma", "Francis", "" ], [ "Wachowicz", "Monica", "" ], [ "Dubay", "Rickey", "" ], [ "Cao", "Hung", "" ] ]
This paper proposes an optimization of an existing Deep Neural Network (DNN) that improves its hardware utilization and facilitates on-device training for resource-constrained edge environments. We implement efficient parameter reduction strategies on Xception that shrink the model size without sacrificing accuracy, thus decreasing memory utilization during training. We evaluate our model in two experiments: Caltech-101 image classification and PCB defect detection and compare its performance against the original Xception and lightweight models, EfficientNetV2B1 and MobileNetV2. The results of the Caltech-101 image classification show that our model has a better test accuracy (76.21%) than Xception (75.89%), uses less memory on average (847.9MB) than Xception (874.6MB), and has faster training and inference times. The lightweight models overfit with EfficientNetV2B1 having a 30.52% test accuracy and MobileNetV2 having a 58.11% test accuracy. Both lightweight models have better memory usage than our model and Xception. On the PCB defect detection, our model has the best test accuracy (90.30%), compared to Xception (88.10%), EfficientNetV2B1 (55.25%), and MobileNetV2 (50.50%). MobileNetV2 has the least average memory usage (849.4MB), followed by our model (865.8MB), then EfficientNetV2B1 (874.8MB), and Xception has the highest (893.6MB). We further experiment with pre-trained weights and observe that memory usage decreases thereby showing the benefits of transfer learning. A Pareto analysis of the models' performance shows that our optimized model architecture satisfies accuracy and low memory utilization objectives.
2207.00581
Mohamad Rida Rammal
Mohamad Rida Rammal, Alessandro Achille, Aditya Golatkar, Suhas Diggavi, Stefano Soatto
On Leave-One-Out Conditional Mutual Information For Generalization
null
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
We derive information theoretic generalization bounds for supervised learning algorithms based on a new measure of leave-one-out conditional mutual information (loo-CMI). Contrary to other CMI bounds, which are black-box bounds that do not exploit the structure of the problem and may be hard to evaluate in practice, our loo-CMI bounds can be computed easily and can be interpreted in connection to other notions such as classical leave-one-out cross-validation, stability of the optimization algorithm, and the geometry of the loss-landscape. They apply both to the outputs of training algorithms and to their predictions. We empirically validate the quality of the bound by evaluating its predicted generalization gap in scenarios for deep learning. In particular, our bounds are non-vacuous on large-scale image-classification tasks.
[ { "created": "Fri, 1 Jul 2022 17:58:29 GMT", "version": "v1" } ]
2022-07-04
[ [ "Rammal", "Mohamad Rida", "" ], [ "Achille", "Alessandro", "" ], [ "Golatkar", "Aditya", "" ], [ "Diggavi", "Suhas", "" ], [ "Soatto", "Stefano", "" ] ]
We derive information theoretic generalization bounds for supervised learning algorithms based on a new measure of leave-one-out conditional mutual information (loo-CMI). Contrary to other CMI bounds, which are black-box bounds that do not exploit the structure of the problem and may be hard to evaluate in practice, our loo-CMI bounds can be computed easily and can be interpreted in connection to other notions such as classical leave-one-out cross-validation, stability of the optimization algorithm, and the geometry of the loss-landscape. They apply both to the outputs of training algorithms and to their predictions. We empirically validate the quality of the bound by evaluating its predicted generalization gap in scenarios for deep learning. In particular, our bounds are non-vacuous on large-scale image-classification tasks.
1804.01922
Fabian Steiner
Fabian Steiner, Georg B\"ocherer, Patrick Schulte
Approaching Waterfilling Capacity of Parallel Channels by Higher Order Modulation and Probabilistic Amplitude Shaping
Invited paper at CISS 2018, special session "Optimal Communications with Discrete Input Signals"
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Parallel, additive white Gaussian noise (AWGN) channels with an average sum power constraint are considered. It is shown how the waterfilling Shannon capacity can be approached by higher order modulation and probabilistic amplitude shaping (PAS). This is achieved by a new distribution matching approach called product distribution matching (PDM). The asymptotic performance of PDM is analyzed by achievable rates. A heuristic for optimizing the input distribution is proposed, which enables signaling at a target spectral efficiency with a fixed-rate forward error correction (FEC) code, while the optimal power allocation is ensured by mercury-waterfilling and a simple bit-loading strategy. Finite blocklength simulation results with 5G low-density parity-check codes show power savings of around 1 dB compared to a conventional scheme with uniform input distributions.
[ { "created": "Thu, 5 Apr 2018 15:53:34 GMT", "version": "v1" } ]
2018-04-06
[ [ "Steiner", "Fabian", "" ], [ "Böcherer", "Georg", "" ], [ "Schulte", "Patrick", "" ] ]
Parallel, additive white Gaussian noise (AWGN) channels with an average sum power constraint are considered. It is shown how the waterfilling Shannon capacity can be approached by higher order modulation and probabilistic amplitude shaping (PAS). This is achieved by a new distribution matching approach called product distribution matching (PDM). The asymptotic performance of PDM is analyzed by achievable rates. A heuristic for optimizing the input distribution is proposed, which enables signaling at a target spectral efficiency with a fixed-rate forward error correction (FEC) code, while the optimal power allocation is ensured by mercury-waterfilling and a simple bit-loading strategy. Finite blocklength simulation results with 5G low-density parity-check codes show power savings of around 1 dB compared to a conventional scheme with uniform input distributions.
1812.03444
Muhammad Marwan Muhammad Fuad
Muhammad Marwan Muhammad Fuad
Applying Nature-Inspired Optimization Algorithms for Selecting Important Timestamps to Reduce Time Series Dimensionality
13 pages, Evolving Systems (2017). https://link.springer.com/article/10.1007/s12530-017-9207-7#citeas
null
10.1007/s12530-017-9207-7
null
cs.NE cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Time series data account for a major part of the data supply available today. Time series mining handles several tasks such as classification, clustering, query-by-content, prediction, and others. Performing data mining tasks on raw time series is inefficient as these data are high-dimensional by nature. Instead, time series are first pre-processed using several techniques before different data mining tasks can be performed on them. In general, there are two main approaches to reducing time series dimensionality. The first is what we call landmark methods. These methods are based on finding characteristic features in the target time series. The second is based on data transformations. These methods transform the time series from the original space into a reduced space, where they can be managed more efficiently. The method we present in this paper applies a third approach, as it projects a time series onto a lower-dimensional space by selecting important points in the time series. The novelty of our method is that these points are not chosen according to a geometric criterion, which is subjective in most cases, but through an optimization process. The other important characteristic of our method is that these important points are selected at the dataset level and not at the level of a single time series. The direct advantage of this strategy is that the distance defined on the low-dimensional space lower bounds the original distance applied to raw data. This enables us to apply the popular GEMINI algorithm. The promising results of our experiments on a wide variety of time series datasets, using different optimizers, and applied to the two major data mining tasks, validate our new method.
[ { "created": "Sun, 9 Dec 2018 08:26:35 GMT", "version": "v1" } ]
2018-12-11
[ [ "Fuad", "Muhammad Marwan Muhammad", "" ] ]
Time series data account for a major part of the data supply available today. Time series mining handles several tasks such as classification, clustering, query-by-content, prediction, and others. Performing data mining tasks on raw time series is inefficient as these data are high-dimensional by nature. Instead, time series are first pre-processed using several techniques before different data mining tasks can be performed on them. In general, there are two main approaches to reducing time series dimensionality. The first is what we call landmark methods. These methods are based on finding characteristic features in the target time series. The second is based on data transformations. These methods transform the time series from the original space into a reduced space, where they can be managed more efficiently. The method we present in this paper applies a third approach, as it projects a time series onto a lower-dimensional space by selecting important points in the time series. The novelty of our method is that these points are not chosen according to a geometric criterion, which is subjective in most cases, but through an optimization process. The other important characteristic of our method is that these important points are selected at the dataset level and not at the level of a single time series. The direct advantage of this strategy is that the distance defined on the low-dimensional space lower bounds the original distance applied to raw data. This enables us to apply the popular GEMINI algorithm. The promising results of our experiments on a wide variety of time series datasets, using different optimizers, and applied to the two major data mining tasks, validate our new method.
0708.0712
Stephanie Gerbaud
St\'ephanie Gerbaud (IRISA), Nicolas Mollet (IRISA), Bruno Arnaldi (IRISA)
Virtual Environments for Training: From Individual Learning to Collaboration with Humanoids
null
Dans Edutainment (2007)
null
null
cs.GR
null
The next generation of virtual environments for training is oriented towards collaborative aspects. Therefore, we have decided to enhance our platform for virtual training environments, adding collaboration opportunities and integrating humanoids. In this paper we put forward a model of humanoid that suits both virtual humans and representations of real users, according to collaborative training activities. We suggest adaptations to the scenario model of our platform making it possible to write collaborative procedures. We introduce a mechanism of action selection made up of a global repartition and an individual choice. These models are currently being integrated and validated in GVT, a virtual training tool for maintenance of military equipment, developed in collaboration with the French company NEXTER-Group.
[ { "created": "Mon, 6 Aug 2007 07:42:56 GMT", "version": "v1" } ]
2007-08-07
[ [ "Gerbaud", "Stéphanie", "", "IRISA" ], [ "Mollet", "Nicolas", "", "IRISA" ], [ "Arnaldi", "Bruno", "", "IRISA" ] ]
The next generation of virtual environments for training is oriented towards collaborative aspects. Therefore, we have decided to enhance our platform for virtual training environments, adding collaboration opportunities and integrating humanoids. In this paper we put forward a model of humanoid that suits both virtual humans and representations of real users, according to collaborative training activities. We suggest adaptations to the scenario model of our platform making it possible to write collaborative procedures. We introduce a mechanism of action selection made up of a global repartition and an individual choice. These models are currently being integrated and validated in GVT, a virtual training tool for maintenance of military equipment, developed in collaboration with the French company NEXTER-Group.
2103.10213
Jianhua He
Zheng Huang, Kai Chen, Jianhua He, Xiang Bai, Dimosthenis Karatzas, Shjian Lu, and C.V. Jawahar
ICDAR2019 Competition on Scanned Receipt OCR and Information Extraction
null
null
10.1109/ICDAR.2019.00244
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
Scanned receipts OCR and key information extraction (SROIE) represent the processes of recognizing text from scanned receipts, extracting key texts from them, and saving the extracted texts to structured documents. SROIE plays critical roles in many document analysis applications and holds great commercial potential, but very few research works and advances have been published in this area. In recognition of the technical challenges, importance and huge commercial potential of SROIE, we organized the ICDAR 2019 competition on SROIE. In this competition, we set up three tasks, namely, Scanned Receipt Text Localisation (Task 1), Scanned Receipt OCR (Task 2) and Key Information Extraction from Scanned Receipts (Task 3). A new dataset with 1000 whole scanned receipt images and annotations was created for the competition. In this report we present the motivation, competition datasets, task definition, evaluation protocol, submission statistics, performance of submitted methods and results analysis.
[ { "created": "Thu, 18 Mar 2021 12:33:41 GMT", "version": "v1" } ]
2021-03-19
[ [ "Huang", "Zheng", "" ], [ "Chen", "Kai", "" ], [ "He", "Jianhua", "" ], [ "Bai", "Xiang", "" ], [ "Karatzas", "Dimosthenis", "" ], [ "Lu", "Shjian", "" ], [ "Jawahar", "C. V.", "" ] ]
Scanned receipts OCR and key information extraction (SROIE) represent the processes of recognizing text from scanned receipts, extracting key texts from them, and saving the extracted texts to structured documents. SROIE plays critical roles in many document analysis applications and holds great commercial potential, but very few research works and advances have been published in this area. In recognition of the technical challenges, importance and huge commercial potential of SROIE, we organized the ICDAR 2019 competition on SROIE. In this competition, we set up three tasks, namely, Scanned Receipt Text Localisation (Task 1), Scanned Receipt OCR (Task 2) and Key Information Extraction from Scanned Receipts (Task 3). A new dataset with 1000 whole scanned receipt images and annotations was created for the competition. In this report we present the motivation, competition datasets, task definition, evaluation protocol, submission statistics, performance of submitted methods and results analysis.
2111.04427
Santiago Andr\'es Azcoitia
Santiago Andr\'es Azcoitia, Costas Iordanou, Nikolaos Laoutaris
What Is the Price of Data? A Measurement Study of Commercial Data Marketplaces
13 pages, 13 figures, 7 tables
null
null
null
cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A large number of Data Marketplaces (DMs) have appeared in the last few years to help owners monetise their data, and data buyers fuel their marketing process, train their ML models, and perform other data-driven decision processes. In this paper, we present a first-of-its-kind measurement study of the growing DM ecosystem and shed light on several totally unknown facts about it. For example, we show that the median price of live data products sold under a subscription model is around US\$1,400 per month. For one-off purchases of static data, the median price is around US\$2,200. We analyse the prices of different categories of data and show that products about telecommunications, manufacturing, automotive, and gaming command the highest prices. We also develop classifiers for comparing prices across different DMs as well as a regression analysis for revealing features that correlate with data product prices.
[ { "created": "Mon, 25 Oct 2021 10:39:47 GMT", "version": "v1" } ]
2021-11-12
[ [ "Azcoitia", "Santiago Andrés", "" ], [ "Iordanou", "Costas", "" ], [ "Laoutaris", "Nikolaos", "" ] ]
A large number of Data Marketplaces (DMs) have appeared in the last few years to help owners monetise their data, and data buyers fuel their marketing process, train their ML models, and perform other data-driven decision processes. In this paper, we present a first-of-its-kind measurement study of the growing DM ecosystem and shed light on several totally unknown facts about it. For example, we show that the median price of live data products sold under a subscription model is around US\$1,400 per month. For one-off purchases of static data, the median price is around US\$2,200. We analyse the prices of different categories of data and show that products about telecommunications, manufacturing, automotive, and gaming command the highest prices. We also develop classifiers for comparing prices across different DMs as well as a regression analysis for revealing features that correlate with data product prices.
1110.1549
Sunil
Sunil Gavaskar Reddy and Rajendra prasad
Power comparison of CMOS and adiabatic full adder circuit
11pages
International Journal of VLSI design & Communication Systems (VLSICS) Vol.2, No.3, September 2011
10.5121/vlsic.2011.2306
null
cs.AR
http://creativecommons.org/licenses/by-nc-sa/3.0/
Full adders are important components in applications such as digital signal processor (DSP) architectures and microprocessors. Apart from basic addition, adders are also used to perform other useful operations such as subtraction, multiplication, division, address calculation, etc. In most of these systems the adder lies in the critical path that determines the overall performance of the system. In this paper, conventional complementary metal oxide semiconductor (CMOS) and adiabatic adder circuits are analyzed in terms of power and transistor count using 0.18 μm technology.
[ { "created": "Fri, 7 Oct 2011 14:40:51 GMT", "version": "v1" } ]
2011-10-10
[ [ "Reddy", "Sunil Gavaskar", "" ], [ "prasad", "Rajendra", "" ] ]
Full adders are important components in applications such as digital signal processor (DSP) architectures and microprocessors. Apart from basic addition, adders are also used to perform other useful operations such as subtraction, multiplication, division, address calculation, etc. In most of these systems the adder lies in the critical path that determines the overall performance of the system. In this paper, conventional complementary metal oxide semiconductor (CMOS) and adiabatic adder circuits are analyzed in terms of power and transistor count using 0.18 μm technology.
2003.12065
Wen Liu
Zhaoyang Sun, Wenxuan Liu, Feng Liu, Ryan Wen Liu, Shengwu Xiong
Local Facial Makeup Transfer via Disentangled Representation
There's something wrong with the experiment. It's not complete
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Facial makeup transfer aims to render a non-makeup face image in an arbitrary given makeup style while preserving face identity. The most advanced method separates makeup style information from face images to realize makeup transfer. However, makeup style includes several semantically clear local styles which are still entangled together. In this paper, we propose a novel unified adversarial disentangling network to further decompose face images into four independent components, i.e., personal identity, lips makeup style, eyes makeup style and face makeup style. Owing to the further disentangling of makeup style, our method can not only control the degree of global makeup style, but also flexibly regulate the degree of local makeup styles, which no other approach can do. For makeup removal, different from other methods which regard makeup removal as the reverse process of makeup, we integrate makeup transfer and makeup removal into one uniform framework and obtain multiple makeup removal results. Extensive experiments have demonstrated that our approach can produce more realistic and accurate makeup transfer results compared to the state-of-the-art methods.
[ { "created": "Fri, 27 Mar 2020 00:25:13 GMT", "version": "v1" }, { "created": "Sun, 21 Jun 2020 01:22:02 GMT", "version": "v2" } ]
2020-06-23
[ [ "Sun", "Zhaoyang", "" ], [ "Liu", "Wenxuan", "" ], [ "Liu", "Feng", "" ], [ "Liu", "Ryan Wen", "" ], [ "Xiong", "Shengwu", "" ] ]
Facial makeup transfer aims to render a non-makeup face image in an arbitrary given makeup style while preserving face identity. The most advanced method separates makeup style information from face images to realize makeup transfer. However, makeup style includes several semantically clear local styles which are still entangled together. In this paper, we propose a novel unified adversarial disentangling network to further decompose face images into four independent components, i.e., personal identity, lips makeup style, eyes makeup style and face makeup style. Owing to the further disentangling of makeup style, our method can not only control the degree of global makeup style, but also flexibly regulate the degree of local makeup styles, which no other approach can do. For makeup removal, different from other methods which regard makeup removal as the reverse process of makeup, we integrate makeup transfer and makeup removal into one uniform framework and obtain multiple makeup removal results. Extensive experiments have demonstrated that our approach can produce more realistic and accurate makeup transfer results compared to the state-of-the-art methods.
2407.21244
Hamidreza Kasaei
Hamidreza Kasaei, Mohammadreza Kasaei
VITAL: Visual Teleoperation to Enhance Robot Learning through Human-in-the-Loop Corrections
null
null
null
null
cs.RO cs.AI cs.CV
http://creativecommons.org/licenses/by/4.0/
Imitation Learning (IL) has emerged as a powerful approach in robotics, allowing robots to acquire new skills by mimicking human actions. Despite its potential, the data collection process for IL remains a significant challenge due to the logistical difficulties and high costs associated with obtaining high-quality demonstrations. To address these issues, we propose a low-cost visual teleoperation system for bimanual manipulation tasks, called VITAL. Our approach leverages affordable hardware and visual processing techniques to collect demonstrations, which are then augmented to create extensive training datasets for imitation learning. We enhance the generalizability and robustness of the learned policies by utilizing both real and simulated environments and human-in-the-loop corrections. We evaluated our method through several rounds of experiments in simulated and real-robot settings, focusing on tasks of varying complexity, including bottle collecting, stacking objects, and hammering. Our experimental results validate the effectiveness of our approach in learning robust robot policies from simulated data, significantly improved by human-in-the-loop corrections and real-world data integration. Additionally, we demonstrate the framework's capability to generalize to new tasks, such as setting a drink tray, showcasing its adaptability and potential for handling a wide range of real-world bimanual manipulation tasks. A video of the experiments can be found at: https://youtu.be/YeVAMRqRe64?si=R179xDlEGc7nPu8i
[ { "created": "Tue, 30 Jul 2024 23:29:47 GMT", "version": "v1" } ]
2024-08-01
[ [ "Kasaei", "Hamidreza", "" ], [ "Kasaei", "Mohammadreza", "" ] ]
Imitation Learning (IL) has emerged as a powerful approach in robotics, allowing robots to acquire new skills by mimicking human actions. Despite its potential, the data collection process for IL remains a significant challenge due to the logistical difficulties and high costs associated with obtaining high-quality demonstrations. To address these issues, we propose a low-cost visual teleoperation system for bimanual manipulation tasks, called VITAL. Our approach leverages affordable hardware and visual processing techniques to collect demonstrations, which are then augmented to create extensive training datasets for imitation learning. We enhance the generalizability and robustness of the learned policies by utilizing both real and simulated environments and human-in-the-loop corrections. We evaluated our method through several rounds of experiments in simulated and real-robot settings, focusing on tasks of varying complexity, including bottle collecting, stacking objects, and hammering. Our experimental results validate the effectiveness of our approach in learning robust robot policies from simulated data, significantly improved by human-in-the-loop corrections and real-world data integration. Additionally, we demonstrate the framework's capability to generalize to new tasks, such as setting a drink tray, showcasing its adaptability and potential for handling a wide range of real-world bimanual manipulation tasks. A video of the experiments can be found at: https://youtu.be/YeVAMRqRe64?si=R179xDlEGc7nPu8i
2401.08209
Leheng Zhang
Leheng Zhang, Yawei Li, Xingyu Zhou, Xiaorui Zhao, Shuhang Gu
Transcending the Limit of Local Window: Advanced Super-Resolution Transformer with Adaptive Token Dictionary
15 pages, 9 figures
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Single Image Super-Resolution is a classic computer vision problem that involves estimating high-resolution (HR) images from low-resolution (LR) ones. Although deep neural networks (DNNs), especially Transformers for super-resolution, have seen significant advancements in recent years, challenges still remain, particularly the limited receptive field caused by window-based self-attention. To address these issues, we introduce a group of auxiliary Adaptive Token Dictionaries into the SR Transformer and establish an ATD-SR method. The introduced token dictionary can learn prior information from training data and adapt the learned prior to a specific test image through an adaptive refinement step. The refinement strategy not only provides global information to all input tokens but also groups image tokens into categories. Based on category partitions, we further propose a category-based self-attention mechanism designed to leverage distant but similar tokens for enhancing input features. The experimental results show that our method achieves the best performance on various single image super-resolution benchmarks.
[ { "created": "Tue, 16 Jan 2024 08:50:44 GMT", "version": "v1" }, { "created": "Thu, 18 Jan 2024 07:59:03 GMT", "version": "v2" } ]
2024-01-19
[ [ "Zhang", "Leheng", "" ], [ "Li", "Yawei", "" ], [ "Zhou", "Xingyu", "" ], [ "Zhao", "Xiaorui", "" ], [ "Gu", "Shuhang", "" ] ]
Single Image Super-Resolution is a classic computer vision problem that involves estimating high-resolution (HR) images from low-resolution (LR) ones. Although deep neural networks (DNNs), especially Transformers for super-resolution, have seen significant advancements in recent years, challenges still remain, particularly the limited receptive field caused by window-based self-attention. To address these issues, we introduce a group of auxiliary Adaptive Token Dictionaries into the SR Transformer and establish an ATD-SR method. The introduced token dictionary can learn prior information from training data and adapt the learned prior to a specific test image through an adaptive refinement step. The refinement strategy not only provides global information to all input tokens but also groups image tokens into categories. Based on category partitions, we further propose a category-based self-attention mechanism designed to leverage distant but similar tokens for enhancing input features. The experimental results show that our method achieves the best performance on various single image super-resolution benchmarks.
1206.1282
Vinod M. Prabhakaran
Vinod M. Prabhakaran and Manoj M. Prabhakaran
Assisted Common Information with an Application to Secure Two-Party Sampling
26 pages, 8 figures, to appear in IEEE Transactions on Information Theory
null
10.1109/TIT.2014.2316011
null
cs.IT cs.CR math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we generalize the notion of common information of two dependent variables introduced by G\'acs & K\"orner. They defined common information as the largest entropy rate of a common random variable that two parties, each observing one of the sources, can agree upon. It is well-known that their common information captures only a limited form of dependence between the random variables and is zero in most cases of interest. Our generalization, which we call the Assisted Common Information system, takes into account almost-common information ignored by G\'acs-K\"orner common information. In the assisted common information system, a genie assists the parties in agreeing on a more substantial common random variable; we characterize the trade-off between the amount of communication from the genie and the quality of the common random variable produced using a rate region we call the region of tension. We show that this region has an application in deriving upper bounds on the efficiency of secure two-party sampling, which is a special case of secure multi-party computation, a central problem in modern cryptography. Two parties desire to produce samples of a pair of jointly distributed random variables such that neither party learns more about the other's output than what its own output reveals. They have access to a setup - correlated random variables whose distribution is different from the desired distribution - and noiseless communication. We present an upper bound on the rate at which a given setup can be used to produce samples from a desired distribution by showing a monotonicity property for the region of tension: a protocol between two parties can only lower the tension between their views. Then, by calculating the bounds on the region of tension of various pairs of correlated random variables, we derive bounds on the rate of secure two-party sampling.
[ { "created": "Wed, 6 Jun 2012 17:25:57 GMT", "version": "v1" }, { "created": "Wed, 6 Nov 2013 06:36:58 GMT", "version": "v2" }, { "created": "Tue, 25 Mar 2014 06:43:19 GMT", "version": "v3" } ]
2016-11-17
[ [ "Prabhakaran", "Vinod M.", "" ], [ "Prabhakaran", "Manoj M.", "" ] ]
In this paper we generalize the notion of common information of two dependent variables introduced by G\'acs & K\"orner. They defined common information as the largest entropy rate of a common random variable that two parties, each observing one of the sources, can agree upon. It is well-known that their common information captures only a limited form of dependence between the random variables and is zero in most cases of interest. Our generalization, which we call the Assisted Common Information system, takes into account almost-common information ignored by G\'acs-K\"orner common information. In the assisted common information system, a genie assists the parties in agreeing on a more substantial common random variable; we characterize the trade-off between the amount of communication from the genie and the quality of the common random variable produced using a rate region we call the region of tension. We show that this region has an application in deriving upper bounds on the efficiency of secure two-party sampling, which is a special case of secure multi-party computation, a central problem in modern cryptography. Two parties desire to produce samples of a pair of jointly distributed random variables such that neither party learns more about the other's output than what its own output reveals. They have access to a setup - correlated random variables whose distribution is different from the desired distribution - and noiseless communication. We present an upper bound on the rate at which a given setup can be used to produce samples from a desired distribution by showing a monotonicity property for the region of tension: a protocol between two parties can only lower the tension between their views. Then, by calculating the bounds on the region of tension of various pairs of correlated random variables, we derive bounds on the rate of secure two-party sampling.
2204.09837
Vaidehi Srinivas
Vaidehi Srinivas, David P. Woodruff, Ziyu Xu, Samson Zhou
Memory Bounds for the Experts Problem
32 pages, 1 figure, to appear in the 54th ACM Symposium on Theory of Computing (STOC 2022)
null
null
null
cs.DS cs.LG
http://creativecommons.org/licenses/by/4.0/
Online learning with expert advice is a fundamental problem of sequential prediction. In this problem, the algorithm has access to a set of $n$ "experts" who make predictions on each day. The goal on each day is to process these predictions, and make a prediction with the minimum cost. After making a prediction, the algorithm sees the actual outcome on that day, updates its state, and then moves on to the next day. An algorithm is judged by how well it does compared to the best expert in the set. The classical algorithm for this problem is the multiplicative weights algorithm. However, every application, to our knowledge, relies on storing weights for every expert, and uses $\Omega(n)$ memory. There is little work on understanding the memory required to solve the online learning with expert advice problem, or run standard sequential prediction algorithms, in natural streaming models, which is especially important when the number of experts, as well as the number of days on which the experts make predictions, is large. We initiate the study of the learning with expert advice problem in the streaming setting, and show lower and upper bounds. Our lower bound for i.i.d., random order, and adversarial order streams uses a reduction to a custom-built problem using a novel masking technique, to show a smooth trade-off for regret versus memory. Our upper bounds show novel ways to run standard sequential prediction algorithms in rounds on small "pools" of experts, thus reducing the necessary memory. For random-order streams, we show that our upper bound is tight up to low order terms. We hope that these results and techniques will have broad applications in online learning, and can inspire algorithms based on standard sequential prediction techniques, like multiplicative weights, for a wide range of other problems in the memory-constrained setting.
[ { "created": "Thu, 21 Apr 2022 01:22:18 GMT", "version": "v1" } ]
2022-04-22
[ [ "Srinivas", "Vaidehi", "" ], [ "Woodruff", "David P.", "" ], [ "Xu", "Ziyu", "" ], [ "Zhou", "Samson", "" ] ]
Online learning with expert advice is a fundamental problem of sequential prediction. In this problem, the algorithm has access to a set of $n$ "experts" who make predictions on each day. The goal on each day is to process these predictions, and make a prediction with the minimum cost. After making a prediction, the algorithm sees the actual outcome on that day, updates its state, and then moves on to the next day. An algorithm is judged by how well it does compared to the best expert in the set. The classical algorithm for this problem is the multiplicative weights algorithm. However, every application, to our knowledge, relies on storing weights for every expert, and uses $\Omega(n)$ memory. There is little work on understanding the memory required to solve the online learning with expert advice problem, or run standard sequential prediction algorithms, in natural streaming models, which is especially important when the number of experts, as well as the number of days on which the experts make predictions, is large. We initiate the study of the learning with expert advice problem in the streaming setting, and show lower and upper bounds. Our lower bound for i.i.d., random order, and adversarial order streams uses a reduction to a custom-built problem using a novel masking technique, to show a smooth trade-off for regret versus memory. Our upper bounds show novel ways to run standard sequential prediction algorithms in rounds on small "pools" of experts, thus reducing the necessary memory. For random-order streams, we show that our upper bound is tight up to low order terms. We hope that these results and techniques will have broad applications in online learning, and can inspire algorithms based on standard sequential prediction techniques, like multiplicative weights, for a wide range of other problems in the memory-constrained setting.
2311.00931
Kamel Alrashedy
Kamel Alrashedy, Vincent J. Hellendoorn, Alessandro Orso
Learning Defect Prediction from Unrealistic Data
null
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Pretrained models of code, such as CodeBERT and CodeT5, have become popular choices for code understanding and generation tasks. Such models tend to be large and require commensurate volumes of training data, which are rarely available for downstream tasks. Instead, it has become popular to train models with far larger but less realistic datasets, such as functions with artificially injected bugs. Models trained on such data, however, tend to only perform well on similar data, while underperforming on real-world programs. In this paper, we conjecture that this discrepancy stems from the presence of distracting samples that steer the model away from the real-world task distribution. To investigate this conjecture, we propose an approach for identifying the subsets of these large yet unrealistic datasets that are most similar to examples in real-world datasets based on their learned representations. Our approach extracts high-dimensional embeddings of both real-world and artificial programs using a neural model and scores artificial samples based on their distance to the nearest real-world sample. We show that training on only the nearest, representationally most similar samples while discarding samples that are not at all similar in representations yields consistent improvements across two popular pretrained models of code on two code understanding tasks. Our results are promising, in that they show that training models on a representative subset of an unrealistic dataset can help us harness the power of large-scale synthetic data generation while preserving downstream task performance. Finally, we highlight the limitations of applying AI models for predicting vulnerabilities and bugs in real-world applications.
[ { "created": "Thu, 2 Nov 2023 01:51:43 GMT", "version": "v1" }, { "created": "Sat, 20 Jan 2024 17:05:40 GMT", "version": "v2" } ]
2024-01-23
[ [ "Alrashedy", "Kamel", "" ], [ "Hellendoorn", "Vincent J.", "" ], [ "Orso", "Alessandro", "" ] ]
Pretrained models of code, such as CodeBERT and CodeT5, have become popular choices for code understanding and generation tasks. Such models tend to be large and require commensurate volumes of training data, which are rarely available for downstream tasks. Instead, it has become popular to train models with far larger but less realistic datasets, such as functions with artificially injected bugs. Models trained on such data, however, tend to only perform well on similar data, while underperforming on real-world programs. In this paper, we conjecture that this discrepancy stems from the presence of distracting samples that steer the model away from the real-world task distribution. To investigate this conjecture, we propose an approach for identifying the subsets of these large yet unrealistic datasets that are most similar to examples in real-world datasets based on their learned representations. Our approach extracts high-dimensional embeddings of both real-world and artificial programs using a neural model and scores artificial samples based on their distance to the nearest real-world sample. We show that training on only the nearest, representationally most similar samples while discarding samples that are not at all similar in representations yields consistent improvements across two popular pretrained models of code on two code understanding tasks. Our results are promising, in that they show that training models on a representative subset of an unrealistic dataset can help us harness the power of large-scale synthetic data generation while preserving downstream task performance. Finally, we highlight the limitations of applying AI models for predicting vulnerabilities and bugs in real-world applications.
2310.09466
Wenyu Jiang
Kaizhi Huang, Wenyu Jiang, Yajun Chen, Liang Jin, Qingqing Wu, Xiaoling Hu
Robust Anti-jamming Communications with DMA-Based Reconfigurable Heterogeneous Array
null
null
null
null
cs.IT eess.SP math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In future commercial and military communication systems, anti-jamming remains a critical issue. Existing homogeneous or heterogeneous arrays with limited degrees of freedom (DoF) and high consumption are unable to meet the requirements of communication in rapidly changing and intense jamming environments. To address these challenges, we propose a reconfigurable heterogeneous array (RHA) architecture based on a dynamic metasurface antenna (DMA), which increases the DoF and further improves anti-jamming capabilities. We propose a two-step anti-jamming scheme based on the RHA, where the multipaths are estimated by an atomic norm minimization (ANM) based scheme, and the received signal-to-interference-plus-noise ratio (SINR) is then maximized by jointly designing the phase shift of each DMA element and the weights of the array elements. To solve the challenging non-convex discrete fractional problem together with the estimation errors in the direction of arrival (DoA) and channel state information (CSI), we propose a robust alternative algorithm based on the S-procedure to solve the lower-bound SINR maximization problem. Simulation results demonstrate that the proposed RHA architecture and the corresponding schemes achieve superior performance in terms of jamming immunity and robustness.
[ { "created": "Sat, 14 Oct 2023 01:52:25 GMT", "version": "v1" } ]
2023-10-17
[ [ "Huang", "Kaizhi", "" ], [ "Jiang", "Wenyu", "" ], [ "Chen", "Yajun", "" ], [ "Jin", "Liang", "" ], [ "Wu", "Qingqing", "" ], [ "Hu", "Xiaoling", "" ] ]
In future commercial and military communication systems, anti-jamming remains a critical issue. Existing homogeneous or heterogeneous arrays with limited degrees of freedom (DoF) and high consumption are unable to meet the requirements of communication in rapidly changing and intense jamming environments. To address these challenges, we propose a reconfigurable heterogeneous array (RHA) architecture based on a dynamic metasurface antenna (DMA), which increases the DoF and further improves anti-jamming capabilities. We propose a two-step anti-jamming scheme based on the RHA, where the multipaths are estimated by an atomic norm minimization (ANM) based scheme, and the received signal-to-interference-plus-noise ratio (SINR) is then maximized by jointly designing the phase shift of each DMA element and the weights of the array elements. To solve the challenging non-convex discrete fractional problem together with the estimation errors in the direction of arrival (DoA) and channel state information (CSI), we propose a robust alternative algorithm based on the S-procedure to solve the lower-bound SINR maximization problem. Simulation results demonstrate that the proposed RHA architecture and the corresponding schemes achieve superior performance in terms of jamming immunity and robustness.
2012.03438
Bingyu Liu
Bingyu Liu, Yuhong Guo, Jieping Ye, Weihong Deng
Selective Pseudo-Labeling with Reinforcement Learning for Semi-Supervised Domain Adaptation
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent domain adaptation methods have demonstrated impressive improvement on unsupervised domain adaptation problems. However, in the semi-supervised domain adaptation (SSDA) setting where the target domain has a few labeled instances available, these methods can fail to improve performance. Inspired by the effectiveness of pseudo-labels in domain adaptation, we propose a reinforcement learning based selective pseudo-labeling method for semi-supervised domain adaptation. It is difficult for conventional pseudo-labeling methods to balance the correctness and representativeness of pseudo-labeled data. To address this limitation, we develop a deep Q-learning model to select both accurate and representative pseudo-labeled instances. Moreover, motivated by the large margin loss's capacity for learning discriminative features from little data, we further propose a novel target margin loss for our base model training to improve its discriminability. Our proposed method is evaluated on several benchmark datasets for SSDA, and demonstrates superior performance to all the comparison methods.
[ { "created": "Mon, 7 Dec 2020 03:37:38 GMT", "version": "v1" } ]
2020-12-08
[ [ "Liu", "Bingyu", "" ], [ "Guo", "Yuhong", "" ], [ "Ye", "Jieping", "" ], [ "Deng", "Weihong", "" ] ]
Recent domain adaptation methods have demonstrated impressive improvement on unsupervised domain adaptation problems. However, in the semi-supervised domain adaptation (SSDA) setting where the target domain has a few labeled instances available, these methods can fail to improve performance. Inspired by the effectiveness of pseudo-labels in domain adaptation, we propose a reinforcement learning based selective pseudo-labeling method for semi-supervised domain adaptation. It is difficult for conventional pseudo-labeling methods to balance the correctness and representativeness of pseudo-labeled data. To address this limitation, we develop a deep Q-learning model to select both accurate and representative pseudo-labeled instances. Moreover, motivated by the large margin loss's capacity for learning discriminative features from little data, we further propose a novel target margin loss for our base model training to improve its discriminability. Our proposed method is evaluated on several benchmark datasets for SSDA, and demonstrates superior performance to all the comparison methods.
1807.09915
Qi Zheng
Chaojian Yu, Xinyi Zhao, Qi Zheng, Peng Zhang, Xinge You
Hierarchical Bilinear Pooling for Fine-Grained Visual Recognition
16 pages, 3 figures
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
Fine-grained visual recognition is challenging because it highly relies on the modeling of various semantic parts and fine-grained feature learning. Bilinear pooling based models have been shown to be effective at fine-grained recognition, while most previous approaches neglect the fact that inter-layer part feature interaction and fine-grained feature learning are mutually correlated and can reinforce each other. In this paper, we present a novel model to address these issues. First, a cross-layer bilinear pooling approach is proposed to capture the inter-layer part feature relations, which results in superior performance compared with other bilinear pooling based approaches. Second, we propose a novel hierarchical bilinear pooling framework to integrate multiple cross-layer bilinear features to enhance their representation capability. Our formulation is intuitive, efficient and achieves state-of-the-art results on the widely used fine-grained recognition datasets.
[ { "created": "Thu, 26 Jul 2018 01:46:15 GMT", "version": "v1" } ]
2018-07-27
[ [ "Yu", "Chaojian", "" ], [ "Zhao", "Xinyi", "" ], [ "Zheng", "Qi", "" ], [ "Zhang", "Peng", "" ], [ "You", "Xinge", "" ] ]
Fine-grained visual recognition is challenging because it highly relies on the modeling of various semantic parts and fine-grained feature learning. Bilinear pooling based models have been shown to be effective at fine-grained recognition, while most previous approaches neglect the fact that inter-layer part feature interaction and fine-grained feature learning are mutually correlated and can reinforce each other. In this paper, we present a novel model to address these issues. First, a cross-layer bilinear pooling approach is proposed to capture the inter-layer part feature relations, which results in superior performance compared with other bilinear pooling based approaches. Second, we propose a novel hierarchical bilinear pooling framework to integrate multiple cross-layer bilinear features to enhance their representation capability. Our formulation is intuitive, efficient and achieves state-of-the-art results on the widely used fine-grained recognition datasets.
2201.07321
Navpreet Kaur
Navpreet Kaur, Jason Hughes, and Juntao Chen
VaxEquity: A Data-Driven Risk Assessment and Optimization Framework for Equitable Vaccine Distribution
null
null
null
null
cs.CE stat.AP
http://creativecommons.org/licenses/by/4.0/
With the continuous rise of COVID-19 cases worldwide, it is imperative to ensure that all vulnerable countries lacking vaccine resources can receive sufficient support to contain the risks. COVAX is such an initiative, operated by the WHO to supply vaccines to the countries that need them most. One critical problem faced by COVAX is how to distribute its limited supply of vaccines to these countries in the most efficient and equitable manner. This paper aims to address this challenge by first proposing a data-driven risk assessment and prediction model and then developing a decision-making framework to support strategic vaccine distribution. The machine learning-based risk prediction model characterizes how the risk is influenced by the underlying essential factors, e.g., the vaccination level among the population in each COVAX country. This predictive model is then leveraged to design the optimal vaccine distribution strategy that simultaneously minimizes the resulting risks while maximizing the vaccination coverage in the countries targeted by COVAX. Finally, we corroborate the proposed framework using case studies with real-world data.
[ { "created": "Tue, 18 Jan 2022 21:38:12 GMT", "version": "v1" } ]
2022-01-20
[ [ "Kaur", "Navpreet", "" ], [ "Hughes", "Jason", "" ], [ "Chen", "Juntao", "" ] ]
With the continuous rise of COVID-19 cases worldwide, it is imperative to ensure that all vulnerable countries lacking vaccine resources can receive sufficient support to contain the risks. COVAX is such an initiative, operated by the WHO to supply vaccines to the countries that need them most. One critical problem faced by COVAX is how to distribute its limited supply of vaccines to these countries in the most efficient and equitable manner. This paper aims to address this challenge by first proposing a data-driven risk assessment and prediction model and then developing a decision-making framework to support strategic vaccine distribution. The machine learning-based risk prediction model characterizes how the risk is influenced by the underlying essential factors, e.g., the vaccination level among the population in each COVAX country. This predictive model is then leveraged to design the optimal vaccine distribution strategy that simultaneously minimizes the resulting risks while maximizing the vaccination coverage in the countries targeted by COVAX. Finally, we corroborate the proposed framework using case studies with real-world data.