Column schema (name: type, length range):
id: string (length 9-10)
submitter: string (length 1-64)
authors: string (length 4-20.7k)
title: string (length 4-246)
comments: string (length 1-523)
journal-ref: string (length 4-404)
doi: string (length 11-153)
report-no: string (length 2-254)
categories: string (length 5-98)
license: string (9 classes)
orig_abstract: string (length 14-3.35k)
versions: list (length 1-60)
update_date: string (length 10-10)
authors_parsed: list (length 1-1.35k)
abstract: string (length 11-3.34k)
1407.7170
Amir Leshem
Amir Leshem, Maziyar Hamdi, Vikram Krishnamurthy
Boundary value problems in consensus networks
Submitted for publication, Feb. 2014
null
null
null
cs.SI cs.IT math.IT physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper studies the effect of boundary value conditions on consensus networks. Consider a network where some nodes keep their estimates constant while other nodes average their estimates with those of their neighbors. We analyze such networks and show that, in contrast to standard consensus networks, the network estimate converges to a general harmonic function on the graph. Furthermore, the final value depends only on the values at the boundary nodes. This has important implications for consensus networks -- for example, we show that consensus networks are extremely sensitive to the existence of a single malicious node or to consistent errors in a single node. We also discuss applications of this result in social and sensor networks. We investigate the existence of boundary nodes in human social networks via an experimental study involving human subjects. Finally, the paper concludes with numerical studies of boundary value problems in consensus networks.
[ { "created": "Sun, 27 Jul 2014 00:27:42 GMT", "version": "v1" } ]
2014-07-29
[ [ "Leshem", "Amir", "" ], [ "Hamdi", "Maziyar", "" ], [ "Krishnamurthy", "Vikram", "" ] ]
This paper studies the effect of boundary value conditions on consensus networks. Consider a network where some nodes keep their estimates constant while other nodes average their estimates with those of their neighbors. We analyze such networks and show that, in contrast to standard consensus networks, the network estimate converges to a general harmonic function on the graph. Furthermore, the final value depends only on the values at the boundary nodes. This has important implications for consensus networks -- for example, we show that consensus networks are extremely sensitive to the existence of a single malicious node or to consistent errors in a single node. We also discuss applications of this result in social and sensor networks. We investigate the existence of boundary nodes in human social networks via an experimental study involving human subjects. Finally, the paper concludes with numerical studies of boundary value problems in consensus networks.
2210.00701
Dan Qiao
Dan Qiao, Yu-Xiang Wang
Near-Optimal Deployment Efficiency in Reward-Free Reinforcement Learning with Linear Function Approximation
48 pages
null
null
null
cs.LG cs.AI stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the problem of deployment-efficient reinforcement learning (RL) with linear function approximation under the \emph{reward-free} exploration setting. This is a well-motivated problem because deploying new policies is costly in real-life RL applications. Under the linear MDP setting with feature dimension $d$ and planning horizon $H$, we propose a new algorithm that collects at most $\widetilde{O}(\frac{d^2H^5}{\epsilon^2})$ trajectories within $H$ deployments to identify an $\epsilon$-optimal policy for any (possibly data-dependent) choice of reward functions. To the best of our knowledge, our approach is the first to achieve optimal deployment complexity and optimal $d$ dependence in sample complexity at the same time, even when the reward is known ahead of time. Our novel techniques include an exploration-preserving policy discretization and a generalized G-optimal experiment design, which could be of independent interest. Lastly, we analyze the related problem of regret minimization in low-adaptive RL and provide information-theoretic lower bounds for switching cost and batch complexity.
[ { "created": "Mon, 3 Oct 2022 03:48:26 GMT", "version": "v1" }, { "created": "Wed, 22 Feb 2023 00:33:11 GMT", "version": "v2" } ]
2023-02-23
[ [ "Qiao", "Dan", "" ], [ "Wang", "Yu-Xiang", "" ] ]
We study the problem of deployment-efficient reinforcement learning (RL) with linear function approximation under the \emph{reward-free} exploration setting. This is a well-motivated problem because deploying new policies is costly in real-life RL applications. Under the linear MDP setting with feature dimension $d$ and planning horizon $H$, we propose a new algorithm that collects at most $\widetilde{O}(\frac{d^2H^5}{\epsilon^2})$ trajectories within $H$ deployments to identify an $\epsilon$-optimal policy for any (possibly data-dependent) choice of reward functions. To the best of our knowledge, our approach is the first to achieve optimal deployment complexity and optimal $d$ dependence in sample complexity at the same time, even when the reward is known ahead of time. Our novel techniques include an exploration-preserving policy discretization and a generalized G-optimal experiment design, which could be of independent interest. Lastly, we analyze the related problem of regret minimization in low-adaptive RL and provide information-theoretic lower bounds for switching cost and batch complexity.
2406.00002
Manos Kamarianakis
Achilleas Filippidis, Nikolaos Marmaras, Michael Maravgakis, Alexandra Plexousaki, Manos Kamarianakis, George Papagiannakis
VR Isle Academy: A VR Digital Twin Approach for Robotic Surgical Skill Development
10 pages, 14 figures, Acknowledgement Section updated
null
null
null
cs.RO cs.GR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Contemporary progress in the field of robotics, marked by improved efficiency and stability, has paved the way for the global adoption of surgical robotic systems (SRS). While these systems enhance surgeons' skills by offering a more accurate and less invasive approach to operations, they come at a considerable cost. Moreover, SRS components often involve heavy machinery, making the training process challenging due to limited access to such equipment. In this paper, we introduce a cost-effective way to facilitate training for an SRS simulator via a portable, device-agnostic, ultra-realistic simulation with hand- and foot-tracking support. Error assessment is available both in real time and offline, which enables the monitoring and tracking of users' performance. The VR application has been objectively evaluated by several untrained testers, showing a significant reduction in error metrics as the number of training sessions increases. This indicates that the proposed VR application, denoted VR Isle Academy, operates efficiently, improving the robot-controlling skills of the testers in an intuitive and immersive way and reducing the learning curve at minimal cost.
[ { "created": "Sat, 4 May 2024 14:47:42 GMT", "version": "v1" }, { "created": "Mon, 1 Jul 2024 14:41:04 GMT", "version": "v2" } ]
2024-07-02
[ [ "Filippidis", "Achilleas", "" ], [ "Marmaras", "Nikolaos", "" ], [ "Maravgakis", "Michael", "" ], [ "Plexousaki", "Alexandra", "" ], [ "Kamarianakis", "Manos", "" ], [ "Papagiannakis", "George", "" ] ]
Contemporary progress in the field of robotics, marked by improved efficiency and stability, has paved the way for the global adoption of surgical robotic systems (SRS). While these systems enhance surgeons' skills by offering a more accurate and less invasive approach to operations, they come at a considerable cost. Moreover, SRS components often involve heavy machinery, making the training process challenging due to limited access to such equipment. In this paper, we introduce a cost-effective way to facilitate training for an SRS simulator via a portable, device-agnostic, ultra-realistic simulation with hand- and foot-tracking support. Error assessment is available both in real time and offline, which enables the monitoring and tracking of users' performance. The VR application has been objectively evaluated by several untrained testers, showing a significant reduction in error metrics as the number of training sessions increases. This indicates that the proposed VR application, denoted VR Isle Academy, operates efficiently, improving the robot-controlling skills of the testers in an intuitive and immersive way and reducing the learning curve at minimal cost.
2011.07069
Vigen Arakelyan
Jing Geng, Vigen Arakelian (LS2N, RoMas, INSA Rennes), Damien Chablat (LS2N, ReV, CNRS)
Shaking Force Balancing of the Orthoglide
null
In: Zeghloul S., Laribi M., Sandoval Arevalo J. (eds) Advances in Service and Industrial Robotics. RAAD 2020. Mechanisms and Machine Science, vol 84. Springer, pp.227-234, 2020, 2211-0984
10.1007/978-3-030-48989-2_25
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Shaking force balancing is a well-known problem in the design of high-speed robotic systems because the variable dynamic loads cause noise, wear, and fatigue of mechanical structures. Different solutions for full or partial shaking force balancing, via internal mass redistribution or by adding auxiliary links, have been developed. This paper deals with the shaking force balancing of the Orthoglide. The suggested solution, based on optimal acceleration control of the manipulator's common center of mass, allows a significant reduction of the shaking force. Compared with balancing methods that add counterweights or auxiliary substructures, the proposed method avoids several drawbacks: the increase in total mass, overall size, and complexity of the mechanism, which become especially challenging for parallel manipulators. Using the proposed motion control method, the maximal value of the total mass center acceleration is reduced and, as a consequence, the shaking force of the manipulator decreases. The efficiency of the suggested method is demonstrated via numerical simulations carried out with ADAMS.
[ { "created": "Fri, 13 Nov 2020 14:57:59 GMT", "version": "v1" } ]
2020-11-17
[ [ "Geng", "Jing", "", "LS2N, RoMas, INSA Rennes" ], [ "Arakelian", "Vigen", "", "LS2N, RoMas, INSA Rennes" ], [ "Chablat", "Damien", "", "LS2N, ReV, CNRS" ] ]
Shaking force balancing is a well-known problem in the design of high-speed robotic systems because the variable dynamic loads cause noise, wear, and fatigue of mechanical structures. Different solutions for full or partial shaking force balancing, via internal mass redistribution or by adding auxiliary links, have been developed. This paper deals with the shaking force balancing of the Orthoglide. The suggested solution, based on optimal acceleration control of the manipulator's common center of mass, allows a significant reduction of the shaking force. Compared with balancing methods that add counterweights or auxiliary substructures, the proposed method avoids several drawbacks: the increase in total mass, overall size, and complexity of the mechanism, which become especially challenging for parallel manipulators. Using the proposed motion control method, the maximal value of the total mass center acceleration is reduced and, as a consequence, the shaking force of the manipulator decreases. The efficiency of the suggested method is demonstrated via numerical simulations carried out with ADAMS.
2208.03526
Zhikang Wang
Zhikang Wang, Yue Bi, Tong Pan, Xiaoyu Wang, Chris Bain, Richard Bassed, Seiya Imoto, Jianhua Yao, Jiangning Song
Multiplex-detection Based Multiple Instance Learning Network for Whole Slide Image Classification
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Multiple instance learning (MIL) is a powerful approach to classify whole slide images (WSIs) for diagnostic pathology. A fundamental challenge of MIL on WSI classification is to discover the \textit{critical instances} that trigger the bag label. However, previous methods are primarily designed under the independent and identically distributed (\textit{i.i.d.}) hypothesis, ignoring either the correlations between instances or the heterogeneity of tumours. In this paper, we propose a novel multiplex-detection-based multiple instance learning (MDMIL) method to tackle these issues. Specifically, MDMIL is constructed from an internal query generation module (IQGM) and a multiplex detection module (MDM), and is assisted by a memory-based contrastive loss during training. First, the IQGM computes the probability of each instance and generates the internal query (IQ) for the subsequent MDM by aggregating highly reliable features after the distribution analysis. Second, the multiplex-detection cross-attention (MDCA) and multi-head self-attention (MHSA) in the MDM cooperate to generate the final representations for the WSI. In this process, the IQ and a trainable variational query (VQ) successfully build up the connections between instances and significantly improve the model's robustness toward heterogeneous tumours. Finally, to further enforce constraints in the feature space and stabilize the training process, we adopt a memory-based contrastive loss, which is practicable for WSI classification even with a single sample as input in each iteration. We conduct experiments on three computational pathology datasets, namely CAMELYON16, TCGA-NSCLC, and TCGA-RCC. The superior accuracy and AUC demonstrate the advantage of our proposed MDMIL over other state-of-the-art methods.
[ { "created": "Sat, 6 Aug 2022 14:36:48 GMT", "version": "v1" }, { "created": "Wed, 31 Aug 2022 12:19:25 GMT", "version": "v2" }, { "created": "Thu, 1 Sep 2022 03:55:04 GMT", "version": "v3" } ]
2022-09-02
[ [ "Wang", "Zhikang", "" ], [ "Bi", "Yue", "" ], [ "Pan", "Tong", "" ], [ "Wang", "Xiaoyu", "" ], [ "Bain", "Chris", "" ], [ "Bassed", "Richard", "" ], [ "Imoto", "Seiya", "" ], [ "Yao", "Jianhua", "" ], [ "Song", "Jiangning", "" ] ]
Multiple instance learning (MIL) is a powerful approach to classify whole slide images (WSIs) for diagnostic pathology. A fundamental challenge of MIL on WSI classification is to discover the \textit{critical instances} that trigger the bag label. However, previous methods are primarily designed under the independent and identically distributed (\textit{i.i.d.}) hypothesis, ignoring either the correlations between instances or the heterogeneity of tumours. In this paper, we propose a novel multiplex-detection-based multiple instance learning (MDMIL) method to tackle these issues. Specifically, MDMIL is constructed from an internal query generation module (IQGM) and a multiplex detection module (MDM), and is assisted by a memory-based contrastive loss during training. First, the IQGM computes the probability of each instance and generates the internal query (IQ) for the subsequent MDM by aggregating highly reliable features after the distribution analysis. Second, the multiplex-detection cross-attention (MDCA) and multi-head self-attention (MHSA) in the MDM cooperate to generate the final representations for the WSI. In this process, the IQ and a trainable variational query (VQ) successfully build up the connections between instances and significantly improve the model's robustness toward heterogeneous tumours. Finally, to further enforce constraints in the feature space and stabilize the training process, we adopt a memory-based contrastive loss, which is practicable for WSI classification even with a single sample as input in each iteration. We conduct experiments on three computational pathology datasets, namely CAMELYON16, TCGA-NSCLC, and TCGA-RCC. The superior accuracy and AUC demonstrate the advantage of our proposed MDMIL over other state-of-the-art methods.
2304.00367
Peter Du
Peter Du, Surya Murthy, Katherine Driggs-Campbell
Conveying Autonomous Robot Capabilities through Contrasting Behaviour Summaries
null
null
null
null
cs.RO cs.AI cs.MA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
As advances in artificial intelligence enable increasingly capable learning-based autonomous agents, it becomes more challenging for human observers to efficiently construct a mental model of the agent's behaviour. In order to successfully deploy autonomous agents, humans should not only be able to understand the individual limitations of the agents but also have insight into how they compare against one another. To do so, we need effective methods for generating human-interpretable agent behaviour summaries. Single-agent behaviour summarization has been tackled in the past through methods that generate explanations for why an agent chose to pick a particular action at a single timestep. However, for complex tasks, a per-action explanation may not be able to convey an agent's global strategy. As a result, researchers have looked towards multi-timestep summaries, which can better help humans assess an agent's overall capability. More recently, multi-step summaries have also been used for generating contrasting examples to evaluate multiple agents. However, past approaches have largely relied on unstructured search methods to generate summaries and require agents to have a discrete action space. In this paper, we present an adaptive search method for efficiently generating contrasting behaviour summaries with support for continuous state and action spaces. We perform a user study to evaluate the effectiveness of the summaries for helping humans discern the superior autonomous agent for a given task. Our results indicate that adaptive search can efficiently identify informative contrasting scenarios that enable humans to accurately select the better-performing agent with a limited observation time budget.
[ { "created": "Sat, 1 Apr 2023 18:20:59 GMT", "version": "v1" } ]
2023-04-04
[ [ "Du", "Peter", "" ], [ "Murthy", "Surya", "" ], [ "Driggs-Campbell", "Katherine", "" ] ]
As advances in artificial intelligence enable increasingly capable learning-based autonomous agents, it becomes more challenging for human observers to efficiently construct a mental model of the agent's behaviour. In order to successfully deploy autonomous agents, humans should not only be able to understand the individual limitations of the agents but also have insight into how they compare against one another. To do so, we need effective methods for generating human-interpretable agent behaviour summaries. Single-agent behaviour summarization has been tackled in the past through methods that generate explanations for why an agent chose to pick a particular action at a single timestep. However, for complex tasks, a per-action explanation may not be able to convey an agent's global strategy. As a result, researchers have looked towards multi-timestep summaries, which can better help humans assess an agent's overall capability. More recently, multi-step summaries have also been used for generating contrasting examples to evaluate multiple agents. However, past approaches have largely relied on unstructured search methods to generate summaries and require agents to have a discrete action space. In this paper, we present an adaptive search method for efficiently generating contrasting behaviour summaries with support for continuous state and action spaces. We perform a user study to evaluate the effectiveness of the summaries for helping humans discern the superior autonomous agent for a given task. Our results indicate that adaptive search can efficiently identify informative contrasting scenarios that enable humans to accurately select the better-performing agent with a limited observation time budget.
1908.09701
Huan Zhao
Huan Zhao, Yingqi Zhou, Yangqiu Song, Dik Lun Lee
Motif Enhanced Recommendation over Heterogeneous Information Network
CIKM 2019 camera-ready version
null
null
null
cs.SI cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Heterogeneous Information Networks (HINs) have been widely used in recommender systems (RSs). In previous HIN-based RSs, meta-paths are used to compute the similarity between users and items. However, existing meta-path-based methods only consider first-order relations, ignoring higher-order relations among the nodes of the \textit{same} type, which are captured by \textit{motifs}. In this paper, we propose to use motifs to capture higher-order relations among nodes of the same type in a HIN and develop the motif-enhanced meta-path (MEMP) to combine motif-based higher-order relations with edge-based first-order relations. With MEMP-based similarities between users and items, we design a recommendation model, MoHINRec, and experimental results on two real-world datasets, Epinions and CiaoDVD, demonstrate its superiority over existing HIN-based RS methods.
[ { "created": "Mon, 26 Aug 2019 14:21:14 GMT", "version": "v1" } ]
2019-08-27
[ [ "Zhao", "Huan", "" ], [ "Zhou", "Yingqi", "" ], [ "Song", "Yangqiu", "" ], [ "Lee", "Dik Lun", "" ] ]
Heterogeneous Information Networks (HINs) have been widely used in recommender systems (RSs). In previous HIN-based RSs, meta-paths are used to compute the similarity between users and items. However, existing meta-path-based methods only consider first-order relations, ignoring higher-order relations among the nodes of the \textit{same} type, which are captured by \textit{motifs}. In this paper, we propose to use motifs to capture higher-order relations among nodes of the same type in a HIN and develop the motif-enhanced meta-path (MEMP) to combine motif-based higher-order relations with edge-based first-order relations. With MEMP-based similarities between users and items, we design a recommendation model, MoHINRec, and experimental results on two real-world datasets, Epinions and CiaoDVD, demonstrate its superiority over existing HIN-based RS methods.
1705.08339
Carlos Mosquera
Carlos Mosquera, Roberto Lopez-Valcarce, Vahid Joroughi
Distributed Precoding Systems in Multi-Gateway Multibeam Satellites: Regularization and Coarse Beamforming
Submitted to IEEE Transactions on Wireless Communications
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper deals with the problem of beamforming design in a multibeam satellite shared by different groups of terminals (clusters), each served by an Earth station or gateway. Each gateway precodes the symbols addressed to its respective users; the design follows an MMSE criterion, and a judiciously chosen regularization factor accounts for the presence of mutually interfering clusters, extending more classical results applicable to a single centralized station. More importantly, channel statistics can be used instead of instantaneous channel state information, avoiding the exchange of information among gateways through backhaul links. The on-board satellite beamforming weights are designed to exploit the degrees of freedom of the satellite antennas to minimize the impact of noise and the interference to some specific users. On-ground beamforming results are provided as a reference against which to compare the joint performance of the MMSE precoders and the on-board beamforming network. A non-adaptive design, obtained by designing a coarse beamforming network, complements the results and makes them more amenable to practical use.
[ { "created": "Tue, 23 May 2017 14:57:48 GMT", "version": "v1" } ]
2017-05-24
[ [ "Mosquera", "Carlos", "" ], [ "Lopez-Valcarce", "Roberto", "" ], [ "Joroughi", "Vahid", "" ] ]
This paper deals with the problem of beamforming design in a multibeam satellite shared by different groups of terminals (clusters), each served by an Earth station or gateway. Each gateway precodes the symbols addressed to its respective users; the design follows an MMSE criterion, and a judiciously chosen regularization factor accounts for the presence of mutually interfering clusters, extending more classical results applicable to a single centralized station. More importantly, channel statistics can be used instead of instantaneous channel state information, avoiding the exchange of information among gateways through backhaul links. The on-board satellite beamforming weights are designed to exploit the degrees of freedom of the satellite antennas to minimize the impact of noise and the interference to some specific users. On-ground beamforming results are provided as a reference against which to compare the joint performance of the MMSE precoders and the on-board beamforming network. A non-adaptive design, obtained by designing a coarse beamforming network, complements the results and makes them more amenable to practical use.
1303.2175
Keivan Navi
Samira Shirinabadi Farahani, Ronak Zarhoun, Mohammad Hossein Moaiyeri and Keivan Navi
An efficient CNTFET-based 7-input minority gate
null
null
null
null
cs.AR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Complementary metal-oxide-semiconductor (CMOS) technology faces critical challenges in the nanoscale regime. CNTFET (carbon nanotube field-effect transistor) technology is a promising alternative to CMOS technology. In this paper, we propose a novel 7-input minority gate in CNTFET technology that uses only 9 CNTFETs. The minority function is utilized in voting systems for decision making and is also used in data mining. The proposed 7-input minority gate uses fewer transistors than the conventional CMOS method, which requires many transistors to implement a sum of products. By means of the proposed 7-input minority gate, a 4-input NAND gate can be implemented, which improves on the conventional design in terms of delay and energy efficiency and has much higher driving power at its output.
[ { "created": "Sat, 9 Mar 2013 06:57:21 GMT", "version": "v1" } ]
2013-03-12
[ [ "Farahani", "Samira Shirinabadi", "" ], [ "Zarhoun", "Ronak", "" ], [ "Moaiyeri", "Mohammad Hossein", "" ], [ "Navi", "Keivan", "" ] ]
Complementary metal-oxide-semiconductor (CMOS) technology faces critical challenges in the nanoscale regime. CNTFET (carbon nanotube field-effect transistor) technology is a promising alternative to CMOS technology. In this paper, we propose a novel 7-input minority gate in CNTFET technology that uses only 9 CNTFETs. The minority function is utilized in voting systems for decision making and is also used in data mining. The proposed 7-input minority gate uses fewer transistors than the conventional CMOS method, which requires many transistors to implement a sum of products. By means of the proposed 7-input minority gate, a 4-input NAND gate can be implemented, which improves on the conventional design in terms of delay and energy efficiency and has much higher driving power at its output.
1909.11764
Chen Zhu
Chen Zhu, Yu Cheng, Zhe Gan, Siqi Sun, Tom Goldstein, Jingjing Liu
FreeLB: Enhanced Adversarial Training for Natural Language Understanding
Adding results with ALBERT
null
null
null
cs.CL cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Adversarial training, which minimizes the maximal risk for label-preserving input perturbations, has proved to be effective for improving the generalization of language models. In this work, we propose a novel adversarial training algorithm, FreeLB, that promotes higher invariance in the embedding space by adding adversarial perturbations to word embeddings and minimizing the resultant adversarial risk inside different regions around input samples. To validate the effectiveness of the proposed approach, we apply it to Transformer-based models for natural language understanding and commonsense reasoning tasks. Experiments on the GLUE benchmark show that, when applied only to the finetuning stage, it is able to improve the overall test score of the BERT-base model from 78.3 to 79.4, and that of the RoBERTa-large model from 88.5 to 88.8. In addition, the proposed approach achieves state-of-the-art single-model test accuracies of 85.44\% and 67.75\% on ARC-Easy and ARC-Challenge. Experiments on the CommonsenseQA benchmark further demonstrate that FreeLB can be generalized and boosts the performance of the RoBERTa-large model on other tasks as well. Code is available at \url{https://github.com/zhuchen03/FreeLB}.
[ { "created": "Wed, 25 Sep 2019 20:50:32 GMT", "version": "v1" }, { "created": "Mon, 30 Sep 2019 18:53:21 GMT", "version": "v2" }, { "created": "Sat, 5 Oct 2019 04:05:46 GMT", "version": "v3" }, { "created": "Wed, 19 Feb 2020 01:57:24 GMT", "version": "v4" }, { "created": "Thu, 23 Apr 2020 07:19:00 GMT", "version": "v5" } ]
2020-04-24
[ [ "Zhu", "Chen", "" ], [ "Cheng", "Yu", "" ], [ "Gan", "Zhe", "" ], [ "Sun", "Siqi", "" ], [ "Goldstein", "Tom", "" ], [ "Liu", "Jingjing", "" ] ]
Adversarial training, which minimizes the maximal risk for label-preserving input perturbations, has proved to be effective for improving the generalization of language models. In this work, we propose a novel adversarial training algorithm, FreeLB, that promotes higher invariance in the embedding space by adding adversarial perturbations to word embeddings and minimizing the resultant adversarial risk inside different regions around input samples. To validate the effectiveness of the proposed approach, we apply it to Transformer-based models for natural language understanding and commonsense reasoning tasks. Experiments on the GLUE benchmark show that, when applied only to the finetuning stage, it is able to improve the overall test score of the BERT-base model from 78.3 to 79.4, and that of the RoBERTa-large model from 88.5 to 88.8. In addition, the proposed approach achieves state-of-the-art single-model test accuracies of 85.44\% and 67.75\% on ARC-Easy and ARC-Challenge. Experiments on the CommonsenseQA benchmark further demonstrate that FreeLB can be generalized and boosts the performance of the RoBERTa-large model on other tasks as well. Code is available at \url{https://github.com/zhuchen03/FreeLB}.
2404.02565
Sreela Kodali
Sreela Kodali, Cihualpilli Camino Cruz, Thomas C. Bulea, Kevin S. Rao, Diana Bharucha-Goebel, Alexander T. Chesler, Carsten G. Bonnemann, Allison M. Okamura
Spatial Summation of Localized Pressure for Haptic Sensory Prostheses
2 pages, 2 figures, 2024 IEEE Haptics Symposium Work-in-Progress Paper
null
null
null
cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A host of medical conditions, including amputations, diabetes, stroke, and genetic disease, result in loss of touch sensation. Because most types of sensory loss have no pharmacological treatment or rehabilitative therapy, we propose a haptic sensory prosthesis that provides substitutive feedback. The wrist and forearm are compelling locations for feedback because of their available skin area and because stimuli there do not occlude the hands, but they have reduced mechanoreceptor density compared to the fingertips. Focusing on localized pressure as the feedback modality, we hypothesize that we can improve on prior devices by invoking a wider range of stimulus intensity using multiple points of pressure to evoke spatial summation, which is the cumulative perceptual experience from multiple points of stimuli. We conducted a preliminary perceptual test to investigate this idea and found that the just-noticeable difference is reduced with two points of pressure compared to one, motivating future work using spatial summation in sensory prostheses.
[ { "created": "Wed, 3 Apr 2024 08:37:05 GMT", "version": "v1" } ]
2024-04-04
[ [ "Kodali", "Sreela", "" ], [ "Cruz", "Cihualpilli Camino", "" ], [ "Bulea", "Thomas C.", "" ], [ "Rao", "Kevin S.", "" ], [ "Bharucha-Goebel", "Diana", "" ], [ "Chesler", "Alexander T.", "" ], [ "Bonnemann", "Carsten G.", "" ], [ "Okamura", "Allison M.", "" ] ]
A host of medical conditions, including amputations, diabetes, stroke, and genetic disease, result in loss of touch sensation. Because most types of sensory loss have no pharmacological treatment or rehabilitative therapy, we propose a haptic sensory prosthesis that provides substitutive feedback. The wrist and forearm are compelling locations for feedback because of their available skin area and because stimuli there do not occlude the hands, but they have reduced mechanoreceptor density compared to the fingertips. Focusing on localized pressure as the feedback modality, we hypothesize that we can improve on prior devices by invoking a wider range of stimulus intensity using multiple points of pressure to evoke spatial summation, which is the cumulative perceptual experience from multiple points of stimuli. We conducted a preliminary perceptual test to investigate this idea and found that the just-noticeable difference is reduced with two points of pressure compared to one, motivating future work using spatial summation in sensory prostheses.
1909.07166
Smriti Prathapan
Kaushik Velusamy, Smriti Prathapan, Milton Halem
Exploring the Behavior of Coherent Accelerator Processor Interface (CAPI) on IBM Power8+ Architecture and FlashSystem 900
18 pages, 7 figures, 3 tables, Accepted for publication at 2019 International Workshop on OpenPOWER for HPC (IWOPH19) International Supercomputing Conference HPC Frankfurt, Germany
null
null
null
cs.DC
http://creativecommons.org/licenses/by/4.0/
The Coherent Accelerator Processor Interface (CAPI) is a general term for the infrastructure that provides a high-throughput, low-latency path to the flash storage connected to the IBM POWER8+ system. The CAPI accelerator card is attached coherently as a peer to the POWER8+ processor. This removes the overhead and complexity of the IO subsystem and allows the accelerator to operate as part of an application. In this paper, we present the results of experiments on IBM FlashSystem900 (FS900) with the CAPI accelerator card using the "CAPI-Flash IBM Data Engine for NoSQL Software" library. This library provides the application direct access to the underlying flash storage through user-space APIs to manage and access the data in flash. This offloads kernel IO driver functionality to dedicated CAPI FPGA accelerator hardware. We conducted experiments to analyze the performance of FS900 with the CAPI accelerator card, using the Key Value Layer APIs and employing NASA's MODIS Land Surface Reflectance dataset as a large-dataset use case. We performed read and write operations on datasets ranging in size from 1MB to 3TB while varying the number of threads. We then compared this performance with that of other heterogeneous storage and memory devices, such as NVM, SSD, and RAM, without the CAPI accelerator, in synchronous and asynchronous file IO modes of operation. The results indicate that FS900 with CAPI, together with the metadata cache in RAM, delivers the highest IO/s and OP/s for read operations. This was higher than using RAM alone, while utilizing fewer CPU resources. Among FS900, SSD, and NVM, FS900 had the highest write IO/s. Another important observation is that when the size of the input dataset exceeds the capacity of RAM, and when the data access is non-uniform and sparse, FS900 with CAPI would be a cost-effective alternative.
[ { "created": "Thu, 12 Sep 2019 15:45:37 GMT", "version": "v1" } ]
2019-09-17
[ [ "Velusamy", "Kaushik", "" ], [ "Prathapan", "Smriti", "" ], [ "Halem", "Milton", "" ] ]
The Coherent Accelerator Processor Interface (CAPI) is a general term for the infrastructure that provides a high-throughput, low-latency path to the flash storage connected to the IBM POWER8+ system. The CAPI accelerator card is attached coherently as a peer to the POWER8+ processor. This removes the overhead and complexity of the IO subsystem and allows the accelerator to operate as part of an application. In this paper, we present the results of experiments on IBM FlashSystem900 (FS900) with the CAPI accelerator card using the "CAPI-Flash IBM Data Engine for NoSQL Software" library. This library provides the application direct access to the underlying flash storage through user-space APIs to manage and access the data in flash. This offloads kernel IO driver functionality to dedicated CAPI FPGA accelerator hardware. We conducted experiments to analyze the performance of FS900 with the CAPI accelerator card, using the Key Value Layer APIs and employing NASA's MODIS Land Surface Reflectance dataset as a large-dataset use case. We performed read and write operations on datasets ranging in size from 1MB to 3TB while varying the number of threads. We then compared this performance with that of other heterogeneous storage and memory devices, such as NVM, SSD, and RAM, without the CAPI accelerator, in synchronous and asynchronous file IO modes of operation. The results indicate that FS900 with CAPI, together with the metadata cache in RAM, delivers the highest IO/s and OP/s for read operations. This was higher than using RAM alone, while utilizing fewer CPU resources. Among FS900, SSD, and NVM, FS900 had the highest write IO/s. Another important observation is that when the size of the input dataset exceeds the capacity of RAM, and when the data access is non-uniform and sparse, FS900 with CAPI would be a cost-effective alternative.
1801.04473
Regis Perrier
Iulia Tunaru and Beno\^it Denis and R\'egis Perrier and Bernard Uguen
Channel Whispering: a Protocol for Physical Layer Group Key Generation. Application to IR-UWB through Deconvolution
21 pages
null
null
null
cs.CR cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
As wireless ad hoc and mobile networks are emerging and the transferred data become more sensitive, information security measures should make use of all the available contextual resources to secure information flows. The physical layer security framework provides models, algorithms, and proofs of concept for generating pairwise symmetric keys over single links between two nodes within communication range. In this study, we focus on cooperative group key generation over multiple Impulse Radio - Ultra Wideband (IR-UWB) channels according to the source model. The main idea, proposed in previous work, consists in generating receiver-specific signals, also called s-signals, so that only the intended receiver has access to the non-observable channels corresponding to its non-adjacent links. Herein, we complete the analysis of the proposed protocol and investigate several signal processing algorithms to generate the s-signal expressed as a solution to a deconvolution problem in the case of IR-UWB. Our findings indicate that it is compulsory to add a parameterizable constraint to the searched s-signal and that the Expectation-Maximization algorithm can provide a stable self-parameterizable solution. Compared to physical layer key distribution methods, the proposed key generation protocol requires less traffic overhead for small cooperative groups while being robust at medium and high signal-to-noise ratios.
[ { "created": "Sat, 13 Jan 2018 18:26:45 GMT", "version": "v1" } ]
2018-01-16
[ [ "Tunaru", "Iulia", "" ], [ "Denis", "Benoît", "" ], [ "Perrier", "Régis", "" ], [ "Uguen", "Bernard", "" ] ]
As wireless ad hoc and mobile networks are emerging and the transferred data become more sensitive, information security measures should make use of all the available contextual resources to secure information flows. The physical layer security framework provides models, algorithms, and proofs of concept for generating pairwise symmetric keys over single links between two nodes within communication range. In this study, we focus on cooperative group key generation over multiple Impulse Radio - Ultra Wideband (IR-UWB) channels according to the source model. The main idea, proposed in previous work, consists in generating receiver-specific signals, also called s-signals, so that only the intended receiver has access to the non-observable channels corresponding to its non-adjacent links. Herein, we complete the analysis of the proposed protocol and investigate several signal processing algorithms to generate the s-signal expressed as a solution to a deconvolution problem in the case of IR-UWB. Our findings indicate that it is compulsory to add a parameterizable constraint to the searched s-signal and that the Expectation-Maximization algorithm can provide a stable self-parameterizable solution. Compared to physical layer key distribution methods, the proposed key generation protocol requires less traffic overhead for small cooperative groups while being robust at medium and high signal-to-noise ratios.
2207.04647
Zvi Schreiber
Zvi Schreiber
Epi-constructivism: Decidable sets of computable numbers as foundational objects for mathematics
null
null
null
null
cs.LO math.LO
http://creativecommons.org/licenses/by-nc-nd/4.0/
It is well known that R, the set of real numbers, is an abstract set, where almost all of its elements cannot be described in any finite language. We investigate possible approaches to what might be called an epi-constructionist approach to mathematics. While most constructive mathematics is concerned with constructive proofs, the agenda here is that the objects that we study, specifically the class of numbers that we study, should be an enumerable set of finite symbol strings. These might also be called decidable constructive real numbers; that is, our class of numbers should be a computable set of explicitly represented computable numbers. There have been various investigations of the computable numbers going back to Turing. Most, however, are not expressed constructively; rather, computability is a property assigned to some of the abstract real numbers. Other definitions define constructive real numbers without reference to the abstract R, but the construction is undecidable, i.e., we cannot determine whether a given construction represents a computable real number or not. For example, we may define a real as a computable convergent sequence of rationals, but we cannot in general decide whether a given computable sequence is convergent. This paper explores several specific classes of decidable constructive real numbers that could form foundational objects for what we might call an epi-constructionist mathematics.
[ { "created": "Mon, 11 Jul 2022 06:18:24 GMT", "version": "v1" } ]
2022-07-12
[ [ "Schreiber", "Zvi", "" ] ]
It is well known that R, the set of real numbers, is an abstract set, where almost all of its elements cannot be described in any finite language. We investigate possible approaches to what might be called an epi-constructionist approach to mathematics. While most constructive mathematics is concerned with constructive proofs, the agenda here is that the objects that we study, specifically the class of numbers that we study, should be an enumerable set of finite symbol strings. These might also be called decidable constructive real numbers; that is, our class of numbers should be a computable set of explicitly represented computable numbers. There have been various investigations of the computable numbers going back to Turing. Most, however, are not expressed constructively; rather, computability is a property assigned to some of the abstract real numbers. Other definitions define constructive real numbers without reference to the abstract R, but the construction is undecidable, i.e., we cannot determine whether a given construction represents a computable real number or not. For example, we may define a real as a computable convergent sequence of rationals, but we cannot in general decide whether a given computable sequence is convergent. This paper explores several specific classes of decidable constructive real numbers that could form foundational objects for what we might call an epi-constructionist mathematics.
2105.00173
Daniel Szelogowski
Daniel Szelogowski
Emotion Recognition of the Singing Voice: Toward a Real-Time Analysis Tool for Singers
26 pages, 10 figures, 6 tables
null
null
null
cs.SD cs.AI cs.CY cs.LG cs.NE eess.AS
http://creativecommons.org/licenses/by/4.0/
Current computational-emotion research has focused on applying acoustic properties to analyze how emotions are perceived mathematically or used in natural language processing machine learning models. While recent interest has focused on analyzing emotions from the spoken voice, little experimentation has been performed to discover how emotions are recognized in the singing voice -- both in noiseless and noisy data (i.e., data that is either inaccurate, difficult to interpret, has corrupted/distorted/nonsense information like actual noise sounds in this case, or has a low ratio of usable/unusable information). Not only does this ignore the challenges of training machine learning models on more subjective data and testing them with much noisier data, but there is also a clear disconnect in progress between advancing the development of convolutional neural networks and the goal of emotionally cognizant artificial intelligence. By training a new model to include this type of information with a rich comprehension of psycho-acoustic properties, not only can models be trained to recognize information within extremely noisy data, but advancement can be made toward more complex biofeedback applications -- including creating a model which could recognize emotions given any human information (language, breath, voice, body, posture) and be used in any performance medium (music, speech, acting) or psychological assistance for patients with disorders such as BPD, alexithymia, autism, among others. This paper seeks to reflect and expand upon the findings of related research and present a stepping-stone toward this end goal.
[ { "created": "Sat, 1 May 2021 05:47:15 GMT", "version": "v1" }, { "created": "Sun, 4 Jul 2021 07:34:14 GMT", "version": "v2" } ]
2021-07-06
[ [ "Szelogowski", "Daniel", "" ] ]
Current computational-emotion research has focused on applying acoustic properties to analyze how emotions are perceived mathematically or used in natural language processing machine learning models. While recent interest has focused on analyzing emotions from the spoken voice, little experimentation has been performed to discover how emotions are recognized in the singing voice -- both in noiseless and noisy data (i.e., data that is either inaccurate, difficult to interpret, has corrupted/distorted/nonsense information like actual noise sounds in this case, or has a low ratio of usable/unusable information). Not only does this ignore the challenges of training machine learning models on more subjective data and testing them with much noisier data, but there is also a clear disconnect in progress between advancing the development of convolutional neural networks and the goal of emotionally cognizant artificial intelligence. By training a new model to include this type of information with a rich comprehension of psycho-acoustic properties, not only can models be trained to recognize information within extremely noisy data, but advancement can be made toward more complex biofeedback applications -- including creating a model which could recognize emotions given any human information (language, breath, voice, body, posture) and be used in any performance medium (music, speech, acting) or psychological assistance for patients with disorders such as BPD, alexithymia, autism, among others. This paper seeks to reflect and expand upon the findings of related research and present a stepping-stone toward this end goal.
2407.13648
Skyler Grandel
Skyler Grandel (1), Scott Thomas Andersen (2), Yu Huang (1), Kevin Leach (1) ((1) Vanderbilt University, (2) Universidad Nacional Aut\'onoma de M\'exico)
COMCAT: Leveraging Human Judgment to Improve Automatic Documentation and Summarization
12 pages, 6 figures
null
null
null
cs.SE
http://creativecommons.org/licenses/by/4.0/
Software maintenance constitutes a substantial portion of the total lifetime costs of software, with a significant portion attributed to code comprehension. Software comprehension is eased by documentation such as comments that summarize and explain code. We present COMCAT, an approach to automate comment generation by augmenting Large Language Models (LLMs) with expertise-guided context to target the annotation of source code with comments that improve comprehension. Our approach enables the selection of the most relevant and informative comments for a given snippet or file containing source code. We develop the COMCAT pipeline to comment C/C++ files by (1) automatically identifying suitable locations in which to place comments, (2) predicting the most helpful type of comment for each location, and (3) generating a comment based on the selected location and comment type. In a human subject evaluation, we demonstrate that COMCAT-generated comments significantly improve developer code comprehension across three indicative software engineering tasks by up to 12% for 87% of participants. In addition, we demonstrate that COMCAT-generated comments are at least as accurate and readable as human-generated comments and are preferred over standard ChatGPT-generated comments for up to 92% of snippets of code. Furthermore, we develop and release a dataset containing source code snippets, human-written comments, and human-annotated comment categories. COMCAT leverages LLMs to offer a significant improvement in code comprehension across a variety of human software engineering tasks.
[ { "created": "Thu, 18 Jul 2024 16:26:31 GMT", "version": "v1" } ]
2024-07-19
[ [ "Grandel", "Skyler", "" ], [ "Andersen", "Scott Thomas", "" ], [ "Huang", "Yu", "" ], [ "Leach", "Kevin", "" ] ]
Software maintenance constitutes a substantial portion of the total lifetime costs of software, with a significant portion attributed to code comprehension. Software comprehension is eased by documentation such as comments that summarize and explain code. We present COMCAT, an approach to automate comment generation by augmenting Large Language Models (LLMs) with expertise-guided context to target the annotation of source code with comments that improve comprehension. Our approach enables the selection of the most relevant and informative comments for a given snippet or file containing source code. We develop the COMCAT pipeline to comment C/C++ files by (1) automatically identifying suitable locations in which to place comments, (2) predicting the most helpful type of comment for each location, and (3) generating a comment based on the selected location and comment type. In a human subject evaluation, we demonstrate that COMCAT-generated comments significantly improve developer code comprehension across three indicative software engineering tasks by up to 12% for 87% of participants. In addition, we demonstrate that COMCAT-generated comments are at least as accurate and readable as human-generated comments and are preferred over standard ChatGPT-generated comments for up to 92% of snippets of code. Furthermore, we develop and release a dataset containing source code snippets, human-written comments, and human-annotated comment categories. COMCAT leverages LLMs to offer a significant improvement in code comprehension across a variety of human software engineering tasks.
2007.02984
Nitzan Zamir
Nitzan Zamir and Yoram Moses
Probably Approximately Knowing
23 pages, 2 figures, a full version of a paper whose extended abstract appears in the proceeding of PODC 2020
null
null
null
cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Whereas deterministic protocols are typically guaranteed to obtain particular goals of interest, probabilistic protocols typically provide only probabilistic guarantees. This paper initiates an investigation of the interdependence between actions and subjective beliefs of agents in a probabilistic setting. In particular, we study what probabilistic beliefs an agent should have when performing actions, in a protocol that satisfies a probabilistic constraint of the form: 'Condition C should hold with probability at least p when action a is performed'. Our main result is that the expected degree of an agent's belief in C when it performs a equals the probability that C holds when a is performed. Indeed, if the threshold of the probabilistic constraint should hold with probability p=1-x^2 for some small value of x, then, with probability 1-x, when the agent acts it will assign a probabilistic belief no smaller than 1-x to the possibility that C holds. In other words, viewing strong belief as, intuitively, approximate knowledge, the agent must probably approximately know (PAK-know) that C is true when it acts.
[ { "created": "Mon, 6 Jul 2020 18:12:41 GMT", "version": "v1" } ]
2020-07-08
[ [ "Zamir", "Nitzan", "" ], [ "Moses", "Yoram", "" ] ]
Whereas deterministic protocols are typically guaranteed to obtain particular goals of interest, probabilistic protocols typically provide only probabilistic guarantees. This paper initiates an investigation of the interdependence between actions and subjective beliefs of agents in a probabilistic setting. In particular, we study what probabilistic beliefs an agent should have when performing actions, in a protocol that satisfies a probabilistic constraint of the form: 'Condition C should hold with probability at least p when action a is performed'. Our main result is that the expected degree of an agent's belief in C when it performs a equals the probability that C holds when a is performed. Indeed, if the threshold of the probabilistic constraint should hold with probability p=1-x^2 for some small value of x, then, with probability 1-x, when the agent acts it will assign a probabilistic belief no smaller than 1-x to the possibility that C holds. In other words, viewing strong belief as, intuitively, approximate knowledge, the agent must probably approximately know (PAK-know) that C is true when it acts.
1510.04817
Javier Alvez
Javier \'Alvez and Paqui Lucio and German Rigau
Improving the Competency of First-Order Ontologies
8 pages, 2 tables
Proceedings of the 8th International Conference on Knowledge Capture (K-CAP 2015). Palisades, NY. 2015
10.1145/2815833.2815841
null
cs.AI cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce a new framework to evaluate and improve first-order (FO) ontologies using automated theorem provers (ATPs) on the basis of competency questions (CQs). Our framework includes both the adaptation of a methodology for evaluating ontologies to the framework of first-order logic and a new set of non-trivial CQs designed to evaluate FO versions of SUMO, which significantly extends the very small set of CQs proposed in the literature. Most of these new CQs have been automatically generated from a small set of patterns and the mapping of WordNet to SUMO. Applying our framework, we demonstrate that Adimen-SUMO v2.2 outperforms TPTP-SUMO. In addition, using the feedback provided by ATPs, we have developed an improved version of Adimen-SUMO (v2.4). This new version outperforms the previous ones in terms of competency. For instance, "Humans can reason" is automatically inferred from Adimen-SUMO v2.4, while it is neither deducible from TPTP-SUMO nor Adimen-SUMO v2.2.
[ { "created": "Fri, 16 Oct 2015 09:01:35 GMT", "version": "v1" } ]
2015-10-19
[ [ "Álvez", "Javier", "" ], [ "Lucio", "Paqui", "" ], [ "Rigau", "German", "" ] ]
We introduce a new framework to evaluate and improve first-order (FO) ontologies using automated theorem provers (ATPs) on the basis of competency questions (CQs). Our framework includes both the adaptation of a methodology for evaluating ontologies to the framework of first-order logic and a new set of non-trivial CQs designed to evaluate FO versions of SUMO, which significantly extends the very small set of CQs proposed in the literature. Most of these new CQs have been automatically generated from a small set of patterns and the mapping of WordNet to SUMO. Applying our framework, we demonstrate that Adimen-SUMO v2.2 outperforms TPTP-SUMO. In addition, using the feedback provided by ATPs, we have developed an improved version of Adimen-SUMO (v2.4). This new version outperforms the previous ones in terms of competency. For instance, "Humans can reason" is automatically inferred from Adimen-SUMO v2.4, while it is neither deducible from TPTP-SUMO nor Adimen-SUMO v2.2.
2304.01950
Yu Qiao
Yu Qiao, Md. Shirajum Munir, Apurba Adhikary, Huy Q. Le, Avi Deb Raha, Chaoning Zhang, Choong Seon Hong
MP-FedCL: Multiprototype Federated Contrastive Learning for Edge Intelligence
Accepted by IEEE Internet of Things
null
null
null
cs.LG cs.AI cs.CV cs.DC
http://creativecommons.org/licenses/by-nc-nd/4.0/
Federated learning-assisted edge intelligence enables privacy protection in modern intelligent services. However, non-independent and identically distributed (non-IID) data distributions among edge clients can impair local model performance. The existing single-prototype-based strategy represents a class by using the mean of the feature space. However, feature spaces are usually not clustered, and a single prototype may not represent a class well. Motivated by this, this paper proposes a multi-prototype federated contrastive learning approach (MP-FedCL) which demonstrates the effectiveness of using a multi-prototype strategy over a single-prototype strategy under non-IID settings, including both label and feature skewness. Specifically, a multi-prototype computation strategy based on \textit{k-means} is first proposed to capture different embedding representations for each class space, using multiple prototypes ($k$ centroids) to represent a class in the embedding space. In each global round, the computed multiple prototypes and their respective model parameters are sent to the edge server for aggregation into a global prototype pool, which is then sent back to all clients to guide their local training. Finally, local training for each client minimizes their own supervised learning tasks and learns from shared prototypes in the global prototype pool through supervised contrastive learning, which encourages them to learn knowledge related to their own class from others and reduces the absorption of unrelated knowledge in each global iteration. Experimental results on MNIST, Digit-5, Office-10, and DomainNet show that our method outperforms multiple baselines, with an average test accuracy improvement of about 4.6\% and 10.4\% under feature and label non-IID distributions, respectively.
[ { "created": "Sat, 1 Apr 2023 09:16:40 GMT", "version": "v1" }, { "created": "Wed, 11 Oct 2023 14:21:29 GMT", "version": "v2" } ]
2023-10-12
[ [ "Qiao", "Yu", "" ], [ "Munir", "Md. Shirajum", "" ], [ "Adhikary", "Apurba", "" ], [ "Le", "Huy Q.", "" ], [ "Raha", "Avi Deb", "" ], [ "Zhang", "Chaoning", "" ], [ "Hong", "Choong Seon", "" ] ]
Federated learning-assisted edge intelligence enables privacy protection in modern intelligent services. However, non-independent and identically distributed (non-IID) data distributions among edge clients can impair local model performance. The existing single-prototype-based strategy represents a class by using the mean of the feature space. However, feature spaces are usually not clustered, and a single prototype may not represent a class well. Motivated by this, this paper proposes a multi-prototype federated contrastive learning approach (MP-FedCL) which demonstrates the effectiveness of using a multi-prototype strategy over a single-prototype strategy under non-IID settings, including both label and feature skewness. Specifically, a multi-prototype computation strategy based on \textit{k-means} is first proposed to capture different embedding representations for each class space, using multiple prototypes ($k$ centroids) to represent a class in the embedding space. In each global round, the computed multiple prototypes and their respective model parameters are sent to the edge server for aggregation into a global prototype pool, which is then sent back to all clients to guide their local training. Finally, local training for each client minimizes their own supervised learning tasks and learns from shared prototypes in the global prototype pool through supervised contrastive learning, which encourages them to learn knowledge related to their own class from others and reduces the absorption of unrelated knowledge in each global iteration. Experimental results on MNIST, Digit-5, Office-10, and DomainNet show that our method outperforms multiple baselines, with an average test accuracy improvement of about 4.6\% and 10.4\% under feature and label non-IID distributions, respectively.
1811.08521
Scott Wisdom
Scott Wisdom, John R. Hershey, Kevin Wilson, Jeremy Thorpe, Michael Chinen, Brian Patton, Rif A. Saurous
Differentiable Consistency Constraints for Improved Deep Speech Enhancement
null
null
null
null
cs.SD eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In recent years, deep networks have led to dramatic improvements in speech enhancement by framing it as a data-driven pattern recognition problem. In many modern enhancement systems, large amounts of data are used to train a deep network to estimate masks for complex-valued short-time Fourier transforms (STFTs) to suppress noise and preserve speech. However, current masking approaches often neglect two important constraints: STFT consistency and mixture consistency. Without STFT consistency, the system's output is not necessarily the STFT of a time-domain signal, and without mixture consistency, the sum of the estimated sources does not necessarily equal the input mixture. Furthermore, the only previous approaches that apply mixture consistency use real-valued masks; mixture consistency has been ignored for complex-valued masks. In this paper, we show that STFT consistency and mixture consistency can be jointly imposed by adding simple differentiable projection layers to the enhancement network. These layers are compatible with real or complex-valued masks. Using both of these constraints with complex-valued masks provides a 0.7 dB increase in scale-invariant signal-to-distortion ratio (SI-SDR) on a large dataset of speech corrupted by a wide variety of nonstationary noise across a range of input SNRs.
[ { "created": "Tue, 20 Nov 2018 22:44:12 GMT", "version": "v1" } ]
2018-11-22
[ [ "Wisdom", "Scott", "" ], [ "Hershey", "John R.", "" ], [ "Wilson", "Kevin", "" ], [ "Thorpe", "Jeremy", "" ], [ "Chinen", "Michael", "" ], [ "Patton", "Brian", "" ], [ "Saurous", "Rif A.", "" ] ]
In recent years, deep networks have led to dramatic improvements in speech enhancement by framing it as a data-driven pattern recognition problem. In many modern enhancement systems, large amounts of data are used to train a deep network to estimate masks for complex-valued short-time Fourier transforms (STFTs) to suppress noise and preserve speech. However, current masking approaches often neglect two important constraints: STFT consistency and mixture consistency. Without STFT consistency, the system's output is not necessarily the STFT of a time-domain signal, and without mixture consistency, the sum of the estimated sources does not necessarily equal the input mixture. Furthermore, the only previous approaches that apply mixture consistency use real-valued masks; mixture consistency has been ignored for complex-valued masks. In this paper, we show that STFT consistency and mixture consistency can be jointly imposed by adding simple differentiable projection layers to the enhancement network. These layers are compatible with real or complex-valued masks. Using both of these constraints with complex-valued masks provides a 0.7 dB increase in scale-invariant signal-to-distortion ratio (SI-SDR) on a large dataset of speech corrupted by a wide variety of nonstationary noise across a range of input SNRs.
2404.08181
Sina Hajimiri
Sina Hajimiri, Ismail Ben Ayed, Jose Dolz
Pay Attention to Your Neighbours: Training-Free Open-Vocabulary Semantic Segmentation
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Despite the significant progress in deep learning for dense visual recognition problems, such as semantic segmentation, traditional methods are constrained by fixed class sets. Meanwhile, vision-language foundation models, such as CLIP, have showcased remarkable effectiveness in numerous zero-shot image-level tasks, owing to their robust generalizability. Recently, a body of work has investigated utilizing these models in open-vocabulary semantic segmentation (OVSS). However, existing approaches often rely on impractical supervised pre-training or access to additional pre-trained networks. In this work, we propose a strong baseline for training-free OVSS, termed Neighbour-Aware CLIP (NACLIP), representing a straightforward adaptation of CLIP tailored for this scenario. Our method enforces localization of patches in the self-attention of CLIP's vision transformer which, despite being crucial for dense prediction tasks, has been overlooked in the OVSS literature. By incorporating design choices favouring segmentation, our approach significantly improves performance without requiring additional data, auxiliary pre-trained networks, or extensive hyperparameter tuning, making it highly practical for real-world applications. Experiments are performed on 8 popular semantic segmentation benchmarks, yielding state-of-the-art performance on most scenarios. Our code is publicly available at https://github.com/sinahmr/NACLIP .
[ { "created": "Fri, 12 Apr 2024 01:08:04 GMT", "version": "v1" } ]
2024-04-15
[ [ "Hajimiri", "Sina", "" ], [ "Ayed", "Ismail Ben", "" ], [ "Dolz", "Jose", "" ] ]
Despite the significant progress in deep learning for dense visual recognition problems, such as semantic segmentation, traditional methods are constrained by fixed class sets. Meanwhile, vision-language foundation models, such as CLIP, have showcased remarkable effectiveness in numerous zero-shot image-level tasks, owing to their robust generalizability. Recently, a body of work has investigated utilizing these models in open-vocabulary semantic segmentation (OVSS). However, existing approaches often rely on impractical supervised pre-training or access to additional pre-trained networks. In this work, we propose a strong baseline for training-free OVSS, termed Neighbour-Aware CLIP (NACLIP), representing a straightforward adaptation of CLIP tailored for this scenario. Our method enforces localization of patches in the self-attention of CLIP's vision transformer which, despite being crucial for dense prediction tasks, has been overlooked in the OVSS literature. By incorporating design choices favouring segmentation, our approach significantly improves performance without requiring additional data, auxiliary pre-trained networks, or extensive hyperparameter tuning, making it highly practical for real-world applications. Experiments are performed on 8 popular semantic segmentation benchmarks, yielding state-of-the-art performance on most scenarios. Our code is publicly available at https://github.com/sinahmr/NACLIP .
1708.07549
Moi Hoon Yap
Adrian K. Davison, Walied Merghani and Moi Hoon Yap
Objective Classes for Micro-Facial Expression Recognition
11 pages, 4 figures and 5 tables. This paper will be submitted for journal review
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Micro-expressions are brief spontaneous facial expressions that appear on a face when a person conceals an emotion, making them different to normal facial expressions in subtlety and duration. Currently, emotion classes within the CASME II dataset are based on Action Units and self-reports, creating conflicts during machine learning training. We will show that classifying expressions using Action Units, instead of predicted emotion, removes the potential bias of human reporting. The proposed classes are tested using LBP-TOP, HOOF and HOG 3D feature descriptors. The experiments are evaluated on two benchmark FACS coded datasets: CASME II and SAMM. The best result achieves 86.35\% accuracy when classifying the proposed 5 classes on CASME II using HOG 3D, outperforming the result of the state-of-the-art 5-class emotional-based classification in CASME II. Results indicate that classification based on Action Units provides an objective method to improve micro-expression recognition.
[ { "created": "Thu, 24 Aug 2017 20:37:10 GMT", "version": "v1" }, { "created": "Sun, 3 Dec 2017 06:12:57 GMT", "version": "v2" } ]
2017-12-05
[ [ "Davison", "Adrian K.", "" ], [ "Merghani", "Walied", "" ], [ "Yap", "Moi Hoon", "" ] ]
Micro-expressions are brief spontaneous facial expressions that appear on a face when a person conceals an emotion, making them different to normal facial expressions in subtlety and duration. Currently, emotion classes within the CASME II dataset are based on Action Units and self-reports, creating conflicts during machine learning training. We will show that classifying expressions using Action Units, instead of predicted emotion, removes the potential bias of human reporting. The proposed classes are tested using LBP-TOP, HOOF and HOG 3D feature descriptors. The experiments are evaluated on two benchmark FACS coded datasets: CASME II and SAMM. The best result achieves 86.35\% accuracy when classifying the proposed 5 classes on CASME II using HOG 3D, outperforming the result of the state-of-the-art 5-class emotional-based classification in CASME II. Results indicate that classification based on Action Units provides an objective method to improve micro-expression recognition.
2001.04552
Luca Puglia
Luca Puglia and Cormac Brick
Deep Learning Stereo Vision at the edge
null
null
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present an overview of the methodology used to build a new stereo vision solution that is suitable for System on Chip. This new solution was developed to bring computer vision capability to embedded devices that live in a power constrained environment. The solution is constructed as a hybrid between classical Stereo Vision techniques and deep learning approaches. The stereoscopic module is composed of two separate modules: one that accelerates the neural network we trained and one that accelerates the front-end part. The system is completely passive and does not require any structured light to obtain very compelling accuracy. Compared with previous Stereo Vision solutions offered by industry, we offer a major improvement: robustness to noise. This is mainly possible due to the deep learning part of the chosen architecture. We submitted our result to the Middlebury dataset challenge. It currently ranks as the best System on Chip solution. The system has been developed for low latency applications which require better-than-real-time performance on high definition videos.
[ { "created": "Mon, 13 Jan 2020 22:30:41 GMT", "version": "v1" } ]
2020-01-15
[ [ "Puglia", "Luca", "" ], [ "Brick", "Cormac", "" ] ]
We present an overview of the methodology used to build a new stereo vision solution that is suitable for System on Chip. This new solution was developed to bring computer vision capability to embedded devices that live in a power constrained environment. The solution is constructed as a hybrid between classical Stereo Vision techniques and deep learning approaches. The stereoscopic module is composed of two separate modules: one that accelerates the neural network we trained and one that accelerates the front-end part. The system is completely passive and does not require any structured light to obtain very compelling accuracy. Compared with previous Stereo Vision solutions offered by industry, we offer a major improvement: robustness to noise. This is mainly possible due to the deep learning part of the chosen architecture. We submitted our result to the Middlebury dataset challenge. It currently ranks as the best System on Chip solution. The system has been developed for low latency applications which require better-than-real-time performance on high definition videos.
1411.0710
Eric Bax
Valeria Stourm and Eric Bax
Incorporating Hidden Costs of Annoying Ads in Display Auctions
null
null
null
null
cs.GT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Media publisher platforms often face an effectiveness-nuisance tradeoff: more annoying ads can be more effective for some advertisers because of their ability to attract attention, but after attracting viewers' attention, their nuisance to viewers can decrease engagement with the platform over time. With the rise of mobile technology and ad blockers, many platforms are becoming increasingly concerned about how to improve monetization through digital ads while improving viewer experience. We study an online ad auction mechanism that incorporates a charge for ad impact on user experience as a criterion for ad selection and pricing. Like a Pigovian tax, the charge causes advertisers to internalize the hidden cost of foregone future platform revenue due to ad impact on user experience. Over time, the mechanism provides an incentive for advertisers to develop ads that are effective while offering viewers a more pleasant experience. We show that adopting the mechanism can simultaneously benefit the publisher, advertisers, and viewers, even in the short term. Incorporating a charge for ad impact can increase expected advertiser profits if enough advertisers compete. A stronger effectiveness-nuisance tradeoff, meaning that ad effectiveness is more strongly associated with negative impact on user experience, increases the amount of competition required for the mechanism to benefit advertisers. The findings suggest that the mechanism can benefit the marketplace for ad slots that consistently attract many advertisers.
[ { "created": "Mon, 3 Nov 2014 21:38:55 GMT", "version": "v1" }, { "created": "Thu, 26 Jan 2017 06:44:28 GMT", "version": "v2" } ]
2017-01-27
[ [ "Stourm", "Valeria", "" ], [ "Bax", "Eric", "" ] ]
Media publisher platforms often face an effectiveness-nuisance tradeoff: more annoying ads can be more effective for some advertisers because of their ability to attract attention, but after attracting viewers' attention, their nuisance to viewers can decrease engagement with the platform over time. With the rise of mobile technology and ad blockers, many platforms are becoming increasingly concerned about how to improve monetization through digital ads while improving viewer experience. We study an online ad auction mechanism that incorporates a charge for ad impact on user experience as a criterion for ad selection and pricing. Like a Pigovian tax, the charge causes advertisers to internalize the hidden cost of foregone future platform revenue due to ad impact on user experience. Over time, the mechanism provides an incentive for advertisers to develop ads that are effective while offering viewers a more pleasant experience. We show that adopting the mechanism can simultaneously benefit the publisher, advertisers, and viewers, even in the short term. Incorporating a charge for ad impact can increase expected advertiser profits if enough advertisers compete. A stronger effectiveness-nuisance tradeoff, meaning that ad effectiveness is more strongly associated with negative impact on user experience, increases the amount of competition required for the mechanism to benefit advertisers. The findings suggest that the mechanism can benefit the marketplace for ad slots that consistently attract many advertisers.
2312.15869
Piji Li
Ruoqing Zhao, Xi Wang, Hongliang Dai, Pan Gao, Piji Li
Medical Report Generation based on Segment-Enhanced Contrastive Representation Learning
NLPCC 2023
null
null
null
cs.CL cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Automated radiology report generation has the potential to improve radiology reporting and alleviate the workload of radiologists. However, the medical report generation task poses unique challenges due to the limited availability of medical data and the presence of data bias. To maximize the utility of available data and reduce data bias, we propose MSCL (Medical image Segmentation with Contrastive Learning), a framework that utilizes the Segment Anything Model (SAM) to segment organs, abnormalities, bones, etc., and can pay more attention to the meaningful ROIs in the image to get better visual representations. Then we introduce a supervised contrastive loss that assigns more weight to reports that are semantically similar to the target while training. The design of this loss function aims to mitigate the impact of data bias and encourage the model to capture the essential features of a medical image and generate high-quality reports. Experimental results demonstrate the effectiveness of our proposed model, where we achieve state-of-the-art performance on the IU X-Ray public dataset.
[ { "created": "Tue, 26 Dec 2023 03:33:48 GMT", "version": "v1" } ]
2023-12-27
[ [ "Zhao", "Ruoqing", "" ], [ "Wang", "Xi", "" ], [ "Dai", "Hongliang", "" ], [ "Gao", "Pan", "" ], [ "Li", "Piji", "" ] ]
Automated radiology report generation has the potential to improve radiology reporting and alleviate the workload of radiologists. However, the medical report generation task poses unique challenges due to the limited availability of medical data and the presence of data bias. To maximize the utility of available data and reduce data bias, we propose MSCL (Medical image Segmentation with Contrastive Learning), a framework that utilizes the Segment Anything Model (SAM) to segment organs, abnormalities, bones, etc., and can pay more attention to the meaningful ROIs in the image to get better visual representations. Then we introduce a supervised contrastive loss that assigns more weight to reports that are semantically similar to the target while training. The design of this loss function aims to mitigate the impact of data bias and encourage the model to capture the essential features of a medical image and generate high-quality reports. Experimental results demonstrate the effectiveness of our proposed model, where we achieve state-of-the-art performance on the IU X-Ray public dataset.
1601.06704
Nikita Polyanskii
A. G. D'yachkov, I.V. Vorobyev, N.A. Polyanskii and V.Yu. Shchukin
On a Hypergraph Approach to Multistage Group Testing Problems
5 pages, IEEE conference
null
10.1109/ISIT.2016.7541486
null
cs.IT math.CO math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Group testing is a well-known search problem that consists in detecting up to $s$ defective elements of the set $[t]=\{1,\ldots,t\}$ by carrying out tests on properly chosen subsets of $[t]$. In classical group testing the goal is to find all defective elements by using the minimal possible number of tests. In this paper we consider multistage group testing. We propose a general idea of how to use a hypergraph approach to searching for defects. For the case $s=2$, we design an explicit construction, which makes use of $2\log_2t(1+o(1))$ tests in the worst case and consists of $4$ stages. For the general case $s>2$, we provide an explicit construction, which uses $(2s-1)\log_2t(1+o(1))$ tests and consists of $2s-1$ rounds.
[ { "created": "Mon, 25 Jan 2016 18:20:45 GMT", "version": "v1" } ]
2016-11-18
[ [ "D'yachkov", "A. G.", "" ], [ "Vorobyev", "I. V.", "" ], [ "Polyanskii", "N. A.", "" ], [ "Shchukin", "V. Yu.", "" ] ]
Group testing is a well-known search problem that consists in detecting up to $s$ defective elements of the set $[t]=\{1,\ldots,t\}$ by carrying out tests on properly chosen subsets of $[t]$. In classical group testing the goal is to find all defective elements by using the minimal possible number of tests. In this paper we consider multistage group testing. We propose a general idea of how to use a hypergraph approach to searching for defects. For the case $s=2$, we design an explicit construction, which makes use of $2\log_2t(1+o(1))$ tests in the worst case and consists of $4$ stages. For the general case $s>2$, we provide an explicit construction, which uses $(2s-1)\log_2t(1+o(1))$ tests and consists of $2s-1$ rounds.
1506.05527
Kim-Kwang Raymond Choo
Ben Martini, Quang Do, Kim-Kwang Raymond Choo
Conceptual evidence collection and analysis methodology for Android devices
in Cloud Security Ecosystem (Syngress, an Imprint of Elsevier), 2015
null
10.1016/B978-0-12-801595-7.00014-8
null
cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Android devices continue to grow in popularity and capability, meaning the need for a forensically sound evidence collection methodology for these devices also increases. This chapter proposes a methodology for evidence collection and analysis for Android devices that is, as far as practical, device agnostic. Android devices may contain a significant amount of evidential data that could be essential to a forensic practitioner in their investigations. However, the retrieval of this data requires that the practitioner understand and utilize techniques to analyze information collected from the device. The major contribution of this research is an in-depth evidence collection and analysis methodology for forensic practitioners.
[ { "created": "Thu, 18 Jun 2015 01:25:00 GMT", "version": "v1" } ]
2015-06-19
[ [ "Martini", "Ben", "" ], [ "Do", "Quang", "" ], [ "Choo", "Kim-Kwang Raymond", "" ] ]
Android devices continue to grow in popularity and capability, meaning the need for a forensically sound evidence collection methodology for these devices also increases. This chapter proposes a methodology for evidence collection and analysis for Android devices that is, as far as practical, device agnostic. Android devices may contain a significant amount of evidential data that could be essential to a forensic practitioner in their investigations. However, the retrieval of this data requires that the practitioner understand and utilize techniques to analyze information collected from the device. The major contribution of this research is an in-depth evidence collection and analysis methodology for forensic practitioners.
2203.10131
Patrick Schnell
Patrick Schnell, Philipp Holl, Nils Thuerey
Half-Inverse Gradients for Physical Deep Learning
ICLR 2022 spotlight, code available at https://github.com/tum-pbs/half-inverse-gradients
null
null
null
cs.LG physics.comp-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent works in deep learning have shown that integrating differentiable physics simulators into the training process can greatly improve the quality of results. Although this combination represents a more complex optimization task than supervised neural network training, the same gradient-based optimizers are typically employed to minimize the loss function. However, the integrated physics solvers have a profound effect on the gradient flow as manipulating scales in magnitude and direction is an inherent property of many physical processes. Consequently, the gradient flow is often highly unbalanced and creates an environment in which existing gradient-based optimizers perform poorly. In this work, we analyze the characteristics of both physical and neural network optimizations to derive a new method that does not suffer from this phenomenon. Our method is based on a half-inversion of the Jacobian and combines principles of both classical network and physics optimizers to solve the combined optimization task. Compared to state-of-the-art neural network optimizers, our method converges more quickly and yields better solutions, which we demonstrate on three complex learning problems involving nonlinear oscillators, the Schroedinger equation and the Poisson problem.
[ { "created": "Fri, 18 Mar 2022 19:11:04 GMT", "version": "v1" } ]
2022-03-22
[ [ "Schnell", "Patrick", "" ], [ "Holl", "Philipp", "" ], [ "Thuerey", "Nils", "" ] ]
Recent works in deep learning have shown that integrating differentiable physics simulators into the training process can greatly improve the quality of results. Although this combination represents a more complex optimization task than supervised neural network training, the same gradient-based optimizers are typically employed to minimize the loss function. However, the integrated physics solvers have a profound effect on the gradient flow as manipulating scales in magnitude and direction is an inherent property of many physical processes. Consequently, the gradient flow is often highly unbalanced and creates an environment in which existing gradient-based optimizers perform poorly. In this work, we analyze the characteristics of both physical and neural network optimizations to derive a new method that does not suffer from this phenomenon. Our method is based on a half-inversion of the Jacobian and combines principles of both classical network and physics optimizers to solve the combined optimization task. Compared to state-of-the-art neural network optimizers, our method converges more quickly and yields better solutions, which we demonstrate on three complex learning problems involving nonlinear oscillators, the Schroedinger equation and the Poisson problem.
1903.12468
Niveditha Manjunath
Ezio Bartocci, Niveditha Manjunath, Leonardo Mariani, Cristinel Mateis, Dejan Ni\v{c}kovi\'c
Automatic Failure Explanation in CPS Models
null
null
null
null
cs.SE cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Debugging Cyber-Physical System (CPS) models can be extremely complex. Indeed, only the detection of a failure is insufficient to know how to correct a faulty model. Faults can propagate in time and in space, producing observable misbehaviours in locations completely different from the location of the fault. Understanding the reason for an observed failure is typically a challenging and laborious task left to the experience and domain knowledge of the designer. In this paper, we propose CPSDebug, a novel approach that, by combining testing, specification mining, and failure analysis, can automatically explain failures in Simulink/Stateflow models. We evaluate CPSDebug on two case studies, involving two use scenarios and several classes of faults, demonstrating the potential value of our approach.
[ { "created": "Fri, 29 Mar 2019 12:26:42 GMT", "version": "v1" } ]
2020-10-14
[ [ "Bartocci", "Ezio", "" ], [ "Manjunath", "Niveditha", "" ], [ "Mariani", "Leonardo", "" ], [ "Mateis", "Cristinel", "" ], [ "Ničković", "Dejan", "" ] ]
Debugging Cyber-Physical System (CPS) models can be extremely complex. Indeed, only the detection of a failure is insufficient to know how to correct a faulty model. Faults can propagate in time and in space, producing observable misbehaviours in locations completely different from the location of the fault. Understanding the reason for an observed failure is typically a challenging and laborious task left to the experience and domain knowledge of the designer. In this paper, we propose CPSDebug, a novel approach that, by combining testing, specification mining, and failure analysis, can automatically explain failures in Simulink/Stateflow models. We evaluate CPSDebug on two case studies, involving two use scenarios and several classes of faults, demonstrating the potential value of our approach.
1410.7460
Ahmed Ewaisha
Ahmed Ewaisha and Cihan Tepedelenlio\u{g}lu
Throughput Optimization in Multi-Channel Cognitive Radios with Hard Deadline Constraints
Keywords: Delay Constraint, Optimal Stopping Rule, Water Filling, Stochastic Optimization, Optimal Channel Selection
null
10.1109/TVT.2015.2425951
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In a cognitive radio scenario we consider a single secondary user (SU) accessing a multi-channel system. The SU senses the channels sequentially to detect if a primary user (PU) is occupying the channels, and stops its search to access a channel if it offers a significantly high throughput. The optimal stopping rule and power control problem is considered. The problem is formulated as a SU's throughput-maximization problem under power, interference, and packet delay constraints. We first show the effect of the optimal stopping rule on the packet delay, then solve this optimization problem for both the overlay system where the SU transmits only at the spectrum holes as well as the underlay system where tolerable interference (or tolerable collision probability) is allowed. We provide closed-form expressions for the optimal stopping rule, and show that the optimal power control strategy for this multi-channel problem is a modified water-filling approach. We extend the work to the multiple-SU scenario and show that when the number of SUs is large the complexity of the solution becomes smaller than that of the single-SU case. We discuss the application of this problem in typical networks where packets arrive simultaneously and have the same departure deadline. We further propose an online adaptation policy to the optimal stopping rule that meets the packets' hard-deadline constraint and, at the same time, gives higher throughput than the offline policy.
[ { "created": "Mon, 27 Oct 2014 23:45:23 GMT", "version": "v1" }, { "created": "Fri, 11 Dec 2015 03:20:38 GMT", "version": "v2" }, { "created": "Tue, 29 Dec 2015 14:30:13 GMT", "version": "v3" } ]
2015-12-31
[ [ "Ewaisha", "Ahmed", "" ], [ "Tepedelenlioğlu", "Cihan", "" ] ]
In a cognitive radio scenario we consider a single secondary user (SU) accessing a multi-channel system. The SU senses the channels sequentially to detect if a primary user (PU) is occupying the channels, and stops its search to access a channel if it offers a significantly high throughput. The optimal stopping rule and power control problem is considered. The problem is formulated as a SU's throughput-maximization problem under power, interference, and packet delay constraints. We first show the effect of the optimal stopping rule on the packet delay, then solve this optimization problem for both the overlay system where the SU transmits only at the spectrum holes as well as the underlay system where tolerable interference (or tolerable collision probability) is allowed. We provide closed-form expressions for the optimal stopping rule, and show that the optimal power control strategy for this multi-channel problem is a modified water-filling approach. We extend the work to the multiple-SU scenario and show that when the number of SUs is large the complexity of the solution becomes smaller than that of the single-SU case. We discuss the application of this problem in typical networks where packets arrive simultaneously and have the same departure deadline. We further propose an online adaptation policy to the optimal stopping rule that meets the packets' hard-deadline constraint and, at the same time, gives higher throughput than the offline policy.
1809.05369
Chris Norval
Chris Norval, Jennifer Cobbe, Heleen Janssen, Jatinder Singh
Reclaiming Data: Overcoming app identification barriers for exercising data protection rights
Author preprint (accepted 20-Aug-18) To appear in the proceedings of the 4th Workshop on Legal and Technical Issues in Cloud and Pervasive Computing (IoT) [CLaw-18], UbiComp/ISWC'18 Adjunct, https://doi.org/10.1145/3267305.3274153
null
10.1145/3267305.3274153
null
cs.CY cs.CR cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Data protection regulations generally afford individuals certain rights over their personal data, including the rights to access, rectify, and delete the data held on them. Exercising such rights naturally requires those with data management obligations (service providers) to be able to match an individual with their data. However, many mobile apps collect personal data, without requiring user registration or collecting details of a user's identity (email address, names, phone number, and so forth). As a result, a user's ability to exercise their rights will be hindered without means for an individual to link themselves with this 'nameless' data. Current approaches often involve those seeking to exercise their legal rights having to give the app's provider more personal information, or even to register for a service; both of which seem contrary to the spirit of data protection law. This paper explores these concerns, and indicates simple means for facilitating data subject rights through both application and mobile platform (OS) design.
[ { "created": "Fri, 14 Sep 2018 12:05:05 GMT", "version": "v1" } ]
2018-09-17
[ [ "Norval", "Chris", "" ], [ "Cobbe", "Jennifer", "" ], [ "Janssen", "Heleen", "" ], [ "Singh", "Jatinder", "" ] ]
Data protection regulations generally afford individuals certain rights over their personal data, including the rights to access, rectify, and delete the data held on them. Exercising such rights naturally requires those with data management obligations (service providers) to be able to match an individual with their data. However, many mobile apps collect personal data, without requiring user registration or collecting details of a user's identity (email address, names, phone number, and so forth). As a result, a user's ability to exercise their rights will be hindered without means for an individual to link themselves with this 'nameless' data. Current approaches often involve those seeking to exercise their legal rights having to give the app's provider more personal information, or even to register for a service; both of which seem contrary to the spirit of data protection law. This paper explores these concerns, and indicates simple means for facilitating data subject rights through both application and mobile platform (OS) design.
2206.04397
Rafael Menezes
Rafael Menezes, Daniel Moura, Helena Cavalcante, Rosiane de Freitas and Lucas C. Cordeiro
ESBMC-Jimple: Verifying Kotlin Programs via Jimple Intermediate Representation
ACM SIGSOFT International Symposium on Software Testing and Analysis 2022
null
10.1145/3533767.3543294
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this work, we describe and evaluate the first model checker for verifying Kotlin programs through the Jimple intermediate representation. The verifier, named ESBMC-Jimple, is built on top of the Efficient SMT-based Context-Bounded Model Checker (ESBMC). It uses the Soot framework to obtain the Jimple IR, representing a simplified version of the Kotlin source code, containing a maximum of three operands per instruction. ESBMC-Jimple processes Kotlin source code together with a model of the standard Kotlin libraries and checks a set of safety properties. Experimental results show that ESBMC-Jimple can correctly verify a set of Kotlin benchmarks from the literature and that it is competitive with state-of-the-art Java bytecode verifiers. A demonstration is available at https://youtu.be/J6WhNfXvJNc.
[ { "created": "Thu, 9 Jun 2022 10:18:53 GMT", "version": "v1" }, { "created": "Wed, 20 Jul 2022 13:26:30 GMT", "version": "v2" } ]
2022-07-21
[ [ "Menezes", "Rafael", "" ], [ "Moura", "Daniel", "" ], [ "Cavalcante", "Helena", "" ], [ "de Freitas", "Rosiane", "" ], [ "Cordeiro", "Lucas C.", "" ] ]
In this work, we describe and evaluate the first model checker for verifying Kotlin programs through the Jimple intermediate representation. The verifier, named ESBMC-Jimple, is built on top of the Efficient SMT-based Context-Bounded Model Checker (ESBMC). It uses the Soot framework to obtain the Jimple IR, representing a simplified version of the Kotlin source code, containing a maximum of three operands per instruction. ESBMC-Jimple processes Kotlin source code together with a model of the standard Kotlin libraries and checks a set of safety properties. Experimental results show that ESBMC-Jimple can correctly verify a set of Kotlin benchmarks from the literature and that it is competitive with state-of-the-art Java bytecode verifiers. A demonstration is available at https://youtu.be/J6WhNfXvJNc.
1211.5520
Ashish Tendulkar Dr
Vivekanand Samant, Arvind Hulgeri, Alfonso Valencia, Ashish V. Tendulkar
Accurate Demarcation of Protein Domain Linkers based on Structural Analysis of Linker Probable Region
18 pages, 2 figures
International Journal of Computational Biology, 0001:01-19, 2012
null
null
cs.CE q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In multi-domain proteins, the domains are connected by a flexible unstructured region called a protein domain linker. The accurate demarcation of these linkers holds a key to understanding their biochemical and evolutionary attributes. This knowledge helps in designing a suitable linker for engineering stable multi-domain chimeric proteins. Here we propose a novel method for the demarcation of the linker based on a three-dimensional protein structure and a domain definition. The proposed method is based on biological knowledge about structural flexibility of the linkers. We performed structural analysis on a linker probable region (LPR) around domain boundary points of known SCOP domains. The LPR was described using a set of overlapping peptide fragments of fixed size. Each peptide fragment was then described by geometric invariants (GIs) and subjected to a clustering process where the fragments corresponding to the actual linker come up as outliers. We then discover the actual linkers by finding the longest continuous stretch of outlier fragments from LPRs. This method was evaluated on a benchmark dataset of 51 continuous multi-domain proteins, where it achieves an F1 score of 0.745 (0.83 precision and 0.66 recall). When the method was applied on 725 continuous multi-domain proteins, it was able to identify novel linkers that were not reported previously. This method can be used in combination with supervised / sequence based linker prediction methods for accurate linker demarcation.
[ { "created": "Fri, 23 Nov 2012 14:53:54 GMT", "version": "v1" } ]
2012-11-26
[ [ "Samant", "Vivekanand", "" ], [ "Hulgeri", "Arvind", "" ], [ "Valencia", "Alfonso", "" ], [ "Tendulkar", "Ashish V.", "" ] ]
In multi-domain proteins, the domains are connected by a flexible unstructured region called a protein domain linker. The accurate demarcation of these linkers holds the key to understanding their biochemical and evolutionary attributes. This knowledge helps in designing a suitable linker for engineering stable multi-domain chimeric proteins. Here we propose a novel method for the demarcation of the linker based on a three-dimensional protein structure and a domain definition. The proposed method is based on biological knowledge about the structural flexibility of the linkers. We performed structural analysis on a linker probable region (LPR) around domain boundary points of known SCOP domains. The LPR was described using a set of overlapping peptide fragments of fixed size. Each peptide fragment was then described by geometric invariants (GIs) and subjected to a clustering process in which the fragments corresponding to the actual linker emerge as outliers. We then discover the actual linkers by finding the longest continuous stretch of outlier fragments from LPRs. This method was evaluated on a benchmark dataset of 51 continuous multi-domain proteins, where it achieves an F1 score of 0.745 (0.83 precision and 0.66 recall). When the method was applied to 725 continuous multi-domain proteins, it was able to identify novel linkers that were not reported previously. This method can be used in combination with supervised / sequence-based linker prediction methods for accurate linker demarcation.
2402.04231
Subhamoy Maitra
Ajeet Kumar and Subhamoy Maitra
Further Constructions of AMUBs for Non-prime power Composite Dimensions
null
null
null
null
cs.DM
http://creativecommons.org/licenses/by/4.0/
Construction of a large class of Mutually Unbiased Bases (MUBs) for non-prime power composite dimensions ($d = k\times s$) is a long-standing open problem, which has led to different construction methods for the class of Approximate MUBs (AMUBs) by relaxing the criterion that the absolute value of the dot product between two vectors chosen from different bases should be $\leq \frac{\beta}{\sqrt{d}}$. In this chapter, we consider a more general class of AMUBs (ARMUBs, considering the real ones too), compared to our earlier work in [Cryptography and Communications, 14(3): 527--549, 2022]. We note that the quality of AMUBs (ARMUBs) constructed using RBD$(X,A)$ with $|X|= d$ critically depends on the parameters $|s-k|$, $\mu$ (the maximum number of elements common between any pair of blocks), and the set of block sizes. We present the construction of $\mathcal{O}(\sqrt{d})$ many $\beta$-AMUBs for composite $d$ when $|s-k|< \sqrt{d}$, using RBDs having block sizes approximately $\sqrt{d}$, such that $|\braket{\psi^l_i|\psi^m_j}| \leq \frac{\beta}{\sqrt{d}}$, where $\beta = 1 + \frac{|s-k|}{2\sqrt{d}}+ \mathcal{O}(d^{-1}) \leq 2$. Moreover, if a real Hadamard matrix of order $k$ or $s$ exists, then one can construct at least $N(k)+1$ (or $N(s)+1$) many $\beta$-ARMUBs for dimension $d$, with $\beta \leq 2 - \frac{|s-k|}{2\sqrt{d}}+ \mathcal{O}(d^{-1})< 2$, where $N(w)$ is the number of MOLS$(w)$. This improves and generalizes some of our previous results for ARMUBs in two respects, viz., the real cases are now extended to complex ones too. The earlier efforts used some existing RBDs, whereas here we consider new instances of RBDs that provide better results. Similar to the earlier cases, the AMUBs (ARMUBs) constructed using RBDs are in general very sparse, where the sparsity $(\epsilon)$ is $1 - \mathcal{O}(d^{-\frac{1}{2}})$.
[ { "created": "Tue, 6 Feb 2024 18:39:25 GMT", "version": "v1" } ]
2024-02-07
[ [ "Kumar", "Ajeet", "" ], [ "Maitra", "Subhamoy", "" ] ]
Construction of a large class of Mutually Unbiased Bases (MUBs) for non-prime power composite dimensions ($d = k\times s$) is a long-standing open problem, which has led to different construction methods for the class of Approximate MUBs (AMUBs) by relaxing the criterion that the absolute value of the dot product between two vectors chosen from different bases should be $\leq \frac{\beta}{\sqrt{d}}$. In this chapter, we consider a more general class of AMUBs (ARMUBs, considering the real ones too), compared to our earlier work in [Cryptography and Communications, 14(3): 527--549, 2022]. We note that the quality of AMUBs (ARMUBs) constructed using RBD$(X,A)$ with $|X|= d$ critically depends on the parameters $|s-k|$, $\mu$ (the maximum number of elements common between any pair of blocks), and the set of block sizes. We present the construction of $\mathcal{O}(\sqrt{d})$ many $\beta$-AMUBs for composite $d$ when $|s-k|< \sqrt{d}$, using RBDs having block sizes approximately $\sqrt{d}$, such that $|\braket{\psi^l_i|\psi^m_j}| \leq \frac{\beta}{\sqrt{d}}$, where $\beta = 1 + \frac{|s-k|}{2\sqrt{d}}+ \mathcal{O}(d^{-1}) \leq 2$. Moreover, if a real Hadamard matrix of order $k$ or $s$ exists, then one can construct at least $N(k)+1$ (or $N(s)+1$) many $\beta$-ARMUBs for dimension $d$, with $\beta \leq 2 - \frac{|s-k|}{2\sqrt{d}}+ \mathcal{O}(d^{-1})< 2$, where $N(w)$ is the number of MOLS$(w)$. This improves and generalizes some of our previous results for ARMUBs in two respects, viz., the real cases are now extended to complex ones too. The earlier efforts used some existing RBDs, whereas here we consider new instances of RBDs that provide better results. Similar to the earlier cases, the AMUBs (ARMUBs) constructed using RBDs are in general very sparse, where the sparsity $(\epsilon)$ is $1 - \mathcal{O}(d^{-\frac{1}{2}})$.
2003.11456
Ralf M\"oller
Ralf M\"oller
Derivation of Coupled PCA and SVD Learning Rules from a Newton Zero-Finding Framework
null
null
null
null
cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In coupled learning rules for PCA (principal component analysis) and SVD (singular value decomposition), the update of the estimates of eigenvectors or singular vectors is influenced by the estimates of eigenvalues or singular values, respectively. This coupled update mitigates the speed-stability problem since the update equations converge from all directions with approximately the same speed. A method to derive coupled learning rules from information criteria by Newton optimization is known. However, these information criteria have to be designed, offer no explanatory value, and can only impose Euclidean constraints on the vector estimates. Here we describe an alternative approach where coupled PCA and SVD learning rules can systematically be derived from a Newton zero-finding framework. The derivation starts from an objective function, combines the equations for its extrema with arbitrary constraints on the vector estimates, and solves the resulting vector zero-point equation using Newton's zero-finding method. To demonstrate the framework, we derive PCA and SVD learning rules with constant Euclidean length or constant sum of the vector estimates.
[ { "created": "Wed, 25 Mar 2020 15:49:55 GMT", "version": "v1" } ]
2020-03-26
[ [ "Möller", "Ralf", "" ] ]
In coupled learning rules for PCA (principal component analysis) and SVD (singular value decomposition), the update of the estimates of eigenvectors or singular vectors is influenced by the estimates of eigenvalues or singular values, respectively. This coupled update mitigates the speed-stability problem since the update equations converge from all directions with approximately the same speed. A method to derive coupled learning rules from information criteria by Newton optimization is known. However, these information criteria have to be designed, offer no explanatory value, and can only impose Euclidean constraints on the vector estimates. Here we describe an alternative approach where coupled PCA and SVD learning rules can systematically be derived from a Newton zero-finding framework. The derivation starts from an objective function, combines the equations for its extrema with arbitrary constraints on the vector estimates, and solves the resulting vector zero-point equation using Newton's zero-finding method. To demonstrate the framework, we derive PCA and SVD learning rules with constant Euclidean length or constant sum of the vector estimates.
2403.06999
Nan Liu
Ziwen Wang, Jin Wee Lee, Tanujit Chakraborty, Yilin Ning, Mingxuan Liu, Feng Xie, Marcus Eng Hock Ong, Nan Liu
Survival modeling using deep learning, machine learning and statistical methods: A comparative analysis for predicting mortality after hospital admission
null
null
null
null
cs.LG cs.AI cs.CY
http://creativecommons.org/licenses/by/4.0/
Survival analysis is essential for studying time-to-event outcomes and providing a dynamic understanding of the probability of an event occurring over time. Various survival analysis techniques, from traditional statistical models to state-of-the-art machine learning algorithms, support healthcare intervention and policy decisions. However, there remains ongoing discussion about their comparative performance. We conducted a comparative study of several survival analysis methods, including Cox proportional hazards (CoxPH), stepwise CoxPH, the elastic net penalized Cox model, Random Survival Forests (RSF), Gradient Boosting Machines (GBM), AutoScore-Survival, DeepSurv, a time-dependent Cox model based on neural networks (CoxTime), and the DeepHit survival neural network. We applied the concordance index (C-index) for model goodness-of-fit and the integrated Brier score (IBS) for calibration, and considered model interpretability. As a case study, we performed a retrospective analysis of patients admitted through the emergency department of a tertiary hospital from 2017 to 2019, predicting 90-day all-cause mortality based on patient demographics, clinicopathological features, and historical data. The C-index results indicate that deep learning achieved comparable performance, with DeepSurv producing the best discrimination (DeepSurv: 0.893; CoxTime: 0.892; DeepHit: 0.891). DeepSurv was also the best calibrated (IBS: 0.041), followed by RSF (IBS: 0.042) and GBM (IBS: 0.0421), all using the full set of variables. Moreover, AutoScore-Survival, using a minimal variable subset, is easy to interpret and can achieve good discrimination and calibration (C-index: 0.867; IBS: 0.044). While all models were satisfactory, DeepSurv exhibited the best discrimination and calibration. In addition, AutoScore-Survival offers a more parsimonious model and excellent interpretability.
[ { "created": "Mon, 4 Mar 2024 10:46:02 GMT", "version": "v1" } ]
2024-03-13
[ [ "Wang", "Ziwen", "" ], [ "Lee", "Jin Wee", "" ], [ "Chakraborty", "Tanujit", "" ], [ "Ning", "Yilin", "" ], [ "Liu", "Mingxuan", "" ], [ "Xie", "Feng", "" ], [ "Ong", "Marcus Eng Hock", "" ], [ "Liu", "Nan", "" ] ]
Survival analysis is essential for studying time-to-event outcomes and providing a dynamic understanding of the probability of an event occurring over time. Various survival analysis techniques, from traditional statistical models to state-of-the-art machine learning algorithms, support healthcare intervention and policy decisions. However, there remains ongoing discussion about their comparative performance. We conducted a comparative study of several survival analysis methods, including Cox proportional hazards (CoxPH), stepwise CoxPH, the elastic net penalized Cox model, Random Survival Forests (RSF), Gradient Boosting Machines (GBM), AutoScore-Survival, DeepSurv, a time-dependent Cox model based on neural networks (CoxTime), and the DeepHit survival neural network. We applied the concordance index (C-index) for model goodness-of-fit and the integrated Brier score (IBS) for calibration, and considered model interpretability. As a case study, we performed a retrospective analysis of patients admitted through the emergency department of a tertiary hospital from 2017 to 2019, predicting 90-day all-cause mortality based on patient demographics, clinicopathological features, and historical data. The C-index results indicate that deep learning achieved comparable performance, with DeepSurv producing the best discrimination (DeepSurv: 0.893; CoxTime: 0.892; DeepHit: 0.891). DeepSurv was also the best calibrated (IBS: 0.041), followed by RSF (IBS: 0.042) and GBM (IBS: 0.0421), all using the full set of variables. Moreover, AutoScore-Survival, using a minimal variable subset, is easy to interpret and can achieve good discrimination and calibration (C-index: 0.867; IBS: 0.044). While all models were satisfactory, DeepSurv exhibited the best discrimination and calibration. In addition, AutoScore-Survival offers a more parsimonious model and excellent interpretability.
1809.08566
Faegheh Hasibi
Arash Dargahi Nobari, Arian Askari, Faegheh Hasibi and Mahmood Neshati
Query Understanding via Entity Attribute Identification
Proceedings of the 27th International Conference on Information and Knowledge Management (CIKM '18), 2018
null
10.1145/3269206.3269245
null
cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Understanding searchers' queries is an essential component of semantic search systems. In many cases, search queries involve specific attributes of an entity in a knowledge base (KB), which can be further used to find query answers. In this study, we aim to advance the understanding of queries by identifying their related entity attributes from a knowledge base. To this end, we introduce the task of entity attribute identification and propose two methods to address it: (i) a model based on Markov Random Fields, and (ii) a learning-to-rank model. We develop a human-annotated test collection and show that our proposed methods can bring significant improvements over the baseline methods.
[ { "created": "Sun, 23 Sep 2018 09:49:19 GMT", "version": "v1" } ]
2018-09-25
[ [ "Nobari", "Arash Dargahi", "" ], [ "Askari", "Arian", "" ], [ "Hasibi", "Faegheh", "" ], [ "Neshati", "Mahmood", "" ] ]
Understanding searchers' queries is an essential component of semantic search systems. In many cases, search queries involve specific attributes of an entity in a knowledge base (KB), which can be further used to find query answers. In this study, we aim to advance the understanding of queries by identifying their related entity attributes from a knowledge base. To this end, we introduce the task of entity attribute identification and propose two methods to address it: (i) a model based on Markov Random Fields, and (ii) a learning-to-rank model. We develop a human-annotated test collection and show that our proposed methods can bring significant improvements over the baseline methods.
2107.13718
Luchuan Song
Kun Zhao, Luchuan Song, Bin Liu, Qi Chu, Nenghai Yu
Cascaded Residual Density Network for Crowd Counting
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Crowd counting is a challenging task due to issues such as scale variation and perspective variation in real crowd scenes. In this paper, we propose a novel Cascaded Residual Density Network (CRDNet) that follows a coarse-to-fine approach to generate high-quality density maps for crowd counting more accurately. (1) We estimate residual density maps from multi-scale pyramidal features through cascaded residual density modules, which effectively improves the quality of the density map layer by layer. (2) A novel additional local count loss is presented to refine the accuracy of crowd counting; it reduces the errors of the pixel-wise Euclidean loss by restricting the number of people in local crowd areas. Experiments on two public benchmark datasets show that the proposed method achieves effective improvements over state-of-the-art methods.
[ { "created": "Thu, 29 Jul 2021 03:07:11 GMT", "version": "v1" } ]
2021-07-30
[ [ "Zhao", "Kun", "" ], [ "Song", "Luchuan", "" ], [ "Liu", "Bin", "" ], [ "Chu", "Qi", "" ], [ "Yu", "Nenghai", "" ] ]
Crowd counting is a challenging task due to issues such as scale variation and perspective variation in real crowd scenes. In this paper, we propose a novel Cascaded Residual Density Network (CRDNet) that follows a coarse-to-fine approach to generate high-quality density maps for crowd counting more accurately. (1) We estimate residual density maps from multi-scale pyramidal features through cascaded residual density modules, which effectively improves the quality of the density map layer by layer. (2) A novel additional local count loss is presented to refine the accuracy of crowd counting; it reduces the errors of the pixel-wise Euclidean loss by restricting the number of people in local crowd areas. Experiments on two public benchmark datasets show that the proposed method achieves effective improvements over state-of-the-art methods.
2306.05442
Zhaoyang Huang
Zhaoyang Huang, Xiaoyu Shi, Chao Zhang, Qiang Wang, Yijin Li, Hongwei Qin, Jifeng Dai, Xiaogang Wang, and Hongsheng Li
FlowFormer: A Transformer Architecture and Its Masked Cost Volume Autoencoding for Optical Flow
arXiv admin note: substantial text overlap with arXiv:2203.16194, arXiv:2303.01237
null
null
null
cs.CV
http://creativecommons.org/licenses/by-sa/4.0/
This paper introduces a novel transformer-based network architecture, FlowFormer, along with the Masked Cost Volume AutoEncoding (MCVA) for pretraining it to tackle the problem of optical flow estimation. FlowFormer tokenizes the 4D cost-volume built from the source-target image pair and iteratively refines flow estimation with a cost-volume encoder-decoder architecture. The cost-volume encoder derives a cost memory with alternate-group transformer~(AGT) layers in a latent space and the decoder recurrently decodes flow from the cost memory with dynamic positional cost queries. On the Sintel benchmark, FlowFormer architecture achieves 1.16 and 2.09 average end-point-error~(AEPE) on the clean and final pass, a 16.5\% and 15.5\% error reduction from the GMA~(1.388 and 2.47). MCVA enhances FlowFormer by pretraining the cost-volume encoder with a masked autoencoding scheme, which further unleashes the capability of FlowFormer with unlabeled data. This is especially critical in optical flow estimation because ground truth flows are more expensive to acquire than labels in other vision tasks. MCVA improves FlowFormer all-sided and FlowFormer+MCVA ranks 1st among all published methods on both Sintel and KITTI-2015 benchmarks and achieves the best generalization performance. Specifically, FlowFormer+MCVA achieves 1.07 and 1.94 AEPE on the Sintel benchmark, leading to 7.76\% and 7.18\% error reductions from FlowFormer.
[ { "created": "Thu, 8 Jun 2023 12:24:04 GMT", "version": "v1" } ]
2023-06-12
[ [ "Huang", "Zhaoyang", "" ], [ "Shi", "Xiaoyu", "" ], [ "Zhang", "Chao", "" ], [ "Wang", "Qiang", "" ], [ "Li", "Yijin", "" ], [ "Qin", "Hongwei", "" ], [ "Dai", "Jifeng", "" ], [ "Wang", "Xiaogang", "" ], [ "Li", "Hongsheng", "" ] ]
This paper introduces a novel transformer-based network architecture, FlowFormer, along with the Masked Cost Volume AutoEncoding (MCVA) for pretraining it to tackle the problem of optical flow estimation. FlowFormer tokenizes the 4D cost-volume built from the source-target image pair and iteratively refines flow estimation with a cost-volume encoder-decoder architecture. The cost-volume encoder derives a cost memory with alternate-group transformer~(AGT) layers in a latent space and the decoder recurrently decodes flow from the cost memory with dynamic positional cost queries. On the Sintel benchmark, FlowFormer architecture achieves 1.16 and 2.09 average end-point-error~(AEPE) on the clean and final pass, a 16.5\% and 15.5\% error reduction from the GMA~(1.388 and 2.47). MCVA enhances FlowFormer by pretraining the cost-volume encoder with a masked autoencoding scheme, which further unleashes the capability of FlowFormer with unlabeled data. This is especially critical in optical flow estimation because ground truth flows are more expensive to acquire than labels in other vision tasks. MCVA improves FlowFormer all-sided and FlowFormer+MCVA ranks 1st among all published methods on both Sintel and KITTI-2015 benchmarks and achieves the best generalization performance. Specifically, FlowFormer+MCVA achieves 1.07 and 1.94 AEPE on the Sintel benchmark, leading to 7.76\% and 7.18\% error reductions from FlowFormer.
2206.01781
Jaskaran Grover
Jaskaran Grover and Changliu Liu and Katia Sycara
The Before, During, and After of Multi-Robot Deadlock
Accepted to International Journal of Robotics Research 2022, WAFR 2020 Special Issue
null
null
null
cs.RO cs.MA math.OC
http://creativecommons.org/licenses/by/4.0/
Collision avoidance for multirobot systems is a well-studied problem. Recently, control barrier functions (CBFs) have been proposed for synthesizing controllers that guarantee collision avoidance and goal stabilization for multiple robots. However, it has been noted that reactive control synthesis methods (such as CBFs) are prone to \textit{deadlock}, an equilibrium of system dynamics that causes the robots to stall before reaching their goals. In this paper, we analyze the closed-loop dynamics of robots using CBFs, to characterize controller parameters, initial conditions, and goal locations that invariably lead the system to deadlock. Using tools from duality theory, we derive geometric properties of robot configurations of an $N$ robot system once it is in deadlock and we justify them using the mechanics interpretation of KKT conditions. Our key deductions are that 1) system deadlock is characterized by a force-equilibrium on robots and 2) deadlock occurs to ensure safety when safety is on the brink of being violated. These deductions allow us to interpret deadlock as a subset of the state space, and we show that this set is non-empty and located on the boundary of the safe set. By exploiting these properties, we analyze the number of admissible robot configurations in deadlock and develop a provably-correct decentralized algorithm for deadlock resolution to safely deliver the robots to their goals. This algorithm is validated in simulations as well as experimentally on Khepera-IV robots.
[ { "created": "Fri, 3 Jun 2022 18:48:55 GMT", "version": "v1" } ]
2022-06-07
[ [ "Grover", "Jaskaran", "" ], [ "Liu", "Changliu", "" ], [ "Sycara", "Katia", "" ] ]
Collision avoidance for multirobot systems is a well-studied problem. Recently, control barrier functions (CBFs) have been proposed for synthesizing controllers that guarantee collision avoidance and goal stabilization for multiple robots. However, it has been noted that reactive control synthesis methods (such as CBFs) are prone to \textit{deadlock}, an equilibrium of system dynamics that causes the robots to stall before reaching their goals. In this paper, we analyze the closed-loop dynamics of robots using CBFs, to characterize controller parameters, initial conditions, and goal locations that invariably lead the system to deadlock. Using tools from duality theory, we derive geometric properties of robot configurations of an $N$ robot system once it is in deadlock and we justify them using the mechanics interpretation of KKT conditions. Our key deductions are that 1) system deadlock is characterized by a force-equilibrium on robots and 2) deadlock occurs to ensure safety when safety is on the brink of being violated. These deductions allow us to interpret deadlock as a subset of the state space, and we show that this set is non-empty and located on the boundary of the safe set. By exploiting these properties, we analyze the number of admissible robot configurations in deadlock and develop a provably-correct decentralized algorithm for deadlock resolution to safely deliver the robots to their goals. This algorithm is validated in simulations as well as experimentally on Khepera-IV robots.
1910.00511
Yang Zhang
Yang Zhang, Shiyu Chang, Mo Yu, Kaizhi Qian
An Efficient and Margin-Approaching Zero-Confidence Adversarial Attack
null
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
There are two major paradigms of white-box adversarial attacks that attempt to impose input perturbations. The first paradigm, called the fix-perturbation attack, crafts adversarial samples within a given perturbation level. The second paradigm, called the zero-confidence attack, finds the smallest perturbation needed to cause misclassification, also known as the margin of an input feature. While the former paradigm is well-resolved, the latter is not. Existing zero-confidence attacks either introduce significant approximation errors or are too time-consuming. We therefore propose MARGINATTACK, a zero-confidence attack framework that is able to compute the margin with improved accuracy and efficiency. Our experiments show that MARGINATTACK is able to compute a smaller margin than the state-of-the-art zero-confidence attacks, and matches the state-of-the-art fix-perturbation attacks. In addition, it runs significantly faster than the Carlini-Wagner attack, currently the most accurate zero-confidence attack algorithm.
[ { "created": "Tue, 1 Oct 2019 15:59:52 GMT", "version": "v1" } ]
2019-10-02
[ [ "Zhang", "Yang", "" ], [ "Chang", "Shiyu", "" ], [ "Yu", "Mo", "" ], [ "Qian", "Kaizhi", "" ] ]
There are two major paradigms of white-box adversarial attacks that attempt to impose input perturbations. The first paradigm, called the fix-perturbation attack, crafts adversarial samples within a given perturbation level. The second paradigm, called the zero-confidence attack, finds the smallest perturbation needed to cause misclassification, also known as the margin of an input feature. While the former paradigm is well-resolved, the latter is not. Existing zero-confidence attacks either introduce significant approximation errors or are too time-consuming. We therefore propose MARGINATTACK, a zero-confidence attack framework that is able to compute the margin with improved accuracy and efficiency. Our experiments show that MARGINATTACK is able to compute a smaller margin than the state-of-the-art zero-confidence attacks, and matches the state-of-the-art fix-perturbation attacks. In addition, it runs significantly faster than the Carlini-Wagner attack, currently the most accurate zero-confidence attack algorithm.
2307.15198
Yao Su
Yao Su and Zhentian Qian and Lei Ma and Lifang He and Xiangnan Kong
One-shot Joint Extraction, Registration and Segmentation of Neuroimaging Data
Published as a research track paper at KDD 2023. Code: https://github.com/Anonymous4545/JERS
null
10.1145/3580305.3599452
null
cs.CV cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Brain extraction, registration and segmentation are indispensable preprocessing steps in neuroimaging studies. The aim is to extract the brain from raw imaging scans (i.e., extraction step), align it with a target brain image (i.e., registration step) and label the anatomical brain regions (i.e., segmentation step). Conventional studies typically focus on developing separate methods for the extraction, registration and segmentation tasks in a supervised setting. The performance of these methods is largely contingent on the quantity of training samples and the extent of visual inspections carried out by experts for error correction. Nevertheless, collecting voxel-level labels and performing manual quality control on high-dimensional neuroimages (e.g., 3D MRI) are expensive and time-consuming in many medical studies. In this paper, we study the problem of one-shot joint extraction, registration and segmentation in neuroimaging data, which exploits only one labeled template image (a.k.a. atlas) and a few unlabeled raw images for training. We propose a unified end-to-end framework, called JERS, to jointly optimize the extraction, registration and segmentation tasks, allowing feedback among them. Specifically, we use a group of extraction, registration and segmentation modules to learn the extraction mask, transformation and segmentation mask, where modules are interconnected and mutually reinforced by self-supervision. Empirical results on real-world datasets demonstrate that our proposed method performs exceptionally in the extraction, registration and segmentation tasks. Our code and data can be found at https://github.com/Anonymous4545/JERS
[ { "created": "Thu, 27 Jul 2023 21:14:40 GMT", "version": "v1" } ]
2023-07-31
[ [ "Su", "Yao", "" ], [ "Qian", "Zhentian", "" ], [ "Ma", "Lei", "" ], [ "He", "Lifang", "" ], [ "Kong", "Xiangnan", "" ] ]
Brain extraction, registration and segmentation are indispensable preprocessing steps in neuroimaging studies. The aim is to extract the brain from raw imaging scans (i.e., extraction step), align it with a target brain image (i.e., registration step) and label the anatomical brain regions (i.e., segmentation step). Conventional studies typically focus on developing separate methods for the extraction, registration and segmentation tasks in a supervised setting. The performance of these methods is largely contingent on the quantity of training samples and the extent of visual inspections carried out by experts for error correction. Nevertheless, collecting voxel-level labels and performing manual quality control on high-dimensional neuroimages (e.g., 3D MRI) are expensive and time-consuming in many medical studies. In this paper, we study the problem of one-shot joint extraction, registration and segmentation in neuroimaging data, which exploits only one labeled template image (a.k.a. atlas) and a few unlabeled raw images for training. We propose a unified end-to-end framework, called JERS, to jointly optimize the extraction, registration and segmentation tasks, allowing feedback among them. Specifically, we use a group of extraction, registration and segmentation modules to learn the extraction mask, transformation and segmentation mask, where modules are interconnected and mutually reinforced by self-supervision. Empirical results on real-world datasets demonstrate that our proposed method performs exceptionally in the extraction, registration and segmentation tasks. Our code and data can be found at https://github.com/Anonymous4545/JERS
2310.10910
Davut Emre Ta\c{s}ar
Davut Emre Tasar, Kutan Koruyan, Ceren Ocal Tasar
Machine Learning in the Quantum Age: Quantum vs. Classical Support Vector Machines
6 Pages, in Turkish language
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
This work endeavors to juxtapose the efficacy of machine learning algorithms within classical and quantum computational paradigms. In particular, focusing on Support Vector Machines (SVM), we scrutinize the classification prowess of classical SVMs and Quantum Support Vector Machines (QSVMs) operating on quantum hardware over the Iris dataset. The methodology embraces an extensive array of experiments orchestrated through the Qiskit library, alongside hyperparameter optimization. The findings unveil that in particular scenarios, QSVMs extend a level of accuracy that can vie with classical SVMs, albeit with execution times that are presently protracted. Moreover, we underscore that augmenting quantum computational capacity and the magnitude of parallelism can markedly ameliorate the performance of quantum machine learning algorithms. This inquiry furnishes invaluable insights regarding the extant scenario and future potentiality of machine learning applications in the quantum epoch. Colab: https://t.ly/QKuz0
[ { "created": "Tue, 17 Oct 2023 01:06:59 GMT", "version": "v1" } ]
2023-10-18
[ [ "Tasar", "Davut Emre", "" ], [ "Koruyan", "Kutan", "" ], [ "Tasar", "Ceren Ocal", "" ] ]
This work endeavors to juxtapose the efficacy of machine learning algorithms within classical and quantum computational paradigms. In particular, focusing on Support Vector Machines (SVM), we scrutinize the classification prowess of classical SVMs and Quantum Support Vector Machines (QSVMs) operating on quantum hardware over the Iris dataset. The methodology embraces an extensive array of experiments orchestrated through the Qiskit library, alongside hyperparameter optimization. The findings unveil that in particular scenarios, QSVMs extend a level of accuracy that can vie with classical SVMs, albeit with execution times that are presently protracted. Moreover, we underscore that augmenting quantum computational capacity and the magnitude of parallelism can markedly ameliorate the performance of quantum machine learning algorithms. This inquiry furnishes invaluable insights regarding the extant scenario and future potentiality of machine learning applications in the quantum epoch. Colab: https://t.ly/QKuz0
1304.0357
Arkadiusz Stopczynski Mr.
Arkadiusz Stopczynski, Carsten Stahlhut, Jakob Eg Larsen, Michael Kai Petersen, and Lars Kai Hansen
The Smartphone Brain Scanner: A Mobile Real-time Neuroimaging System
null
null
10.1371/journal.pone.0086733
null
cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Combining low-cost wireless EEG sensors with smartphones offers novel opportunities for mobile brain imaging in an everyday context. We present a framework for building multi-platform, portable EEG applications with real-time 3D source reconstruction. The system - Smartphone Brain Scanner - combines an off-the-shelf neuroheadset or EEG cap with a smartphone or tablet, and as such represents the first fully mobile system for real-time 3D EEG imaging. We discuss the benefits and challenges of a fully portable system, including technical limitations as well as real-time reconstruction of 3D images of brain activity. We present examples of the brain activity captured in a simple experiment involving imagined finger tapping, showing that the acquired signal in a relevant brain region is similar to that obtained with standard EEG lab equipment. Although the quality of the signal in a mobile solution using an off-the-shelf consumer neuroheadset is lower than that obtained using high-density standard EEG equipment, we propose that mobile application development may offset the disadvantages and provide completely new opportunities for neuroimaging in natural settings.
[ { "created": "Mon, 1 Apr 2013 13:51:52 GMT", "version": "v1" } ]
2014-03-05
[ [ "Stopczynski", "Arkadiusz", "" ], [ "Stahlhut", "Carsten", "" ], [ "Larsen", "Jakob Eg", "" ], [ "Petersen", "Michael Kai", "" ], [ "Hansen", "Lars Kai", "" ] ]
Combining low-cost wireless EEG sensors with smartphones offers novel opportunities for mobile brain imaging in an everyday context. We present a framework for building multi-platform, portable EEG applications with real-time 3D source reconstruction. The system - Smartphone Brain Scanner - combines an off-the-shelf neuroheadset or EEG cap with a smartphone or tablet, and as such represents the first fully mobile system for real-time 3D EEG imaging. We discuss the benefits and challenges of a fully portable system, including technical limitations as well as real-time reconstruction of 3D images of brain activity. We present examples of the brain activity captured in a simple experiment involving imagined finger tapping, showing that the acquired signal in a relevant brain region is similar to that obtained with standard EEG lab equipment. Although the quality of the signal in a mobile solution using an off-the-shelf consumer neuroheadset is lower than that obtained using high-density standard EEG equipment, we propose that mobile application development may offset the disadvantages and provide completely new opportunities for neuroimaging in natural settings.
2207.06590
Jun Yang
Jun Yang, Yuehan Wang, Yiling Lou, Ming Wen and Lingming Zhang
Attention: Not Just Another Dataset for Patch-Correctness Checking
null
null
null
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Automated Program Repair (APR) techniques have drawn wide attention from both academia and industry. Meanwhile, one main limitation of the current state-of-the-art APR tools is that patches passing all the original tests are not necessarily the correct ones wanted by developers, i.e., the plausible patch problem. To date, various Patch-Correctness Checking (PCC) techniques have been proposed to address this important issue. However, they are only evaluated on very limited datasets as the APR tools used for generating such patches can only explore a small subset of the search space of possible patches, posing serious threats to the external validity of existing PCC studies. In this paper, we construct an extensive PCC dataset (the largest manually labeled PCC dataset to our knowledge) to revisit all state-of-the-art PCC techniques. More specifically, our PCC dataset includes 1,988 patches generated from the recent PraPR APR tool, which leverages highly-optimized bytecode-level patch executions and can exhaustively explore all possible plausible patches within its large predefined search space (including well-known fixing patterns from various prior APR tools). Our extensive study of representative PCC techniques on the new dataset has revealed various surprising findings and provided guidelines for future PCC research.
[ { "created": "Thu, 14 Jul 2022 01:07:17 GMT", "version": "v1" }, { "created": "Wed, 8 Feb 2023 23:10:09 GMT", "version": "v2" } ]
2023-02-10
[ [ "Yang", "Jun", "" ], [ "Wang", "Yuehan", "" ], [ "Lou", "Yiling", "" ], [ "Wen", "Ming", "" ], [ "Zhang", "Lingming", "" ] ]
Automated Program Repair (APR) techniques have drawn wide attention from both academia and industry. Meanwhile, one main limitation of the current state-of-the-art APR tools is that patches passing all the original tests are not necessarily the correct ones wanted by developers, i.e., the plausible patch problem. To date, various Patch-Correctness Checking (PCC) techniques have been proposed to address this important issue. However, they are only evaluated on very limited datasets as the APR tools used for generating such patches can only explore a small subset of the search space of possible patches, posing serious threats to the external validity of existing PCC studies. In this paper, we construct an extensive PCC dataset (the largest manually labeled PCC dataset to our knowledge) to revisit all state-of-the-art PCC techniques. More specifically, our PCC dataset includes 1,988 patches generated from the recent PraPR APR tool, which leverages highly-optimized bytecode-level patch executions and can exhaustively explore all possible plausible patches within its large predefined search space (including well-known fixing patterns from various prior APR tools). Our extensive study of representative PCC techniques on the new dataset has revealed various surprising findings and provided guidelines for future PCC research.
2001.04325
Eli Chen
Oren Haik, Oded Perry, Eli Chen, Peter Klammer
A Novel Inspection System For Variable Data Printing Using Deep Learning
Accepted for publication in: Winter Applications of Computer Vision (WACV) 2020
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a novel approach for inspecting variable data prints (VDP) with an ultra-low false alarm rate (0.005%) and potential applicability to other real-world problems. The system is based on a comparison between two images: a reference image and an image captured by low-cost scanners. The comparison task is challenging as low-cost imaging systems create artifacts that may erroneously be classified as true (genuine) defects. To address this challenge we introduce two new fusion methods, for change detection applications, which are both fast and efficient. The first is an early fusion method that combines the two input images into a single pseudo-color image. The second, called Change-Detection Single Shot Detector (CD-SSD) leverages the SSD by fusing features in the middle of the network. We demonstrate the effectiveness of the proposed deep learning-based approach with a large dataset from real-world printing scenarios. Finally, we evaluate our models on a different domain of aerial imagery change detection (AICD). Our best method clearly outperforms the state-of-the-art baseline on this dataset.
[ { "created": "Mon, 13 Jan 2020 15:07:13 GMT", "version": "v1" } ]
2020-01-14
[ [ "Haik", "Oren", "" ], [ "Perry", "Oded", "" ], [ "Chen", "Eli", "" ], [ "Klammer", "Peter", "" ] ]
We present a novel approach for inspecting variable data prints (VDP) with an ultra-low false alarm rate (0.005%) and potential applicability to other real-world problems. The system is based on a comparison between two images: a reference image and an image captured by low-cost scanners. The comparison task is challenging as low-cost imaging systems create artifacts that may erroneously be classified as true (genuine) defects. To address this challenge we introduce two new fusion methods, for change detection applications, which are both fast and efficient. The first is an early fusion method that combines the two input images into a single pseudo-color image. The second, called Change-Detection Single Shot Detector (CD-SSD) leverages the SSD by fusing features in the middle of the network. We demonstrate the effectiveness of the proposed deep learning-based approach with a large dataset from real-world printing scenarios. Finally, we evaluate our models on a different domain of aerial imagery change detection (AICD). Our best method clearly outperforms the state-of-the-art baseline on this dataset.
2402.18117
Changqi Wang
Haoyu Xie, Changqi Wang, Jian Zhao, Yang Liu, Jun Dan, Chong Fu, Baigui Sun
PRCL: Probabilistic Representation Contrastive Learning for Semi-Supervised Semantic Segmentation
19 pages, 11 figures
null
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Tremendous breakthroughs have been made in Semi-Supervised Semantic Segmentation (S4) through contrastive learning. However, due to limited annotations, the guidance on unlabeled images is generated by the model itself, which inevitably contains noise and disturbs the unsupervised training process. To address this issue, we propose a robust contrastive-based S4 framework, termed the Probabilistic Representation Contrastive Learning (PRCL) framework, to enhance the robustness of the unsupervised training process. We model the pixel-wise representation as Probabilistic Representations (PR) via a multivariate Gaussian distribution and tune the contribution of the ambiguous representations to tolerate the risk of inaccurate guidance in contrastive learning. Furthermore, we introduce Global Distribution Prototypes (GDP) by gathering all PRs throughout the whole training process. Since the GDP contains the information of all representations with the same class, it is robust to the instantaneous noise in representations and bears the intra-class variance of representations. In addition, we generate Virtual Negatives (VNs) based on GDP to take part in the contrastive learning process. Extensive experiments on two public benchmarks demonstrate the superiority of our PRCL framework.
[ { "created": "Wed, 28 Feb 2024 07:10:37 GMT", "version": "v1" } ]
2024-02-29
[ [ "Xie", "Haoyu", "" ], [ "Wang", "Changqi", "" ], [ "Zhao", "Jian", "" ], [ "Liu", "Yang", "" ], [ "Dan", "Jun", "" ], [ "Fu", "Chong", "" ], [ "Sun", "Baigui", "" ] ]
Tremendous breakthroughs have been made in Semi-Supervised Semantic Segmentation (S4) through contrastive learning. However, due to limited annotations, the guidance on unlabeled images is generated by the model itself, which inevitably contains noise and disturbs the unsupervised training process. To address this issue, we propose a robust contrastive-based S4 framework, termed the Probabilistic Representation Contrastive Learning (PRCL) framework, to enhance the robustness of the unsupervised training process. We model the pixel-wise representation as Probabilistic Representations (PR) via a multivariate Gaussian distribution and tune the contribution of the ambiguous representations to tolerate the risk of inaccurate guidance in contrastive learning. Furthermore, we introduce Global Distribution Prototypes (GDP) by gathering all PRs throughout the whole training process. Since the GDP contains the information of all representations with the same class, it is robust to the instantaneous noise in representations and bears the intra-class variance of representations. In addition, we generate Virtual Negatives (VNs) based on GDP to take part in the contrastive learning process. Extensive experiments on two public benchmarks demonstrate the superiority of our PRCL framework.
2109.15200
Zhao Hengling
Hengling Zhao, Yipeng Liu, Xiaolin Huang and Ce Zhu
Semi-tensor Product-based Tensor Decomposition for Neural Network Compression
null
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The existing tensor networks adopt the conventional matrix product for connection. The classical matrix product requires strict dimensionality consistency between factors, which can result in redundancy in data representation. In this paper, the semi-tensor product is used to generalize the classical matrix product-based mode product to the semi-tensor mode product. As it permits the connection of two factors with different dimensionality, more flexible and compact tensor decompositions can be obtained with smaller factor sizes. Tucker decomposition, Tensor Train (TT) and Tensor Ring (TR) are common decompositions for low-rank compression of deep neural networks. The semi-tensor product is applied to these tensor decompositions to obtain their generalized versions, i.e., semi-tensor Tucker decomposition (STTu), semi-tensor train (STT) and semi-tensor ring (STR). Experimental results show that STTu, STT and STR achieve higher compression factors than the conventional tensor decompositions with the same accuracy but shorter training times in ResNet and WideResNet compression. With 2% accuracy degradation, the TT-RN (rank = 14) and the TR-WRN (rank = 16) only obtain 3 times and 99 times compression factors, while the STT-RN (rank = 14) and the STR-WRN (rank = 16) achieve 9 times and 179 times compression factors, respectively.
[ { "created": "Thu, 30 Sep 2021 15:18:14 GMT", "version": "v1" } ]
2021-10-01
[ [ "Zhao", "Hengling", "" ], [ "Liu", "Yipeng", "" ], [ "Huang", "Xiaolin", "" ], [ "Zhu", "Ce", "" ] ]
The existing tensor networks adopt the conventional matrix product for connection. The classical matrix product requires strict dimensionality consistency between factors, which can result in redundancy in data representation. In this paper, the semi-tensor product is used to generalize the classical matrix product-based mode product to the semi-tensor mode product. As it permits the connection of two factors with different dimensionality, more flexible and compact tensor decompositions can be obtained with smaller factor sizes. Tucker decomposition, Tensor Train (TT) and Tensor Ring (TR) are common decompositions for low-rank compression of deep neural networks. The semi-tensor product is applied to these tensor decompositions to obtain their generalized versions, i.e., semi-tensor Tucker decomposition (STTu), semi-tensor train (STT) and semi-tensor ring (STR). Experimental results show that STTu, STT and STR achieve higher compression factors than the conventional tensor decompositions with the same accuracy but shorter training times in ResNet and WideResNet compression. With 2% accuracy degradation, the TT-RN (rank = 14) and the TR-WRN (rank = 16) only obtain 3 times and 99 times compression factors, while the STT-RN (rank = 14) and the STR-WRN (rank = 16) achieve 9 times and 179 times compression factors, respectively.
2002.03704
Sebastian Farquhar
Sebastian Farquhar, Lewis Smith, Yarin Gal
Liberty or Depth: Deep Bayesian Neural Nets Do Not Need Complex Weight Posterior Approximations
Advances In Neural Information Processing Systems. 2020
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We challenge the longstanding assumption that the mean-field approximation for variational inference in Bayesian neural networks is severely restrictive, and show this is not the case in deep networks. We prove several results indicating that deep mean-field variational weight posteriors can induce similar distributions in function-space to those induced by shallower networks with complex weight posteriors. We validate our theoretical contributions empirically, both through examination of the weight posterior using Hamiltonian Monte Carlo in small models and by comparing diagonal- to structured-covariance in large settings. Since complex variational posteriors are often expensive and cumbersome to implement, our results suggest that using mean-field variational inference in a deeper model is both a practical and theoretically justified alternative to structured approximations.
[ { "created": "Mon, 10 Feb 2020 13:11:45 GMT", "version": "v1" }, { "created": "Wed, 8 Jul 2020 10:39:50 GMT", "version": "v2" }, { "created": "Mon, 2 Nov 2020 11:55:29 GMT", "version": "v3" }, { "created": "Wed, 10 Mar 2021 09:19:13 GMT", "version": "v4" } ]
2021-03-11
[ [ "Farquhar", "Sebastian", "" ], [ "Smith", "Lewis", "" ], [ "Gal", "Yarin", "" ] ]
We challenge the longstanding assumption that the mean-field approximation for variational inference in Bayesian neural networks is severely restrictive, and show this is not the case in deep networks. We prove several results indicating that deep mean-field variational weight posteriors can induce similar distributions in function-space to those induced by shallower networks with complex weight posteriors. We validate our theoretical contributions empirically, both through examination of the weight posterior using Hamiltonian Monte Carlo in small models and by comparing diagonal- to structured-covariance in large settings. Since complex variational posteriors are often expensive and cumbersome to implement, our results suggest that using mean-field variational inference in a deeper model is both a practical and theoretically justified alternative to structured approximations.
1303.2223
C. Titus Brown
Eric McDonald and C. Titus Brown
khmer: Working with Big Data in Bioinformatics
Invited chapter for forthcoming book on Performance of Open Source Applications
null
null
null
cs.CE q-bio.GN
http://creativecommons.org/licenses/by/3.0/
We introduce design and optimization considerations for the 'khmer' package.
[ { "created": "Sat, 9 Mar 2013 15:34:25 GMT", "version": "v1" } ]
2013-03-12
[ [ "McDonald", "Eric", "" ], [ "Brown", "C. Titus", "" ] ]
We introduce design and optimization considerations for the 'khmer' package.
1009.3145
Petros Boufounos
Petros T. Boufounos
Universal Rate-Efficient Scalar Quantization
null
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Scalar quantization is the most practical and straightforward approach to signal quantization. However, it has been shown that scalar quantization of oversampled or Compressively Sensed signals can be inefficient in terms of the rate-distortion trade-off, especially as the oversampling rate or the sparsity of the signal increases. In this paper, we modify the scalar quantizer to have discontinuous quantization regions. We demonstrate that with this modification it is possible to achieve exponential decay of the quantization error as a function of the oversampling rate instead of the quadratic decay exhibited by current approaches. Our approach is universal in the sense that prior knowledge of the signal model is not necessary in the quantizer design, only in the reconstruction. Thus, we demonstrate that it is possible to reduce the quantization error by incorporating side information on the acquired signal, such as sparse signal models or signal similarity with known signals. In doing so, we establish a relationship between quantization performance and the Kolmogorov entropy of the signal model.
[ { "created": "Thu, 16 Sep 2010 11:14:09 GMT", "version": "v1" }, { "created": "Mon, 4 Oct 2010 17:56:24 GMT", "version": "v2" }, { "created": "Thu, 14 Jul 2011 23:53:44 GMT", "version": "v3" } ]
2011-07-18
[ [ "Boufounos", "Petros T.", "" ] ]
Scalar quantization is the most practical and straightforward approach to signal quantization. However, it has been shown that scalar quantization of oversampled or Compressively Sensed signals can be inefficient in terms of the rate-distortion trade-off, especially as the oversampling rate or the sparsity of the signal increases. In this paper, we modify the scalar quantizer to have discontinuous quantization regions. We demonstrate that with this modification it is possible to achieve exponential decay of the quantization error as a function of the oversampling rate instead of the quadratic decay exhibited by current approaches. Our approach is universal in the sense that prior knowledge of the signal model is not necessary in the quantizer design, only in the reconstruction. Thus, we demonstrate that it is possible to reduce the quantization error by incorporating side information on the acquired signal, such as sparse signal models or signal similarity with known signals. In doing so, we establish a relationship between quantization performance and the Kolmogorov entropy of the signal model.
2002.05153
Andrew Bennett
Andrew Bennett and Nathan Kallus
Efficient Policy Learning from Surrogate-Loss Classification Reductions
null
null
null
null
cs.LG econ.EM math.ST stat.ML stat.TH
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent work on policy learning from observational data has highlighted the importance of efficient policy evaluation and has proposed reductions to weighted (cost-sensitive) classification. But, efficient policy evaluation need not yield efficient estimation of policy parameters. We consider the estimation problem given by a weighted surrogate-loss classification reduction of policy learning with any score function, either direct, inverse-propensity weighted, or doubly robust. We show that, under a correct specification assumption, the weighted classification formulation need not be efficient for policy parameters. We draw a contrast to actual (possibly weighted) binary classification, where correct specification implies a parametric model, while for policy learning it only implies a semiparametric model. In light of this, we instead propose an estimation approach based on generalized method of moments, which is efficient for the policy parameters. We propose a particular method based on recent developments on solving moment problems using neural networks and demonstrate the efficiency and regret benefits of this method empirically.
[ { "created": "Wed, 12 Feb 2020 18:54:41 GMT", "version": "v1" } ]
2020-02-13
[ [ "Bennett", "Andrew", "" ], [ "Kallus", "Nathan", "" ] ]
Recent work on policy learning from observational data has highlighted the importance of efficient policy evaluation and has proposed reductions to weighted (cost-sensitive) classification. But, efficient policy evaluation need not yield efficient estimation of policy parameters. We consider the estimation problem given by a weighted surrogate-loss classification reduction of policy learning with any score function, either direct, inverse-propensity weighted, or doubly robust. We show that, under a correct specification assumption, the weighted classification formulation need not be efficient for policy parameters. We draw a contrast to actual (possibly weighted) binary classification, where correct specification implies a parametric model, while for policy learning it only implies a semiparametric model. In light of this, we instead propose an estimation approach based on generalized method of moments, which is efficient for the policy parameters. We propose a particular method based on recent developments on solving moment problems using neural networks and demonstrate the efficiency and regret benefits of this method empirically.
2011.09967
Bin Wang
Wanshi Hong, Cong Zhang, Cy Chan, Bin Wang
Electric Vehicle Charging Infrastructure Planning: A Scalable Computational Framework
null
null
null
null
cs.AI cs.NE eess.SP math.OC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The optimal charging infrastructure planning problem over a large geospatial area is challenging due to the increasing network sizes of the transportation system and the electric grid. The coupling between the electric vehicle travel behaviors and charging events is therefore complex. This paper focuses on the demonstration of a scalable computational framework for the electric vehicle charging infrastructure planning over the tightly integrated transportation and electric grid networks. On the transportation side, a charging profile generation strategy is proposed leveraging the EV energy consumption model, trip routing, and charger selection methods. On the grid side, a genetic algorithm is utilized within the optimal power flow program to solve the optimal charger placement problem with integer variables by adaptively evaluating candidate solutions in the current iteration and generating new solutions for the next iterations.
[ { "created": "Tue, 17 Nov 2020 16:48:07 GMT", "version": "v1" } ]
2020-11-20
[ [ "Hong", "Wanshi", "" ], [ "Zhang", "Cong", "" ], [ "Chan", "Cy", "" ], [ "Wang", "Bin", "" ] ]
The optimal charging infrastructure planning problem over a large geospatial area is challenging due to the increasing network sizes of the transportation system and the electric grid. The coupling between the electric vehicle travel behaviors and charging events is therefore complex. This paper focuses on the demonstration of a scalable computational framework for the electric vehicle charging infrastructure planning over the tightly integrated transportation and electric grid networks. On the transportation side, a charging profile generation strategy is proposed leveraging the EV energy consumption model, trip routing, and charger selection methods. On the grid side, a genetic algorithm is utilized within the optimal power flow program to solve the optimal charger placement problem with integer variables by adaptively evaluating candidate solutions in the current iteration and generating new solutions for the next iterations.
1502.06108
Xiao Lin
Xiao Lin, Devi Parikh
Don't Just Listen, Use Your Imagination: Leveraging Visual Common Sense for Non-Visual Tasks
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Artificial agents today can answer factual questions. But they fall short on questions that require common sense reasoning. Perhaps this is because most existing common sense databases rely on text to learn and represent knowledge. But much of common sense knowledge is unwritten - partly because it tends not to be interesting enough to talk about, and partly because some common sense is unnatural to articulate in text. While unwritten, it is not unseen. In this paper we leverage semantic common sense knowledge learned from images - i.e. visual common sense - in two textual tasks: fill-in-the-blank and visual paraphrasing. We propose to "imagine" the scene behind the text, and leverage visual cues from the "imagined" scenes in addition to textual cues while answering these questions. We imagine the scenes as a visual abstraction. Our approach outperforms a strong text-only baseline on these tasks. Our proposed tasks can serve as benchmarks to quantitatively evaluate progress in solving tasks that go "beyond recognition". Our code and datasets are publicly available.
[ { "created": "Sat, 21 Feb 2015 15:25:40 GMT", "version": "v1" }, { "created": "Tue, 5 May 2015 18:54:05 GMT", "version": "v2" }, { "created": "Wed, 29 Jul 2015 03:04:19 GMT", "version": "v3" } ]
2015-07-30
[ [ "Lin", "Xiao", "" ], [ "Parikh", "Devi", "" ] ]
Artificial agents today can answer factual questions. But they fall short on questions that require common sense reasoning. Perhaps this is because most existing common sense databases rely on text to learn and represent knowledge. But much of common sense knowledge is unwritten - partly because it tends not to be interesting enough to talk about, and partly because some common sense is unnatural to articulate in text. While unwritten, it is not unseen. In this paper we leverage semantic common sense knowledge learned from images - i.e. visual common sense - in two textual tasks: fill-in-the-blank and visual paraphrasing. We propose to "imagine" the scene behind the text, and leverage visual cues from the "imagined" scenes in addition to textual cues while answering these questions. We imagine the scenes as a visual abstraction. Our approach outperforms a strong text-only baseline on these tasks. Our proposed tasks can serve as benchmarks to quantitatively evaluate progress in solving tasks that go "beyond recognition". Our code and datasets are publicly available.
1612.07117
Nam Khanh Tran
Nam Khanh Tran
Classification and Learning-to-rank Approaches for Cross-Device Matching at CIKM Cup 2016
CIKM Cup 2016
null
null
null
cs.IR cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we propose two methods for tackling the problem of cross-device matching for online advertising at CIKM Cup 2016. The first method considers the matching problem as a binary classification task and solves it by utilizing ensemble learning techniques. The second method defines the matching problem as a ranking task and effectively solves it using learning-to-rank algorithms. The results show that both proposed methods are promising, with the ranking-based method outperforming the classification-based method on the task.
[ { "created": "Tue, 20 Dec 2016 15:02:41 GMT", "version": "v1" } ]
2016-12-22
[ [ "Tran", "Nam Khanh", "" ] ]
In this paper, we propose two methods for tackling the problem of cross-device matching for online advertising at CIKM Cup 2016. The first method considers the matching problem as a binary classification task and solves it by utilizing ensemble learning techniques. The second method defines the matching problem as a ranking task and effectively solves it using learning-to-rank algorithms. The results show that both proposed methods are promising, with the ranking-based method outperforming the classification-based method on the task.
2010.09810
Congyu Wu
Congyu Wu
Connections between Relational Event Model and Inverse Reinforcement Learning for Characterizing Group Interaction Sequences
null
null
null
null
cs.LG cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we explore previously unidentified connections between relational event model (REM) from the field of network science and inverse reinforcement learning (IRL) from the field of machine learning with respect to their ability to characterize sequences of directed social interaction events in group settings. REM is a conventional approach to tackle such a problem whereas the application of IRL is a largely unbeaten path. We begin by examining the mathematical components of both REM and IRL and find straightforward analogies between the two methods as well as unique characteristics of the IRL approach. We demonstrate the special utility of IRL in characterizing group social interactions with an empirical experiment, in which we use IRL to infer individual behavioral preferences based on a sequence of directed communication events from a group of virtual-reality game players interacting and cooperating to accomplish a shared goal. Our comparison and experiment introduce fresh perspectives for social behavior analytics and help inspire new research opportunities at the nexus of social network analysis and machine learning.
[ { "created": "Mon, 19 Oct 2020 19:40:29 GMT", "version": "v1" } ]
2020-10-21
[ [ "Wu", "Congyu", "" ] ]
In this paper we explore previously unidentified connections between relational event model (REM) from the field of network science and inverse reinforcement learning (IRL) from the field of machine learning with respect to their ability to characterize sequences of directed social interaction events in group settings. REM is a conventional approach to tackle such a problem whereas the application of IRL is a largely unbeaten path. We begin by examining the mathematical components of both REM and IRL and find straightforward analogies between the two methods as well as unique characteristics of the IRL approach. We demonstrate the special utility of IRL in characterizing group social interactions with an empirical experiment, in which we use IRL to infer individual behavioral preferences based on a sequence of directed communication events from a group of virtual-reality game players interacting and cooperating to accomplish a shared goal. Our comparison and experiment introduce fresh perspectives for social behavior analytics and help inspire new research opportunities at the nexus of social network analysis and machine learning.
2306.00858
Simon Keizer
Simon Keizer, Caroline Dockes, Norbert Braunschweiler, Svetlana Stoyanchev, Rama Doddipatla
Adversarial learning of neural user simulators for dialogue policy optimisation
UK Speech 2023
null
null
null
cs.CL
http://creativecommons.org/licenses/by-nc-nd/4.0/
Reinforcement learning based dialogue policies are typically trained in interaction with a user simulator. To obtain an effective and robust policy, this simulator should generate user behaviour that is both realistic and varied. Current data-driven simulators are trained to accurately model the user behaviour in a dialogue corpus. We propose an alternative method using adversarial learning, with the aim to simulate realistic user behaviour with more variation. We train and evaluate several simulators on a corpus of restaurant search dialogues, and then use them to train dialogue system policies. In policy cross-evaluation experiments we demonstrate that an adversarially trained simulator produces policies with 8.3% higher success rate than those trained with a maximum likelihood simulator. Subjective results from a crowd-sourced dialogue system user evaluation confirm the effectiveness of adversarially training user simulators.
[ { "created": "Thu, 1 Jun 2023 16:17:16 GMT", "version": "v1" } ]
2023-06-02
[ [ "Keizer", "Simon", "" ], [ "Dockes", "Caroline", "" ], [ "Braunschweiler", "Norbert", "" ], [ "Stoyanchev", "Svetlana", "" ], [ "Doddipatla", "Rama", "" ] ]
Reinforcement learning based dialogue policies are typically trained in interaction with a user simulator. To obtain an effective and robust policy, this simulator should generate user behaviour that is both realistic and varied. Current data-driven simulators are trained to accurately model the user behaviour in a dialogue corpus. We propose an alternative method using adversarial learning, with the aim to simulate realistic user behaviour with more variation. We train and evaluate several simulators on a corpus of restaurant search dialogues, and then use them to train dialogue system policies. In policy cross-evaluation experiments we demonstrate that an adversarially trained simulator produces policies with 8.3% higher success rate than those trained with a maximum likelihood simulator. Subjective results from a crowd-sourced dialogue system user evaluation confirm the effectiveness of adversarially training user simulators.
2408.02320
Yuxin Chen
Gen Li and Yuting Wei and Yuejie Chi and Yuxin Chen
A Sharp Convergence Theory for The Probability Flow ODEs of Diffusion Models
This manuscript presents improved theory for probability flow ODEs compared to its earlier version arXiv:2306.09251
null
null
null
cs.LG cs.NA eess.SP math.NA math.ST stat.ML stat.TH
http://creativecommons.org/licenses/by/4.0/
Diffusion models, which convert noise into new data instances by learning to reverse a diffusion process, have become a cornerstone in contemporary generative modeling. In this work, we develop non-asymptotic convergence theory for a popular diffusion-based sampler (i.e., the probability flow ODE sampler) in discrete time, assuming access to $\ell_2$-accurate estimates of the (Stein) score functions. For distributions in $\mathbb{R}^d$, we prove that $d/\varepsilon$ iterations -- modulo some logarithmic and lower-order terms -- are sufficient to approximate the target distribution to within $\varepsilon$ total-variation distance. This is the first result establishing nearly linear dimension-dependency (in $d$) for the probability flow ODE sampler. Imposing only minimal assumptions on the target data distribution (e.g., no smoothness assumption is imposed), our results also characterize how $\ell_2$ score estimation errors affect the quality of the data generation processes. In contrast to prior works, our theory is developed based on an elementary yet versatile non-asymptotic approach without the need of resorting to SDE and ODE toolboxes.
[ { "created": "Mon, 5 Aug 2024 09:02:24 GMT", "version": "v1" } ]
2024-08-06
[ [ "Li", "Gen", "" ], [ "Wei", "Yuting", "" ], [ "Chi", "Yuejie", "" ], [ "Chen", "Yuxin", "" ] ]
Diffusion models, which convert noise into new data instances by learning to reverse a diffusion process, have become a cornerstone in contemporary generative modeling. In this work, we develop non-asymptotic convergence theory for a popular diffusion-based sampler (i.e., the probability flow ODE sampler) in discrete time, assuming access to $\ell_2$-accurate estimates of the (Stein) score functions. For distributions in $\mathbb{R}^d$, we prove that $d/\varepsilon$ iterations -- modulo some logarithmic and lower-order terms -- are sufficient to approximate the target distribution to within $\varepsilon$ total-variation distance. This is the first result establishing nearly linear dimension-dependency (in $d$) for the probability flow ODE sampler. Imposing only minimal assumptions on the target data distribution (e.g., no smoothness assumption is imposed), our results also characterize how $\ell_2$ score estimation errors affect the quality of the data generation processes. In contrast to prior works, our theory is developed based on an elementary yet versatile non-asymptotic approach without the need of resorting to SDE and ODE toolboxes.
2105.04683
Mattia Rigotti
Rong Zhu, Mattia Rigotti
Deep Bandits Show-Off: Simple and Efficient Exploration with Deep Networks
null
35th Conference on Neural Information Processing Systems (NeurIPS 2021), Sydney, Australia
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Designing efficient exploration is central to Reinforcement Learning due to the fundamental problem posed by the exploration-exploitation dilemma. Bayesian exploration strategies like Thompson Sampling resolve this trade-off in a principled way by modeling and updating the distribution of the parameters of the action-value function, the outcome model of the environment. However, this technique becomes infeasible for complex environments due to the computational intractability of maintaining probability distributions over parameters of outcome models of corresponding complexity. Moreover, the approximation techniques introduced to mitigate this issue typically result in poor exploration-exploitation trade-offs, as observed in the case of deep neural network models with approximate posterior methods that have been shown to underperform in the deep bandit scenario. In this paper we introduce Sample Average Uncertainty (SAU), a simple and efficient uncertainty measure for contextual bandits. While Bayesian approaches like Thompson Sampling estimate outcome uncertainty indirectly by first quantifying the variability over the parameters of the outcome model, SAU is a frequentist approach that directly estimates the uncertainty of the outcomes based on the value predictions. Importantly, we show theoretically that the uncertainty measure estimated by SAU asymptotically matches the uncertainty provided by Thompson Sampling, as well as its regret bounds. Because of its simplicity, SAU can be seamlessly applied to deep contextual bandits as a very scalable drop-in replacement for epsilon-greedy exploration. We confirm empirically our theory by showing that SAU-based exploration outperforms current state-of-the-art deep Bayesian bandit methods on several real-world datasets at modest computation cost. Code is available at \url{https://github.com/ibm/sau-explore}.
[ { "created": "Mon, 10 May 2021 21:45:01 GMT", "version": "v1" }, { "created": "Tue, 26 Oct 2021 09:28:25 GMT", "version": "v2" } ]
2021-10-27
[ [ "Zhu", "Rong", "" ], [ "Rigotti", "Mattia", "" ] ]
Designing efficient exploration is central to Reinforcement Learning due to the fundamental problem posed by the exploration-exploitation dilemma. Bayesian exploration strategies like Thompson Sampling resolve this trade-off in a principled way by modeling and updating the distribution of the parameters of the action-value function, the outcome model of the environment. However, this technique becomes infeasible for complex environments due to the computational intractability of maintaining probability distributions over parameters of outcome models of corresponding complexity. Moreover, the approximation techniques introduced to mitigate this issue typically result in poor exploration-exploitation trade-offs, as observed in the case of deep neural network models with approximate posterior methods that have been shown to underperform in the deep bandit scenario. In this paper we introduce Sample Average Uncertainty (SAU), a simple and efficient uncertainty measure for contextual bandits. While Bayesian approaches like Thompson Sampling estimate outcome uncertainty indirectly by first quantifying the variability over the parameters of the outcome model, SAU is a frequentist approach that directly estimates the uncertainty of the outcomes based on the value predictions. Importantly, we show theoretically that the uncertainty measure estimated by SAU asymptotically matches the uncertainty provided by Thompson Sampling, as well as its regret bounds. Because of its simplicity, SAU can be seamlessly applied to deep contextual bandits as a very scalable drop-in replacement for epsilon-greedy exploration. We confirm empirically our theory by showing that SAU-based exploration outperforms current state-of-the-art deep Bayesian bandit methods on several real-world datasets at modest computation cost. Code is available at \url{https://github.com/ibm/sau-explore}.
2106.00846
Babatunji Omoniwa
Babatunji Omoniwa, Riaz Hussain, Muhammad Adil, Atif Shakeel, Ahmed Kamal Tahir, Qadeer Ul Hasan, and Shahzad A. Malik
An Optimal Relay Scheme for Outage Minimization in Fog-based Internet-of-Things (IoT) Networks
Accepted and Published in IEEE Internet of Things Journal
null
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Fog devices are beginning to play a key role in relaying data and services within the Internet-of-Things (IoT) ecosystem. These relays may be static or mobile, with the latter offering a new degree of freedom for performance improvement via careful relay mobility design. Besides that, power conservation has been a prevalent issue in IoT networks with devices being power-constrained, requiring optimal power-control mechanisms. In this paper, we consider a multi-tier fog-based IoT architecture where a mobile/static fog node acts as an amplify and forward relay that transmits received information from a sensor node to a higher hierarchically-placed static fog device, which offers some localized services. The outage probability of the presented scenario was efficiently minimized by jointly optimizing the mobility pattern and the transmit power of the fog relay. A closed-form analytical expression for the outage probability was derived. Furthermore, due to the intractability and non-convexity of the formulated problem, we applied an iterative algorithm based on the steepest descent method to arrive at a desirable objective. Simulations reveal that the outage probability was improved by 62.7% in the optimized-location fixed-power (OLFP) scheme, 79.3% in the optimized-power fixed-location (OPFL) scheme, and 94.2% in the optimized-location optimized-power (OLOP) scheme, as against the fixed-location and fixed-power (FLFP) scheme (i.e., without optimization). Lastly, we present an optimal relay selection strategy that chooses an appropriate relay node from randomly distributed relaying candidates.
[ { "created": "Tue, 1 Jun 2021 22:57:51 GMT", "version": "v1" } ]
2021-06-03
[ [ "Omoniwa", "Babatunji", "" ], [ "Hussain", "Riaz", "" ], [ "Adil", "Muhammad", "" ], [ "Shakeel", "Atif", "" ], [ "Tahir", "Ahmed Kamal", "" ], [ "Hasan", "Qadeer Ul", "" ], [ "Malik", "Shahzad A.", "" ] ]
Fog devices are beginning to play a key role in relaying data and services within the Internet-of-Things (IoT) ecosystem. These relays may be static or mobile, with the latter offering a new degree of freedom for performance improvement via careful relay mobility design. Besides that, power conservation has been a prevalent issue in IoT networks with devices being power-constrained, requiring optimal power-control mechanisms. In this paper, we consider a multi-tier fog-based IoT architecture where a mobile/static fog node acts as an amplify and forward relay that transmits received information from a sensor node to a higher hierarchically-placed static fog device, which offers some localized services. The outage probability of the presented scenario was efficiently minimized by jointly optimizing the mobility pattern and the transmit power of the fog relay. A closed-form analytical expression for the outage probability was derived. Furthermore, due to the intractability and non-convexity of the formulated problem, we applied an iterative algorithm based on the steepest descent method to arrive at a desirable objective. Simulations reveal that the outage probability was improved by 62.7% in the optimized-location fixed-power (OLFP) scheme, 79.3% in the optimized-power fixed-location (OPFL) scheme, and 94.2% in the optimized-location optimized-power (OLOP) scheme, as against the fixed-location and fixed-power (FLFP) scheme (i.e., without optimization). Lastly, we present an optimal relay selection strategy that chooses an appropriate relay node from randomly distributed relaying candidates.
2008.08750
Sayed Kamaledin Ghiasi-Shirazi
Ramin Zarei Sabzevar, Kamaledin Ghiasi-Shirazi, Ahad Harati
Prototype-based interpretation of the functionality of neurons in winner-take-all neural networks
null
null
null
null
cs.LG cs.NE stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Prototype-based learning (PbL) using a winner-take-all (WTA) network based on minimum Euclidean distance (ED-WTA) is an intuitive approach to multiclass classification. By constructing meaningful class centers, PbL provides higher interpretability and generalization than hyperplane-based learning (HbL) methods based on maximum Inner Product (IP-WTA) and can efficiently detect and reject samples that do not belong to any classes. In this paper, we first prove the equivalence of IP-WTA and ED-WTA from a representational point of view. Then, we show that naively using this equivalence leads to unintuitive ED-WTA networks in which the centers have high distances to data that they represent. We propose $\pm$ED-WTA which models each neuron with two prototypes: one positive prototype representing samples that are modeled by this neuron and a negative prototype representing the samples that are erroneously won by that neuron during training. We propose a novel training algorithm for the $\pm$ED-WTA network, which cleverly switches between updating the positive and negative prototypes and is essential to the emergence of interpretable prototypes. Unexpectedly, we observed that the negative prototype of each neuron is indistinguishably similar to the positive one. The rationale behind this observation is that the training data that are mistaken with a prototype are indeed similar to it. The main finding of this paper is this interpretation of the functionality of neurons as computing the difference between the distances to a positive and a negative prototype, which is in agreement with the BCM theory. In our experiments, we show that the proposed $\pm$ED-WTA method constructs highly interpretable prototypes that can be successfully used for detecting outlier and adversarial examples.
[ { "created": "Thu, 20 Aug 2020 03:15:37 GMT", "version": "v1" } ]
2020-08-21
[ [ "Sabzevar", "Ramin Zarei", "" ], [ "Ghiasi-Shirazi", "Kamaledin", "" ], [ "Harati", "Ahad", "" ] ]
Prototype-based learning (PbL) using a winner-take-all (WTA) network based on minimum Euclidean distance (ED-WTA) is an intuitive approach to multiclass classification. By constructing meaningful class centers, PbL provides higher interpretability and generalization than hyperplane-based learning (HbL) methods based on maximum Inner Product (IP-WTA) and can efficiently detect and reject samples that do not belong to any classes. In this paper, we first prove the equivalence of IP-WTA and ED-WTA from a representational point of view. Then, we show that naively using this equivalence leads to unintuitive ED-WTA networks in which the centers have high distances to data that they represent. We propose $\pm$ED-WTA which models each neuron with two prototypes: one positive prototype representing samples that are modeled by this neuron and a negative prototype representing the samples that are erroneously won by that neuron during training. We propose a novel training algorithm for the $\pm$ED-WTA network, which cleverly switches between updating the positive and negative prototypes and is essential to the emergence of interpretable prototypes. Unexpectedly, we observed that the negative prototype of each neuron is indistinguishably similar to the positive one. The rationale behind this observation is that the training data that are mistaken with a prototype are indeed similar to it. The main finding of this paper is this interpretation of the functionality of neurons as computing the difference between the distances to a positive and a negative prototype, which is in agreement with the BCM theory. In our experiments, we show that the proposed $\pm$ED-WTA method constructs highly interpretable prototypes that can be successfully used for detecting outlier and adversarial examples.
2011.05961
Orpaz Goldstein
Orpaz Goldstein, Mohammad Kachuee, Derek Shiell, Majid Sarrafzadeh
Real-Time Decentralized knowledge Transfer at the Edge
null
null
null
null
cs.LG
http://creativecommons.org/licenses/by-sa/4.0/
The proliferation of edge networks creates islands of learning agents working on local streams of data. Transferring knowledge between these agents in real-time without exposing private data allows for collaboration to decrease learning time and increase model confidence. Incorporating knowledge from data that a local model did not see creates an ability to debias a local model or add to classification abilities on data never before seen. Transferring knowledge in a selective decentralized approach enables models to retain their local insights, allowing for local flavors of a machine learning model. This approach suits the decentralized architecture of edge networks, as a local edge node will serve a community of learning agents that will likely encounter similar data. We propose a method based on knowledge distillation for pairwise knowledge transfer pipelines from models trained on non-i.i.d. data and compare it to other popular knowledge transfer methods. Additionally, we test different scenarios of knowledge transfer network construction and show the practicality of our approach. Our experiments show knowledge transfer using our model outperforms standard methods in a real-time transfer scenario.
[ { "created": "Wed, 11 Nov 2020 18:26:57 GMT", "version": "v1" }, { "created": "Fri, 25 Dec 2020 00:16:58 GMT", "version": "v2" }, { "created": "Tue, 28 Sep 2021 23:55:34 GMT", "version": "v3" }, { "created": "Fri, 1 Oct 2021 16:12:29 GMT", "version": "v4" } ]
2021-10-04
[ [ "Goldstein", "Orpaz", "" ], [ "Kachuee", "Mohammad", "" ], [ "Shiell", "Derek", "" ], [ "Sarrafzadeh", "Majid", "" ] ]
The proliferation of edge networks creates islands of learning agents working on local streams of data. Transferring knowledge between these agents in real-time without exposing private data allows for collaboration to decrease learning time and increase model confidence. Incorporating knowledge from data that a local model did not see creates an ability to debias a local model or add to classification abilities on data never before seen. Transferring knowledge in a selective decentralized approach enables models to retain their local insights, allowing for local flavors of a machine learning model. This approach suits the decentralized architecture of edge networks, as a local edge node will serve a community of learning agents that will likely encounter similar data. We propose a method based on knowledge distillation for pairwise knowledge transfer pipelines from models trained on non-i.i.d. data and compare it to other popular knowledge transfer methods. Additionally, we test different scenarios of knowledge transfer network construction and show the practicality of our approach. Our experiments show knowledge transfer using our model outperforms standard methods in a real-time transfer scenario.
2304.02560
Kumara Kahatapitiya
Kumara Kahatapitiya, Anurag Arnab, Arsha Nagrani, Michael S. Ryoo
VicTR: Video-conditioned Text Representations for Activity Recognition
To appear at CVPR 2024
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Vision-Language models (VLMs) have excelled in the image-domain -- especially in zero-shot settings -- thanks to the availability of vast pretraining data (i.e., paired image-text samples). However, for videos, such paired data is not as abundant. Therefore, video-VLMs are usually designed by adapting pretrained image-VLMs to the video-domain, instead of training from scratch. All such recipes rely on augmenting visual embeddings with temporal information (i.e., image $\rightarrow$ video), often keeping text embeddings unchanged or even being discarded. In this paper, we argue the contrary, that better video-VLMs can be designed by focusing more on augmenting text, rather than visual information. More specifically, we introduce Video-conditioned Text Representations (VicTR): a form of text embeddings optimized w.r.t. visual embeddings, creating a more-flexible contrastive latent space. Our model can further make use of freely-available semantic information, in the form of visually-grounded auxiliary text (e.g. object or scene information). We evaluate our model on few-shot, zero-shot (HMDB-51, UCF-101), short-form (Kinetics-400) and long-form (Charades) activity recognition benchmarks, showing strong performance among video-VLMs.
[ { "created": "Wed, 5 Apr 2023 16:30:36 GMT", "version": "v1" }, { "created": "Fri, 29 Mar 2024 16:56:33 GMT", "version": "v2" } ]
2024-04-01
[ [ "Kahatapitiya", "Kumara", "" ], [ "Arnab", "Anurag", "" ], [ "Nagrani", "Arsha", "" ], [ "Ryoo", "Michael S.", "" ] ]
Vision-Language models (VLMs) have excelled in the image-domain -- especially in zero-shot settings -- thanks to the availability of vast pretraining data (i.e., paired image-text samples). However, for videos, such paired data is not as abundant. Therefore, video-VLMs are usually designed by adapting pretrained image-VLMs to the video-domain, instead of training from scratch. All such recipes rely on augmenting visual embeddings with temporal information (i.e., image $\rightarrow$ video), often keeping text embeddings unchanged or even being discarded. In this paper, we argue the contrary, that better video-VLMs can be designed by focusing more on augmenting text, rather than visual information. More specifically, we introduce Video-conditioned Text Representations (VicTR): a form of text embeddings optimized w.r.t. visual embeddings, creating a more-flexible contrastive latent space. Our model can further make use of freely-available semantic information, in the form of visually-grounded auxiliary text (e.g. object or scene information). We evaluate our model on few-shot, zero-shot (HMDB-51, UCF-101), short-form (Kinetics-400) and long-form (Charades) activity recognition benchmarks, showing strong performance among video-VLMs.
2105.14465
Pradipta Biswas
Gowdham Prabhakar and Pradipta Biswas
A Brief Survey on Interactive Automotive UI
null
null
null
null
cs.HC
http://creativecommons.org/licenses/by/4.0/
Automotive User Interface (AutoUI) is a relatively new discipline in the context of both Transportation Engineering and Human Machine Interaction (HMI). It covers various HMI aspects both inside and outside the vehicle, ranging from operating the vehicle itself and undertaking various secondary tasks to driver behaviour analysis, cognitive load estimation, and so on. This review paper discusses various interactive HMIs inside a vehicle used for undertaking secondary tasks. We divide recent HMIs into four sections covering virtual touch interfaces, wearable devices, speech recognition and non-visual interfaces, and eye-gaze-controlled systems. Finally, we summarize the advantages and disadvantages of the various technologies.
[ { "created": "Sun, 30 May 2021 08:37:35 GMT", "version": "v1" } ]
2021-06-01
[ [ "Prabhakar", "Gowdham", "" ], [ "Biswas", "Pradipta", "" ] ]
Automotive User Interface (AutoUI) is a relatively new discipline in the context of both Transportation Engineering and Human Machine Interaction (HMI). It covers various HMI aspects both inside and outside the vehicle, ranging from operating the vehicle itself and undertaking various secondary tasks to driver behaviour analysis, cognitive load estimation, and so on. This review paper discusses various interactive HMIs inside a vehicle used for undertaking secondary tasks. We divide recent HMIs into four sections covering virtual touch interfaces, wearable devices, speech recognition and non-visual interfaces, and eye-gaze-controlled systems. Finally, we summarize the advantages and disadvantages of the various technologies.
1707.03319
Rahmat Widia Sembiring
Dewi Sartika Ginting, Kristin Sitompul, Jasael Simanulang, Rahmat Widia Sembiring, Muhammad Zarlis
Modification of Symmetric Cryptography with Combining Affine Chiper and Caesar Chiper which Dynamic Nature in Matrix of Chiper Transposition by Applying Flow Pattern in the Planting Rice
2nd International Conference of Computer, Environment, Social Science, Health Science, Agriculture & Technology (ICEST) 2017
Advances in Science, Technology and Engineering Systems Journal (ASTESJ), Adv. Sci. Technol. Eng. Syst. J. 2(5), 1-5 (2017)
10.25046/aj020502
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Classical cryptography refers to methods of disguising messages that people used before the computer era. The goal is to protect information by encoding it. This paper describes a modification of classical algorithms that makes it difficult for cryptanalysts to steal undisclosed messages. Three classical algorithms are combined: the affine cipher, the Caesar cipher, and the transposition cipher. The affine and Caesar ciphertexts can be iterated up to the value of the initial key, and since the results vary with the key value, the affine and Caesar ciphers are dynamic in this scheme. The affine and Caesar results are then combined in the transposition cipher matrix by applying the flow pattern of paths used in planting rice, and the ciphertext is finally retrieved by applying the same rice-planting path pattern. The final output is shown in the form of binary digits, so that 5 characters are expanded into 80 scrambled bits. As a result, a cryptanalyst will find it much harder, and will need a very long time, to break information that has been kept secret.
[ { "created": "Tue, 11 Jul 2017 15:19:33 GMT", "version": "v1" } ]
2017-07-12
[ [ "Ginting", "Dewi Sartika", "" ], [ "Sitompul", "Kristin", "" ], [ "Simanulang", "Jasael", "" ], [ "Sembiring", "Rahmat Widia", "" ], [ "Zarlis", "Muhammad", "" ] ]
Classical cryptography refers to methods of disguising messages that people used before the computer era. The goal is to protect information by encoding it. This paper describes a modification of classical algorithms that makes it difficult for cryptanalysts to steal undisclosed messages. Three classical algorithms are combined: the affine cipher, the Caesar cipher, and the transposition cipher. The affine and Caesar ciphertexts can be iterated up to the value of the initial key, and since the results vary with the key value, the affine and Caesar ciphers are dynamic in this scheme. The affine and Caesar results are then combined in the transposition cipher matrix by applying the flow pattern of paths used in planting rice, and the ciphertext is finally retrieved by applying the same rice-planting path pattern. The final output is shown in the form of binary digits, so that 5 characters are expanded into 80 scrambled bits. As a result, a cryptanalyst will find it much harder, and will need a very long time, to break information that has been kept secret.
2306.02658
Etrit Haxholli
Etrit Haxholli, Marco Lorenzi
Faster Training of Diffusion Models and Improved Density Estimation via Parallel Score Matching
null
null
null
null
cs.LG stat.ML
http://creativecommons.org/licenses/by/4.0/
In Diffusion Probabilistic Models (DPMs), the task of modeling the score evolution via a single time-dependent neural network necessitates extended training periods and may potentially impede modeling flexibility and capacity. To counteract these challenges, we propose leveraging the independence of learning tasks at different time points inherent to DPMs. More specifically, we partition the learning task by utilizing independent networks, each dedicated to learning the evolution of scores within a specific time sub-interval. Further, inspired by residual flows, we extend this strategy to its logical conclusion by employing separate networks to independently model the score at each individual time point. As empirically demonstrated on synthetic and image datasets, our approach not only significantly accelerates the training process by introducing an additional layer of parallelization atop data parallelization, but it also enhances density estimation performance when compared to the conventional training methodology for DPMs.
[ { "created": "Mon, 5 Jun 2023 07:47:30 GMT", "version": "v1" } ]
2023-06-06
[ [ "Haxholli", "Etrit", "" ], [ "Lorenzi", "Marco", "" ] ]
In Diffusion Probabilistic Models (DPMs), the task of modeling the score evolution via a single time-dependent neural network necessitates extended training periods and may potentially impede modeling flexibility and capacity. To counteract these challenges, we propose leveraging the independence of learning tasks at different time points inherent to DPMs. More specifically, we partition the learning task by utilizing independent networks, each dedicated to learning the evolution of scores within a specific time sub-interval. Further, inspired by residual flows, we extend this strategy to its logical conclusion by employing separate networks to independently model the score at each individual time point. As empirically demonstrated on synthetic and image datasets, our approach not only significantly accelerates the training process by introducing an additional layer of parallelization atop data parallelization, but it also enhances density estimation performance when compared to the conventional training methodology for DPMs.
2307.11335
Wenbo Hu
Wenbo Hu, Yuling Wang, Lin Ma, Bangbang Yang, Lin Gao, Xiao Liu, Yuewen Ma
Tri-MipRF: Tri-Mip Representation for Efficient Anti-Aliasing Neural Radiance Fields
Accepted to ICCV 2023. Project page: https://wbhu.github.io/projects/Tri-MipRF
ICCV 2023
null
null
cs.CV cs.AI cs.GR
http://creativecommons.org/licenses/by-nc-sa/4.0/
Despite the tremendous progress in neural radiance fields (NeRF), we still face a dilemma of the trade-off between quality and efficiency, e.g., MipNeRF presents fine-detailed and anti-aliased renderings but takes days for training, while Instant-ngp can accomplish the reconstruction in a few minutes but suffers from blurring or aliasing when rendering at various distances or resolutions due to ignoring the sampling area. To this end, we propose a novel Tri-Mip encoding that enables both instant reconstruction and anti-aliased high-fidelity rendering for neural radiance fields. The key is to factorize the pre-filtered 3D feature spaces in three orthogonal mipmaps. In this way, we can efficiently perform 3D area sampling by taking advantage of 2D pre-filtered feature maps, which significantly elevates the rendering quality without sacrificing efficiency. To cope with the novel Tri-Mip representation, we propose a cone-casting rendering technique to efficiently sample anti-aliased 3D features with the Tri-Mip encoding considering both pixel imaging and observing distance. Extensive experiments on both synthetic and real-world datasets demonstrate our method achieves state-of-the-art rendering quality and reconstruction speed while maintaining a compact representation that reduces model size by 25% compared with Instant-ngp.
[ { "created": "Fri, 21 Jul 2023 03:47:28 GMT", "version": "v1" } ]
2023-07-24
[ [ "Hu", "Wenbo", "" ], [ "Wang", "Yuling", "" ], [ "Ma", "Lin", "" ], [ "Yang", "Bangbang", "" ], [ "Gao", "Lin", "" ], [ "Liu", "Xiao", "" ], [ "Ma", "Yuewen", "" ] ]
Despite the tremendous progress in neural radiance fields (NeRF), we still face a dilemma of the trade-off between quality and efficiency, e.g., MipNeRF presents fine-detailed and anti-aliased renderings but takes days for training, while Instant-ngp can accomplish the reconstruction in a few minutes but suffers from blurring or aliasing when rendering at various distances or resolutions due to ignoring the sampling area. To this end, we propose a novel Tri-Mip encoding that enables both instant reconstruction and anti-aliased high-fidelity rendering for neural radiance fields. The key is to factorize the pre-filtered 3D feature space into three orthogonal mipmaps. In this way, we can efficiently perform 3D area sampling by taking advantage of 2D pre-filtered feature maps, which significantly elevates the rendering quality without sacrificing efficiency. To cope with the novel Tri-Mip representation, we propose a cone-casting rendering technique to efficiently sample anti-aliased 3D features with the Tri-Mip encoding, considering both pixel imaging and observing distance. Extensive experiments on both synthetic and real-world datasets demonstrate that our method achieves state-of-the-art rendering quality and reconstruction speed while maintaining a compact representation that reduces model size by 25% compared with Instant-ngp.
1802.07546
Johannes Fauser
Johannes Fauser and Georgios Sakas and Anirban Mukhopadhyay
Planning Nonlinear Access Paths for Temporal Bone Surgery
To be published in International Journal on Computer Assisted Radiology and Surgery (IJCARS), Spl. Issue IPCAI 2018
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Purpose: Interventions at the otobasis operate in the narrow region of the temporal bone where several highly sensitive organs define obstacles with minimal clearance for surgical instruments. Nonlinear trajectories for potential minimally-invasive interventions can provide larger distances to risk structures and optimized orientations of surgical instruments, thus improving clinical outcomes when compared to existing linear approaches. In this paper, we present fast and accurate planning methods for such nonlinear access paths. Methods: We define a specific motion planning problem in SE(3) = R3 x SO(3) with notable constraints in computation time and goal pose that reflect the requirements of temporal bone surgery. We then present k-RRT-Connect: two suitable motion planners based on bidirectional Rapidly-exploring Random Trees (RRT) to solve this problem efficiently. Results: The benefits of k-RRT-Connect are demonstrated on real CT data of patients. Their general performance is shown on a large set of realistic synthetic anatomies. We also show that these new algorithms outperform state-of-the-art methods based on circular arcs or Bezier splines when applied to this specific problem. Conclusion: With this work we demonstrate that pre- and intra-operative planning of nonlinear access paths is possible for minimally-invasive surgeries at the otobasis.
[ { "created": "Wed, 21 Feb 2018 12:53:23 GMT", "version": "v1" } ]
2018-02-22
[ [ "Fauser", "Johannes", "" ], [ "Sakas", "Georgios", "" ], [ "Mukhopadhyay", "Anirban", "" ] ]
Purpose: Interventions at the otobasis operate in the narrow region of the temporal bone where several highly sensitive organs define obstacles with minimal clearance for surgical instruments. Nonlinear trajectories for potential minimally-invasive interventions can provide larger distances to risk structures and optimized orientations of surgical instruments, thus improving clinical outcomes when compared to existing linear approaches. In this paper, we present fast and accurate planning methods for such nonlinear access paths. Methods: We define a specific motion planning problem in SE(3) = R3 x SO(3) with notable constraints in computation time and goal pose that reflect the requirements of temporal bone surgery. We then present k-RRT-Connect: two suitable motion planners based on bidirectional Rapidly-exploring Random Trees (RRT) to solve this problem efficiently. Results: The benefits of k-RRT-Connect are demonstrated on real CT data of patients. Their general performance is shown on a large set of realistic synthetic anatomies. We also show that these new algorithms outperform state-of-the-art methods based on circular arcs or Bezier splines when applied to this specific problem. Conclusion: With this work we demonstrate that pre- and intra-operative planning of nonlinear access paths is possible for minimally-invasive surgeries at the otobasis.
2112.02779
Kwonyoung Ryu
Wei Dong, Kwonyoung Ryu, Michael Kaess, Jaesik Park
Revisiting LiDAR Registration and Reconstruction: A Range Image Perspective
14 pages, 9 figures. This paper is under review
null
null
null
cs.CV cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Spinning LiDAR data are prevalent for 3D vision tasks. Since LiDAR data are presented in the form of point clouds, expensive 3D operations are usually required. This paper revisits spinning LiDAR scan formation and presents a cylindrical range image representation with a ray-wise projection/unprojection model. It is built upon raw scans and supports lossless conversion from 2D to 3D, allowing fast 2D operations, including 2D index-based neighbor search and downsampling. We then propose, to the best of our knowledge, the first multi-scale registration and dense signed distance function (SDF) reconstruction system for LiDAR range images. We further collect a dataset of indoor and outdoor LiDAR scenes in the posed range image format. A comprehensive evaluation of registration and reconstruction is conducted on the proposed dataset and the KITTI dataset. Experiments demonstrate that our approach outperforms surface reconstruction baselines and achieves similar performance to state-of-the-art LiDAR registration methods, including a modern learning-based registration approach. Thanks to its simplicity, our registration runs at 100 Hz and SDF reconstruction runs in real time. The dataset and a modularized C++/Python toolbox will be released.
[ { "created": "Mon, 6 Dec 2021 04:28:32 GMT", "version": "v1" }, { "created": "Mon, 28 Mar 2022 22:38:28 GMT", "version": "v2" } ]
2022-03-30
[ [ "Dong", "Wei", "" ], [ "Ryu", "Kwonyoung", "" ], [ "Kaess", "Michael", "" ], [ "Park", "Jaesik", "" ] ]
Spinning LiDAR data are prevalent for 3D vision tasks. Since LiDAR data are presented in the form of point clouds, expensive 3D operations are usually required. This paper revisits spinning LiDAR scan formation and presents a cylindrical range image representation with a ray-wise projection/unprojection model. It is built upon raw scans and supports lossless conversion from 2D to 3D, allowing fast 2D operations, including 2D index-based neighbor search and downsampling. We then propose, to the best of our knowledge, the first multi-scale registration and dense signed distance function (SDF) reconstruction system for LiDAR range images. We further collect a dataset of indoor and outdoor LiDAR scenes in the posed range image format. A comprehensive evaluation of registration and reconstruction is conducted on the proposed dataset and the KITTI dataset. Experiments demonstrate that our approach outperforms surface reconstruction baselines and achieves similar performance to state-of-the-art LiDAR registration methods, including a modern learning-based registration approach. Thanks to its simplicity, our registration runs at 100 Hz and SDF reconstruction runs in real time. The dataset and a modularized C++/Python toolbox will be released.
2104.11568
Lorin Sweeney
Lorin Sweeney, Graham Healy, Alan F. Smeaton
The Influence of Audio on Video Memorability with an Audio Gestalt Regulated Video Memorability System
6 pages, 3 figures, 4 tables, paper accepted in CBMI 2021 for publication and oral presentation
null
10.1109/CBMI50038.2021.9461903
null
cs.MM cs.CV cs.LG
http://creativecommons.org/licenses/by/4.0/
Memories are the tethering threads that tie us to the world, and memorability is the measure of their tensile strength. The threads of memory are spun from fibres of many modalities, obscuring the contribution of a single fibre to a thread's overall tensile strength. Unfurling these fibres is the key to understanding the nature of their interaction, and how we can ultimately create more meaningful media content. In this paper, we examine the influence of audio on video recognition memorability, finding evidence to suggest that it can facilitate overall video recognition memorability for videos rich in high-level (gestalt) audio features. We introduce a novel multimodal deep learning-based late-fusion system that uses audio gestalt to estimate the influence of a given video's audio on its overall short-term recognition memorability, and selectively leverages audio features to make a prediction accordingly. We benchmark our audio gestalt based system on the Memento10k short-term video memorability dataset, achieving top-2 state-of-the-art results.
[ { "created": "Fri, 23 Apr 2021 12:53:33 GMT", "version": "v1" } ]
2021-07-02
[ [ "Sweeney", "Lorin", "" ], [ "Healy", "Graham", "" ], [ "Smeaton", "Alan F.", "" ] ]
Memories are the tethering threads that tie us to the world, and memorability is the measure of their tensile strength. The threads of memory are spun from fibres of many modalities, obscuring the contribution of a single fibre to a thread's overall tensile strength. Unfurling these fibres is the key to understanding the nature of their interaction, and how we can ultimately create more meaningful media content. In this paper, we examine the influence of audio on video recognition memorability, finding evidence to suggest that it can facilitate overall video recognition memorability for videos rich in high-level (gestalt) audio features. We introduce a novel multimodal deep learning-based late-fusion system that uses audio gestalt to estimate the influence of a given video's audio on its overall short-term recognition memorability, and selectively leverages audio features to make a prediction accordingly. We benchmark our audio gestalt based system on the Memento10k short-term video memorability dataset, achieving top-2 state-of-the-art results.
2105.01011
Peng Liang
Liming Fu, Peng Liang, Xueying Li, Chen Yang
A Machine Learning Based Ensemble Method for Automatic Multiclass Classification of Decisions
The 25th International Conference on Evaluation and Assessment in Software Engineering (EASE)
null
10.1145/3463274.3463325
null
cs.SE
http://creativecommons.org/licenses/by/4.0/
Stakeholders make various types of decisions with respect to requirements, design, management, and so on during the software development life cycle. Nevertheless, these decisions are typically not well documented and classified due to limited human resources, time, and budget. To this end, automatic approaches provide a promising way. In this paper, we aimed at automatically classifying decisions into five types to help stakeholders better document and understand decisions. First, we collected a dataset from the Hibernate developer mailing list. We then experimented with and evaluated 270 configurations regarding feature selection, feature extraction techniques, and machine learning classifiers to seek the best configuration for classifying decisions. In particular, we applied an ensemble learning method and constructed ensemble classifiers to compare the performance between ensemble classifiers and base classifiers. Our experiment results show that (1) feature selection can decently improve the classification results; (2) ensemble classifiers can outperform base classifiers provided that ensemble classifiers are well constructed; (3) BoW + 50% features selected by feature selection with an ensemble classifier that combines Naïve Bayes (NB), Logistic Regression (LR), and Support Vector Machine (SVM) achieves the best classification result (with a weighted precision of 0.750, a weighted recall of 0.739, and a weighted F1-score of 0.727) among all the configurations. Our work can benefit various types of stakeholders in software development through providing an automatic approach for effectively classifying decisions into specific types that are relevant to their interests.
[ { "created": "Mon, 3 May 2021 16:55:00 GMT", "version": "v1" }, { "created": "Tue, 4 May 2021 04:21:23 GMT", "version": "v2" } ]
2021-05-05
[ [ "Fu", "Liming", "" ], [ "Liang", "Peng", "" ], [ "Li", "Xueying", "" ], [ "Yang", "Chen", "" ] ]
Stakeholders make various types of decisions with respect to requirements, design, management, and so on during the software development life cycle. Nevertheless, these decisions are typically not well documented and classified due to limited human resources, time, and budget. To this end, automatic approaches provide a promising way. In this paper, we aimed at automatically classifying decisions into five types to help stakeholders better document and understand decisions. First, we collected a dataset from the Hibernate developer mailing list. We then experimented with and evaluated 270 configurations regarding feature selection, feature extraction techniques, and machine learning classifiers to seek the best configuration for classifying decisions. In particular, we applied an ensemble learning method and constructed ensemble classifiers to compare the performance between ensemble classifiers and base classifiers. Our experiment results show that (1) feature selection can decently improve the classification results; (2) ensemble classifiers can outperform base classifiers provided that ensemble classifiers are well constructed; (3) BoW + 50% features selected by feature selection with an ensemble classifier that combines Naïve Bayes (NB), Logistic Regression (LR), and Support Vector Machine (SVM) achieves the best classification result (with a weighted precision of 0.750, a weighted recall of 0.739, and a weighted F1-score of 0.727) among all the configurations. Our work can benefit various types of stakeholders in software development through providing an automatic approach for effectively classifying decisions into specific types that are relevant to their interests.
2407.04272
Dingwen Tao
Hao Feng, Boyuan Zhang, Fanjiang Ye, Min Si, Ching-Hsiang Chu, Jiannan Tian, Chunxing Yin, Summer Deng, Yuchen Hao, Pavan Balaji, Tong Geng, Dingwen Tao
Accelerating Communication in Deep Learning Recommendation Model Training with Dual-Level Adaptive Lossy Compression
accepted by SC '24
null
null
null
cs.LG cs.DC
http://creativecommons.org/licenses/by/4.0/
DLRM is a state-of-the-art recommendation system model that has gained widespread adoption across various industry applications. The large size of DLRM models, however, necessitates the use of multiple devices/GPUs for efficient training. A significant bottleneck in this process is the time-consuming all-to-all communication required to collect embedding data from all devices. To mitigate this, we introduce a method that employs error-bounded lossy compression to reduce the communication data size and accelerate DLRM training. We develop a novel error-bounded lossy compression algorithm, informed by an in-depth analysis of embedding data features, to achieve high compression ratios. Moreover, we introduce a dual-level adaptive strategy for error-bound adjustment, spanning both table-wise and iteration-wise aspects, to balance the compression benefits with the potential impacts on accuracy. We further optimize our compressor for PyTorch tensors on GPUs, minimizing compression overhead. Evaluation shows that our method achieves a 1.38$\times$ training speedup with a minimal accuracy impact.
[ { "created": "Fri, 5 Jul 2024 05:55:18 GMT", "version": "v1" }, { "created": "Mon, 8 Jul 2024 05:53:10 GMT", "version": "v2" }, { "created": "Thu, 11 Jul 2024 15:31:53 GMT", "version": "v3" } ]
2024-07-12
[ [ "Feng", "Hao", "" ], [ "Zhang", "Boyuan", "" ], [ "Ye", "Fanjiang", "" ], [ "Si", "Min", "" ], [ "Chu", "Ching-Hsiang", "" ], [ "Tian", "Jiannan", "" ], [ "Yin", "Chunxing", "" ], [ "Deng", "Summer", "" ], [ "Hao", "Yuchen", "" ], [ "Balaji", "Pavan", "" ], [ "Geng", "Tong", "" ], [ "Tao", "Dingwen", "" ] ]
DLRM is a state-of-the-art recommendation system model that has gained widespread adoption across various industry applications. The large size of DLRM models, however, necessitates the use of multiple devices/GPUs for efficient training. A significant bottleneck in this process is the time-consuming all-to-all communication required to collect embedding data from all devices. To mitigate this, we introduce a method that employs error-bounded lossy compression to reduce the communication data size and accelerate DLRM training. We develop a novel error-bounded lossy compression algorithm, informed by an in-depth analysis of embedding data features, to achieve high compression ratios. Moreover, we introduce a dual-level adaptive strategy for error-bound adjustment, spanning both table-wise and iteration-wise aspects, to balance the compression benefits with the potential impacts on accuracy. We further optimize our compressor for PyTorch tensors on GPUs, minimizing compression overhead. Evaluation shows that our method achieves a 1.38$\times$ training speedup with a minimal accuracy impact.
2403.02234
Fangzhou Hong
Fangzhou Hong, Jiaxiang Tang, Ziang Cao, Min Shi, Tong Wu, Zhaoxi Chen, Shuai Yang, Tengfei Wang, Liang Pan, Dahua Lin, Ziwei Liu
3DTopia: Large Text-to-3D Generation Model with Hybrid Diffusion Priors
Code available at https://github.com/3DTopia/3DTopia
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
We present a two-stage text-to-3D generation system, namely 3DTopia, which generates high-quality general 3D assets within 5 minutes using hybrid diffusion priors. The first stage samples from a 3D diffusion prior directly learned from 3D data. Specifically, it is powered by a text-conditioned tri-plane latent diffusion model, which quickly generates coarse 3D samples for fast prototyping. The second stage utilizes 2D diffusion priors to further refine the texture of coarse 3D models from the first stage. The refinement consists of both latent and pixel space optimization for high-quality texture generation. To facilitate the training of the proposed system, we clean and caption the largest open-source 3D dataset, Objaverse, by combining the power of vision language models and large language models. Experiment results are reported qualitatively and quantitatively to show the performance of the proposed system. Our codes and models are available at https://github.com/3DTopia/3DTopia
[ { "created": "Mon, 4 Mar 2024 17:26:28 GMT", "version": "v1" }, { "created": "Tue, 7 May 2024 03:25:50 GMT", "version": "v2" } ]
2024-05-08
[ [ "Hong", "Fangzhou", "" ], [ "Tang", "Jiaxiang", "" ], [ "Cao", "Ziang", "" ], [ "Shi", "Min", "" ], [ "Wu", "Tong", "" ], [ "Chen", "Zhaoxi", "" ], [ "Yang", "Shuai", "" ], [ "Wang", "Tengfei", "" ], [ "Pan", "Liang", "" ], [ "Lin", "Dahua", "" ], [ "Liu", "Ziwei", "" ] ]
We present a two-stage text-to-3D generation system, namely 3DTopia, which generates high-quality general 3D assets within 5 minutes using hybrid diffusion priors. The first stage samples from a 3D diffusion prior directly learned from 3D data. Specifically, it is powered by a text-conditioned tri-plane latent diffusion model, which quickly generates coarse 3D samples for fast prototyping. The second stage utilizes 2D diffusion priors to further refine the texture of coarse 3D models from the first stage. The refinement consists of both latent and pixel space optimization for high-quality texture generation. To facilitate the training of the proposed system, we clean and caption the largest open-source 3D dataset, Objaverse, by combining the power of vision language models and large language models. Experiment results are reported qualitatively and quantitatively to show the performance of the proposed system. Our codes and models are available at https://github.com/3DTopia/3DTopia
1510.05182
Fereydoun Farrahi Moghaddam
Fereydoun Farrahi Moghaddam and Mohamed Cheriet
Sustainability-Aware Cloud Computing Using Virtual Carbon Tax
null
null
null
null
cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, a solution for a sustainable cloud system is proposed and then implemented on a real testbed. The solution consists of the optimization of a profit model and the introduction of a virtual carbon tax to limit the environmental footprint of the cloud. The proposed multi-criteria optimizer of the cloud system suggests new optimum CPU frequencies for CPU cores when the local grid energy mix or the cloud workload changes. The cloud system is implemented on a blade system, and proper middleware is developed to interact with the blades. The experimental results show that it is possible to significantly decrease the targeted environmental footprint of the system while keeping it profitable.
[ { "created": "Sat, 17 Oct 2015 23:33:20 GMT", "version": "v1" }, { "created": "Wed, 1 Nov 2017 12:55:42 GMT", "version": "v2" } ]
2017-11-02
[ [ "Moghaddam", "Fereydoun Farrahi", "" ], [ "Cheriet", "Mohamed", "" ] ]
In this paper, a solution for a sustainable cloud system is proposed and then implemented on a real testbed. The solution consists of the optimization of a profit model and the introduction of a virtual carbon tax to limit the environmental footprint of the cloud. The proposed multi-criteria optimizer of the cloud system suggests new optimum CPU frequencies for CPU cores when the local grid energy mix or the cloud workload changes. The cloud system is implemented on a blade system, and proper middleware is developed to interact with the blades. The experimental results show that it is possible to significantly decrease the targeted environmental footprint of the system while keeping it profitable.
1705.08568
Grant Storey
Grant Storey, Dillon Reisman, Jonathan Mayer, Arvind Narayanan
The Future of Ad Blocking: An Analytical Framework and New Techniques
null
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a systematic study of ad blocking - and the associated "arms race" - as a security problem. We model ad blocking as a state space with four states and six state transitions, which correspond to techniques that can be deployed by either publishers or ad blockers. We argue that this is a complete model of the system. We propose several new ad blocking techniques, including ones that borrow ideas from rootkits to prevent detection by anti-ad blocking scripts. Another technique uses the insight that ads must be recognizable by humans to comply with laws and industry self-regulation. We have built prototype implementations of three of these techniques, successfully blocking ads and evading detection. We systematically evaluate our proposed techniques, along with existing ones, in terms of security, practicality, and legality. We characterize the order of growth of the development effort required to create/maintain ad blockers as a function of the growth of the web. Based on our state-space model, our new techniques, and this systematization, we offer insights into the likely "end game" of the arms race. We challenge the widespread assumption that the arms race will escalate indefinitely, and instead identify a combination of evolving technical and legal factors that will determine the outcome.
[ { "created": "Wed, 24 May 2017 00:28:51 GMT", "version": "v1" } ]
2017-05-25
[ [ "Storey", "Grant", "" ], [ "Reisman", "Dillon", "" ], [ "Mayer", "Jonathan", "" ], [ "Narayanan", "Arvind", "" ] ]
We present a systematic study of ad blocking - and the associated "arms race" - as a security problem. We model ad blocking as a state space with four states and six state transitions, which correspond to techniques that can be deployed by either publishers or ad blockers. We argue that this is a complete model of the system. We propose several new ad blocking techniques, including ones that borrow ideas from rootkits to prevent detection by anti-ad blocking scripts. Another technique uses the insight that ads must be recognizable by humans to comply with laws and industry self-regulation. We have built prototype implementations of three of these techniques, successfully blocking ads and evading detection. We systematically evaluate our proposed techniques, along with existing ones, in terms of security, practicality, and legality. We characterize the order of growth of the development effort required to create/maintain ad blockers as a function of the growth of the web. Based on our state-space model, our new techniques, and this systematization, we offer insights into the likely "end game" of the arms race. We challenge the widespread assumption that the arms race will escalate indefinitely, and instead identify a combination of evolving technical and legal factors that will determine the outcome.
1603.05623
Dimitri Van De Ville
Dimitri Van De Ville
Steering Macro-Scale Network Community Structure by Micro-Scale Features
15 pages, 7 figures
null
null
null
cs.SI cs.CE cs.DM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Network science plays an increasingly important role in modeling complex data across many scientific disciplines. One notable feature of network organization is community structure, which refers to clusters of tightly interconnected nodes. A prominent problem is how to investigate the relationship between macro-scale modules, which are retrieved by optimizing global network measures, and micro-scale structures, which are defined by specific queries of the analysis (e.g., nodal features). By generalizing fundamental concepts of joint space-frequency localization to network theory, here we propose a flexible framework to study interactions between micro- and macro-structure. Similar to pointing and focusing a magnifying glass, the analysis can be directed to specific micro-scale structures, while the degree of interaction with the macro-scale community structure can be seamlessly controlled. In addition, the method is computationally efficient as a result of the underlying low-dimensional optimization problem.
[ { "created": "Thu, 17 Mar 2016 19:11:46 GMT", "version": "v1" } ]
2016-03-18
[ [ "Van De Ville", "Dimitri", "" ] ]
Network science plays an increasingly important role in modeling complex data across many scientific disciplines. One notable feature of network organization is community structure, which refers to clusters of tightly interconnected nodes. A prominent problem is how to investigate the relationship between macro-scale modules, which are retrieved by optimizing global network measures, and micro-scale structures, which are defined by specific queries of the analysis (e.g., nodal features). By generalizing fundamental concepts of joint space-frequency localization to network theory, here we propose a flexible framework to study interactions between micro- and macro-structure. Similar to pointing and focusing a magnifying glass, the analysis can be directed to specific micro-scale structures, while the degree of interaction with the macro-scale community structure can be seamlessly controlled. In addition, the method is computationally efficient as a result of the underlying low-dimensional optimization problem.
1710.08338
Mehdi Samiee
M. Samiee, M. Zayernouri, Mark M. Meerschaert
A Unified Spectral Method for FPDEs with Two-sided Derivatives; A Fast Solver
null
https://doi.org/10.1016/j.jcp.2018.02.014
10.1016/j.jcp.2018.02.014
null
cs.CE math.NA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We develop a unified Petrov-Galerkin spectral method for a class of fractional partial differential equations with two-sided derivatives and constant coefficients of the form $ _{0}{\mathcal{D}}_{t}^{2\tau}u^{} + \sum_{i=1}^{d}$ $[c_{l_i}$ $_{a_i}{\mathcal{D}}_{x_i}^{2\mu_i} u^{} +c_{r_i}$ $_{x_i}{\mathcal{D}}_{b_i}^{2\mu_i}$ $u^{} ] +$ $\gamma$ $u^{} = \sum_{j=1}^{d} [ \kappa_{l_j}$ $_{a_j}{\mathcal{D}}_{x_j}^{2\nu_j} u^{}$ $+\kappa_{r_j}$ $_{x_j}{\mathcal{D}}_{b_j}^{2\nu_j}$ $u^{} ]$ $+ f$, where $2\tau \in (0,2)$, $2\mu_i \in (0,1)$ and $2\nu_j \in (1,2)$, in a ($1+d$)-dimensional \textit{space-time} hypercube, $d = 1, 2, 3, \cdots$, subject to homogeneous Dirichlet initial/boundary conditions. We employ the eigenfunctions of the fractional Sturm-Liouville eigen-problems of the first kind in \cite{zayernouri2013fractional}, called \textit{Jacobi poly-fractonomial}s, as temporal bases, and the eigen-functions of the boundary-value problem of the second kind as temporal test functions. Next, we construct our spatial basis/test functions using Legendre polynomials, yielding mass matrices being independent of the spatial fractional orders ($\mu_i, \, \nu_j, \, i, \,j=1,2,\cdots,d$). Furthermore, we formulate a novel unified fast linear solver for the resulting high-dimensional linear system based on the solution of generalized eigen-problem of spatial mass matrices with respect to the corresponding stiffness matrices, hence, making the complexity of the problem optimal, i.e., $\mathcal{O}(N^{d+2})$. We carry out several numerical test cases to examine the CPU time and convergence rate of the method. The corresponding stability and error analysis of the Petrov-Galerkin method are carried out in \cite{samiee2016Unified2}.
[ { "created": "Sun, 15 Oct 2017 20:49:56 GMT", "version": "v1" } ]
2019-10-02
[ [ "Samiee", "M.", "" ], [ "Zayernouri", "M.", "" ], [ "Meerschaert", "Mark M.", "" ] ]
We develop a unified Petrov-Galerkin spectral method for a class of fractional partial differential equations with two-sided derivatives and constant coefficients of the form $ _{0}{\mathcal{D}}_{t}^{2\tau}u^{} + \sum_{i=1}^{d}$ $[c_{l_i}$ $_{a_i}{\mathcal{D}}_{x_i}^{2\mu_i} u^{} +c_{r_i}$ $_{x_i}{\mathcal{D}}_{b_i}^{2\mu_i}$ $u^{} ] +$ $\gamma$ $u^{} = \sum_{j=1}^{d} [ \kappa_{l_j}$ $_{a_j}{\mathcal{D}}_{x_j}^{2\nu_j} u^{}$ $+\kappa_{r_j}$ $_{x_j}{\mathcal{D}}_{b_j}^{2\nu_j}$ $u^{} ]$ $+ f$, where $2\tau \in (0,2)$, $2\mu_i \in (0,1)$ and $2\nu_j \in (1,2)$, in a ($1+d$)-dimensional \textit{space-time} hypercube, $d = 1, 2, 3, \cdots$, subject to homogeneous Dirichlet initial/boundary conditions. We employ the eigenfunctions of the fractional Sturm-Liouville eigen-problems of the first kind in \cite{zayernouri2013fractional}, called \textit{Jacobi poly-fractonomial}s, as temporal bases, and the eigen-functions of the boundary-value problem of the second kind as temporal test functions. Next, we construct our spatial basis/test functions using Legendre polynomials, yielding mass matrices being independent of the spatial fractional orders ($\mu_i, \, \nu_j, \, i, \,j=1,2,\cdots,d$). Furthermore, we formulate a novel unified fast linear solver for the resulting high-dimensional linear system based on the solution of generalized eigen-problem of spatial mass matrices with respect to the corresponding stiffness matrices, hence, making the complexity of the problem optimal, i.e., $\mathcal{O}(N^{d+2})$. We carry out several numerical test cases to examine the CPU time and convergence rate of the method. The corresponding stability and error analysis of the Petrov-Galerkin method are carried out in \cite{samiee2016Unified2}.
2201.07060
Sourav Mondal
Sourav Mondal and Marco Ruffini
A Min-Max Fair Resource Allocation Framework for Optical x-haul and DU/CU in Multi-tenant O-RANs
This article is accepted for publication in IEEE International Conference on Communications (ICC) 2022. Copyright @ IEEE
null
null
null
cs.NI
http://creativecommons.org/licenses/by-nc-sa/4.0/
The recently proposed open radio access network (O-RAN) architecture embraces cloudification and network function virtualization techniques to perform the base-band function processing by disaggregated radio units (RUs), distributed units (DUs), and centralized units (CUs). This enables the cloud-RAN vision in full, where mobile network operators (MNOs) could install their own RUs, but then lease on-demand computational resources for the processing of DU and CU functions from commonly available open-cloud (O-Cloud) servers via open x-haul interfaces due to the variation of load over the day. This creates a multi-tenant scenario where multiple MNOs share networking as well as computational resources. In this paper, we propose a framework that dynamically allocates x-haul and DU/CU resources in a multi-tenant O-RAN ecosystem with min-max fairness guarantees. This framework ensures that a maximum number of RUs get sufficient resources while minimizing the OPEX for their MNOs. Moreover, in order to provide an access network architecture capable of sustaining low latency and high capacity between RUs and edge-computing devices, we consider time-wavelength division multiplexed (TWDM) passive optical network (PON)-based x-haul interfaces where the PON virtualization technique is used to provide a direct optical connection between end-points. This creates a virtual mesh interconnection among all the nodes such that the RUs can be connected to the Edge-Clouds at macro-cell RU locations as well as to the O-Cloud servers at the central office locations. Furthermore, we analyze the system performance with our proposed framework and show that MNOs can operate with a better cost-efficiency than baseline greedy resource allocation with uniform cost-sharing.
[ { "created": "Tue, 18 Jan 2022 15:38:16 GMT", "version": "v1" }, { "created": "Thu, 20 Jan 2022 18:29:09 GMT", "version": "v2" }, { "created": "Tue, 25 Jan 2022 03:56:32 GMT", "version": "v3" }, { "created": "Tue, 22 Feb 2022 15:22:59 GMT", "version": "v4" } ]
2022-02-23
[ [ "Mondal", "Sourav", "" ], [ "Ruffini", "Marco", "" ] ]
The recently proposed open-radio access network (O-RAN) architecture embraces cloudification and network function virtualization techniques to perform the base-band function processing by disaggregated radio units (RUs), distributed units (DUs), and centralized units (CUs). This enables the cloud-RAN vision in full, where mobile network operators (MNOs) could install their own RUs, but then lease on-demand computational resources for the processing of DU and CU functions from commonly available open-cloud (O-Cloud) servers via open x-haul interfaces as the load varies over the day. This creates a multi-tenant scenario where multiple MNOs share networking as well as computational resources. In this paper, we propose a framework that dynamically allocates x-haul and DU/CU resources in a multi-tenant O-RAN ecosystem with min-max fairness guarantees. This framework ensures that a maximum number of RUs get sufficient resources while minimizing the OPEX for their MNOs. Moreover, in order to provide an access network architecture capable of sustaining low latency and high capacity between RUs and edge-computing devices, we consider time-wavelength division multiplexed (TWDM) passive optical network (PON)-based x-haul interfaces, where the PON virtualization technique is used to provide a direct optical connection between end-points. This creates a virtual mesh interconnection among all the nodes such that the RUs can be connected to the Edge-Clouds at macro-cell RU locations as well as to the O-Cloud servers at the central office locations. Furthermore, we analyze the system performance with our proposed framework and show that MNOs can operate with better cost-efficiency than a baseline greedy resource allocation with uniform cost-sharing.
1905.11924
Ari Kobren
Ari Kobren, Barna Saha, Andrew McCallum
Paper Matching with Local Fairness Constraints
Appears at KDD 2019 Research Track, 20 pages
null
null
null
cs.DS cs.DL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Automatically matching reviewers to papers is a crucial step of the peer review process for venues receiving thousands of submissions. Unfortunately, common paper matching algorithms often construct matchings suffering from two critical problems: (1) the group of reviewers assigned to a paper does not collectively possess sufficient expertise, and (2) reviewer workloads are highly skewed. In this paper, we propose a novel local fairness formulation of paper matching that directly addresses both of these issues. Since optimizing our formulation is not always tractable, we introduce two new algorithms, FairIR and FairFlow, for computing fair matchings that approximately optimize the new formulation. FairIR solves a relaxation of the local fairness formulation and then employs a rounding technique to construct a valid matching that provably maximizes the objective and only compromises on fairness with respect to reviewer loads and papers by a small constant. In contrast, FairFlow is not provably guaranteed to produce fair matchings; however, it can be 2x as efficient as FairIR and an order of magnitude faster than matching algorithms that directly optimize for fairness. Empirically, we demonstrate that both FairIR and FairFlow improve fairness over standard matching algorithms on real conference data. Moreover, in comparison to state-of-the-art matching algorithms that optimize for fairness only, FairIR achieves higher objective scores, FairFlow achieves competitive fairness, and both are capable of more evenly allocating reviewers.
[ { "created": "Tue, 28 May 2019 16:36:51 GMT", "version": "v1" } ]
2019-05-29
[ [ "Kobren", "Ari", "" ], [ "Saha", "Barna", "" ], [ "McCallum", "Andrew", "" ] ]
Automatically matching reviewers to papers is a crucial step of the peer review process for venues receiving thousands of submissions. Unfortunately, common paper matching algorithms often construct matchings suffering from two critical problems: (1) the group of reviewers assigned to a paper does not collectively possess sufficient expertise, and (2) reviewer workloads are highly skewed. In this paper, we propose a novel local fairness formulation of paper matching that directly addresses both of these issues. Since optimizing our formulation is not always tractable, we introduce two new algorithms, FairIR and FairFlow, for computing fair matchings that approximately optimize the new formulation. FairIR solves a relaxation of the local fairness formulation and then employs a rounding technique to construct a valid matching that provably maximizes the objective and only compromises on fairness with respect to reviewer loads and papers by a small constant. In contrast, FairFlow is not provably guaranteed to produce fair matchings; however, it can be 2x as efficient as FairIR and an order of magnitude faster than matching algorithms that directly optimize for fairness. Empirically, we demonstrate that both FairIR and FairFlow improve fairness over standard matching algorithms on real conference data. Moreover, in comparison to state-of-the-art matching algorithms that optimize for fairness only, FairIR achieves higher objective scores, FairFlow achieves competitive fairness, and both are capable of more evenly allocating reviewers.
2406.03897
Tzuf Paz-Argaman
Tzuf Paz-Argaman, Itai Mondshine, Asaf Achi Mordechai, and Reut Tsarfaty
HeSum: a Novel Dataset for Abstractive Text Summarization in Hebrew
null
ACL 2024 Findings
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by/4.0/
While large language models (LLMs) excel in various natural language tasks in English, their performance in lower-resourced languages like Hebrew, especially for generative tasks such as abstractive summarization, remains unclear. The high morphological richness in Hebrew adds further challenges due to the ambiguity in sentence comprehension and the complexities in meaning construction. In this paper, we address this resource and evaluation gap by introducing HeSum, a novel benchmark specifically designed for abstractive text summarization in Modern Hebrew. HeSum consists of 10,000 article-summary pairs sourced from Hebrew news websites written by professionals. Linguistic analysis confirms HeSum's high abstractness and unique morphological challenges. We show that HeSum presents distinct difficulties for contemporary state-of-the-art LLMs, establishing it as a valuable testbed for generative language technology in Hebrew, and for the generative challenges of morphologically rich languages (MRLs) in general.
[ { "created": "Thu, 6 Jun 2024 09:36:14 GMT", "version": "v1" }, { "created": "Mon, 10 Jun 2024 05:45:25 GMT", "version": "v2" } ]
2024-06-11
[ [ "Paz-Argaman", "Tzuf", "" ], [ "Mondshine", "Itai", "" ], [ "Mordechai", "Asaf Achi", "" ], [ "Tsarfaty", "Reut", "" ] ]
While large language models (LLMs) excel in various natural language tasks in English, their performance in lower-resourced languages like Hebrew, especially for generative tasks such as abstractive summarization, remains unclear. The high morphological richness in Hebrew adds further challenges due to the ambiguity in sentence comprehension and the complexities in meaning construction. In this paper, we address this resource and evaluation gap by introducing HeSum, a novel benchmark specifically designed for abstractive text summarization in Modern Hebrew. HeSum consists of 10,000 article-summary pairs sourced from Hebrew news websites written by professionals. Linguistic analysis confirms HeSum's high abstractness and unique morphological challenges. We show that HeSum presents distinct difficulties for contemporary state-of-the-art LLMs, establishing it as a valuable testbed for generative language technology in Hebrew, and for the generative challenges of morphologically rich languages (MRLs) in general.
2009.00530
Duy Phan Mr
Phan The Duy, Do Thi Thu Hien, Van-Hau Pham
A survey on Blockchain-based applications for reforming data protection, privacy and security
8 pages, 2 figures
null
null
null
cs.CR
http://creativecommons.org/licenses/by/4.0/
Modern society, the economy, and industry have been changed remarkably by many cutting-edge technologies over recent years, and many more are in development and early implementation that will in turn lead to even wider adoption and greater transformation. Blockchain technology, along with other emerging technologies, is expected to transform virtually every aspect of global business and individuals' lifestyles in many areas. It has been spreading across multiple sectors, with applications in financial services, healthcare, supply chains, and cybersecurity emerging every day. Simultaneously, data protection and privacy are among the most pressing issues in the digital world, taken seriously by customers, companies, and policymakers alike due to the recent increase in reported security breaches and surveillance incidents. In this context, blockchain has the capability and potential to revolutionize the trust, security, and privacy of individual data in the online world. Hence, the purpose of this paper is to study actual cases of blockchain applied to reforming privacy and security, discussing its impacts as well as the associated opportunities and challenges.
[ { "created": "Tue, 1 Sep 2020 16:04:57 GMT", "version": "v1" } ]
2020-09-02
[ [ "Duy", "Phan The", "" ], [ "Hien", "Do Thi Thu", "" ], [ "Pham", "Van-Hau", "" ] ]
Modern society, the economy, and industry have been changed remarkably by many cutting-edge technologies over recent years, and many more are in development and early implementation that will in turn lead to even wider adoption and greater transformation. Blockchain technology, along with other emerging technologies, is expected to transform virtually every aspect of global business and individuals' lifestyles in many areas. It has been spreading across multiple sectors, with applications in financial services, healthcare, supply chains, and cybersecurity emerging every day. Simultaneously, data protection and privacy are among the most pressing issues in the digital world, taken seriously by customers, companies, and policymakers alike due to the recent increase in reported security breaches and surveillance incidents. In this context, blockchain has the capability and potential to revolutionize the trust, security, and privacy of individual data in the online world. Hence, the purpose of this paper is to study actual cases of blockchain applied to reforming privacy and security, discussing its impacts as well as the associated opportunities and challenges.
2006.12645
Vinod Grover
Somashekaracharya G. Bhaskaracharya, Julien Demouth, Vinod Grover
Automatic Kernel Generation for Volta Tensor Cores
null
null
null
null
cs.PL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A commonly occurring computation idiom in neural networks is to perform some pointwise operations on the result of a matrix multiplication. Such a sequence of operations is typically represented as a computation graph in deep learning compilers. When compiling to a GPU target, these computations can be individually mapped to manually tuned implementations provided by libraries such as cuBLAS and cuDNN. These libraries also provide off-the-shelf support for targeting tensor cores in NVIDIA GPUs, which can lead to huge performance boosts through their specialized support for mixed-precision matrix math. Alternatively, tensor cores can be programmed directly using CUDA APIs or inline assembly instructions, which opens up the possibility of generating efficient CUDA kernels automatically for such computations. Automatic kernel generation is particularly crucial when it is beneficial to generate efficient code for an entire computation graph by fusing several operations into a single device function instead of invoking a separate kernel for each of them. Polyhedral compilation techniques provide a systematic approach for the analysis and transformation of a sequence of affine loop-nests. In this paper, we describe a polyhedral approach to generate efficient CUDA kernels for matrix multiplication using inline assembly instructions for programming tensor cores on NVIDIA Volta GPUs. Furthermore, we build on this approach to generate fused kernels for computation sequences involving matrix multiplication and pointwise operations such as bias addition and ReLU activation. Experimental evaluation of these techniques shows that automatically generated kernels can provide significantly better performance than manually tuned library implementations, with speedups ranging up to 2.55X.
[ { "created": "Mon, 22 Jun 2020 22:16:00 GMT", "version": "v1" }, { "created": "Mon, 29 Jun 2020 22:20:24 GMT", "version": "v2" }, { "created": "Sat, 1 Aug 2020 21:41:41 GMT", "version": "v3" } ]
2020-08-04
[ [ "Bhaskaracharya", "Somashekaracharya G.", "" ], [ "Demouth", "Julien", "" ], [ "Grover", "Vinod", "" ] ]
A commonly occurring computation idiom in neural networks is to perform some pointwise operations on the result of a matrix multiplication. Such a sequence of operations is typically represented as a computation graph in deep learning compilers. When compiling to a GPU target, these computations can be individually mapped to manually tuned implementations provided by libraries such as cuBLAS and cuDNN. These libraries also provide off-the-shelf support for targeting tensor cores in NVIDIA GPUs, which can lead to huge performance boosts through their specialized support for mixed-precision matrix math. Alternatively, tensor cores can be programmed directly using CUDA APIs or inline assembly instructions, which opens up the possibility of generating efficient CUDA kernels automatically for such computations. Automatic kernel generation is particularly crucial when it is beneficial to generate efficient code for an entire computation graph by fusing several operations into a single device function instead of invoking a separate kernel for each of them. Polyhedral compilation techniques provide a systematic approach for the analysis and transformation of a sequence of affine loop-nests. In this paper, we describe a polyhedral approach to generate efficient CUDA kernels for matrix multiplication using inline assembly instructions for programming tensor cores on NVIDIA Volta GPUs. Furthermore, we build on this approach to generate fused kernels for computation sequences involving matrix multiplication and pointwise operations such as bias addition and ReLU activation. Experimental evaluation of these techniques shows that automatically generated kernels can provide significantly better performance than manually tuned library implementations, with speedups ranging up to 2.55X.
2111.06995
Zhimin Gao
Shuangyan Miao, Yonghong Hou, Zhimin Gao, Mingliang Xu, and Wanqing Li
A Central Difference Graph Convolutional Operator for Skeleton-Based Action Recognition
Accepted by IEEE Transactions on Circuits and Systems for Video Technology (TCSVT)
null
10.1109/TCSVT.2021.3124562
null
cs.CV
http://creativecommons.org/licenses/by-nc-nd/4.0/
This paper proposes a new graph convolutional operator called central difference graph convolution (CDGC) for skeleton-based action recognition. It aggregates not only node information, as a vanilla graph convolution operation does, but also gradient information. Without introducing any additional parameters, CDGC can replace vanilla graph convolution in any existing Graph Convolutional Networks (GCNs). In addition, an accelerated version of CDGC is developed which greatly improves the speed of training. Experiments on two popular large-scale datasets, NTU RGB+D 60 & 120, have demonstrated the efficacy of the proposed CDGC. Code is available at https://github.com/iesymiao/CD-GCN.
[ { "created": "Sat, 13 Nov 2021 00:02:57 GMT", "version": "v1" } ]
2021-11-16
[ [ "Miao", "Shuangyan", "" ], [ "Hou", "Yonghong", "" ], [ "Gao", "Zhimin", "" ], [ "Xu", "Mingliang", "" ], [ "Li", "Wanqing", "" ] ]
This paper proposes a new graph convolutional operator called central difference graph convolution (CDGC) for skeleton-based action recognition. It aggregates not only node information, as a vanilla graph convolution operation does, but also gradient information. Without introducing any additional parameters, CDGC can replace vanilla graph convolution in any existing Graph Convolutional Networks (GCNs). In addition, an accelerated version of CDGC is developed which greatly improves the speed of training. Experiments on two popular large-scale datasets, NTU RGB+D 60 & 120, have demonstrated the efficacy of the proposed CDGC. Code is available at https://github.com/iesymiao/CD-GCN.
2303.16154
Danko Nikolic
Danko Nikoli\'c, Davor Andri\'c, Vjekoslav Nikoli\'c
Guided Transfer Learning
null
null
null
null
cs.LG
http://creativecommons.org/licenses/by-nc-sa/4.0/
Machine learning requires exorbitant amounts of data and computation, and models require equally excessive growth in the number of parameters. It is, therefore, sensible to look for technologies that reduce these demands on resources. Here, we propose an approach called guided transfer learning. Each weight and bias in the network has its own guiding parameter that indicates how much this parameter is allowed to change while learning a new task. Guiding parameters are learned during an initial scouting process. Guided transfer learning can result in a reduction in resources needed to train a network. In some applications, guided transfer learning enables the network to learn from a small amount of data. In other cases, a network with a smaller number of parameters can learn a task which otherwise only a larger network could learn. Guided transfer learning potentially has many applications when the amount of data, the model size, or the availability of computational resources reaches its limit.
[ { "created": "Sun, 26 Mar 2023 18:21:24 GMT", "version": "v1" } ]
2023-03-29
[ [ "Nikolić", "Danko", "" ], [ "Andrić", "Davor", "" ], [ "Nikolić", "Vjekoslav", "" ] ]
Machine learning requires exorbitant amounts of data and computation, and models require equally excessive growth in the number of parameters. It is, therefore, sensible to look for technologies that reduce these demands on resources. Here, we propose an approach called guided transfer learning. Each weight and bias in the network has its own guiding parameter that indicates how much this parameter is allowed to change while learning a new task. Guiding parameters are learned during an initial scouting process. Guided transfer learning can result in a reduction in resources needed to train a network. In some applications, guided transfer learning enables the network to learn from a small amount of data. In other cases, a network with a smaller number of parameters can learn a task which otherwise only a larger network could learn. Guided transfer learning potentially has many applications when the amount of data, the model size, or the availability of computational resources reaches its limit.
1810.05934
Liam Li
Liam Li, Kevin Jamieson, Afshin Rostamizadeh, Ekaterina Gonina, Moritz Hardt, Benjamin Recht, Ameet Talwalkar
A System for Massively Parallel Hyperparameter Tuning
v2: Corrected typo in Algorithm 1 v3: Added comparison to BOHB and parallel version of synchronous SHA. Add PBT to experiment in Section 4.3.1 v4: Added acknowledgements and slight edit to related work
Conference on Machine Learning and Systems 2020
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Modern learning models are characterized by large hyperparameter spaces and long training times. These properties, coupled with the rise of parallel computing and the growing demand to productionize machine learning workloads, motivate the need to develop mature hyperparameter optimization functionality in distributed computing settings. We address this challenge by first introducing a simple and robust hyperparameter optimization algorithm called ASHA, which exploits parallelism and aggressive early-stopping to tackle large-scale hyperparameter optimization problems. Our extensive empirical results show that ASHA outperforms existing state-of-the-art hyperparameter optimization methods; scales linearly with the number of workers in distributed settings; and is suitable for massive parallelism, as demonstrated on a task with 500 workers. We then describe several design decisions we encountered, along with our associated solutions, when integrating ASHA in Determined AI's end-to-end production-quality machine learning system that offers hyperparameter tuning as a service.
[ { "created": "Sat, 13 Oct 2018 22:02:52 GMT", "version": "v1" }, { "created": "Wed, 17 Oct 2018 00:23:57 GMT", "version": "v2" }, { "created": "Thu, 29 Nov 2018 04:41:42 GMT", "version": "v3" }, { "created": "Wed, 23 Jan 2019 02:15:22 GMT", "version": "v4" }, { "created": "Mon, 16 Mar 2020 01:28:21 GMT", "version": "v5" } ]
2020-03-17
[ [ "Li", "Liam", "" ], [ "Jamieson", "Kevin", "" ], [ "Rostamizadeh", "Afshin", "" ], [ "Gonina", "Ekaterina", "" ], [ "Hardt", "Moritz", "" ], [ "Recht", "Benjamin", "" ], [ "Talwalkar", "Ameet", "" ] ]
Modern learning models are characterized by large hyperparameter spaces and long training times. These properties, coupled with the rise of parallel computing and the growing demand to productionize machine learning workloads, motivate the need to develop mature hyperparameter optimization functionality in distributed computing settings. We address this challenge by first introducing a simple and robust hyperparameter optimization algorithm called ASHA, which exploits parallelism and aggressive early-stopping to tackle large-scale hyperparameter optimization problems. Our extensive empirical results show that ASHA outperforms existing state-of-the-art hyperparameter optimization methods; scales linearly with the number of workers in distributed settings; and is suitable for massive parallelism, as demonstrated on a task with 500 workers. We then describe several design decisions we encountered, along with our associated solutions, when integrating ASHA in Determined AI's end-to-end production-quality machine learning system that offers hyperparameter tuning as a service.
1705.02883
Umar Iqbal
Umar Iqbal, Andreas Doering, Hashim Yasin, Bj\"orn Kr\"uger, Andreas Weber, Juergen Gall
A Dual-Source Approach for 3D Human Pose Estimation from a Single Image
under consideration at Computer Vision and Image Understanding. Extended version of CVPR-2016 paper, arXiv:1509.06720
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this work we address the challenging problem of 3D human pose estimation from single images. Recent approaches learn deep neural networks to regress 3D pose directly from images. One major challenge for such methods, however, is the collection of training data. Specifically, collecting large amounts of training data containing unconstrained images annotated with accurate 3D poses is infeasible. We therefore propose to use two independent training sources. The first source consists of accurate 3D motion capture data, and the second source consists of unconstrained images with annotated 2D poses. To integrate both sources, we propose a dual-source approach that combines 2D pose estimation with efficient 3D pose retrieval. To this end, we first convert the motion capture data into a normalized 2D pose space, and separately learn a 2D pose estimation model from the image data. During inference, we estimate the 2D pose and efficiently retrieve the nearest 3D poses. We then jointly estimate a mapping from the 3D pose space to the image and reconstruct the 3D pose. We provide a comprehensive evaluation of the proposed method and experimentally demonstrate the effectiveness of our approach, even when the skeleton structures of the two sources differ substantially.
[ { "created": "Mon, 8 May 2017 14:03:48 GMT", "version": "v1" }, { "created": "Wed, 6 Sep 2017 13:24:52 GMT", "version": "v2" } ]
2017-09-07
[ [ "Iqbal", "Umar", "" ], [ "Doering", "Andreas", "" ], [ "Yasin", "Hashim", "" ], [ "Krüger", "Björn", "" ], [ "Weber", "Andreas", "" ], [ "Gall", "Juergen", "" ] ]
In this work we address the challenging problem of 3D human pose estimation from single images. Recent approaches learn deep neural networks to regress 3D pose directly from images. One major challenge for such methods, however, is the collection of training data. Specifically, collecting large amounts of training data containing unconstrained images annotated with accurate 3D poses is infeasible. We therefore propose to use two independent training sources. The first source consists of accurate 3D motion capture data, and the second source consists of unconstrained images with annotated 2D poses. To integrate both sources, we propose a dual-source approach that combines 2D pose estimation with efficient 3D pose retrieval. To this end, we first convert the motion capture data into a normalized 2D pose space, and separately learn a 2D pose estimation model from the image data. During inference, we estimate the 2D pose and efficiently retrieve the nearest 3D poses. We then jointly estimate a mapping from the 3D pose space to the image and reconstruct the 3D pose. We provide a comprehensive evaluation of the proposed method and experimentally demonstrate the effectiveness of our approach, even when the skeleton structures of the two sources differ substantially.
2404.17597
Alexander Rogiers
Alexander Rogiers, Maarten Buyl, Bo Kang, and Tijl De Bie
KamerRaad: Enhancing Information Retrieval in Belgian National Politics through Hierarchical Summarization and Conversational Interfaces
4 pages, 2 figures, submitted to 2024 ECML-PKDD demo track
null
null
null
cs.IR
http://creativecommons.org/licenses/by/4.0/
KamerRaad is an AI tool that leverages large language models to help citizens interactively engage with Belgian political information. The tool extracts and concisely summarizes key excerpts from parliamentary proceedings, followed by the potential for interaction based on generative AI that allows users to steadily build up their understanding. KamerRaad's front-end, built with Streamlit, facilitates easy interaction, while the back-end employs open-source models for text embedding and generation to ensure accurate and relevant responses. By collecting feedback, we intend to enhance the relevancy of our source retrieval and the quality of our summarization, thereby enriching the user experience with a focus on source-driven dialogue.
[ { "created": "Mon, 22 Apr 2024 15:01:39 GMT", "version": "v1" } ]
2024-04-30
[ [ "Rogiers", "Alexander", "" ], [ "Buyl", "Maarten", "" ], [ "Kang", "Bo", "" ], [ "De Bie", "Tijl", "" ] ]
KamerRaad is an AI tool that leverages large language models to help citizens interactively engage with Belgian political information. The tool extracts and concisely summarizes key excerpts from parliamentary proceedings, followed by the potential for interaction based on generative AI that allows users to steadily build up their understanding. KamerRaad's front-end, built with Streamlit, facilitates easy interaction, while the back-end employs open-source models for text embedding and generation to ensure accurate and relevant responses. By collecting feedback, we intend to enhance the relevancy of our source retrieval and the quality of our summarization, thereby enriching the user experience with a focus on source-driven dialogue.
2202.11784
Jiajia Zhang
Jiajia Zhang and Jiyuan Tian and Dibin Zhu and Yang Liu and Shyam Prasad
Design and experimental investigation of a vibro-impact self-propelled capsule robot with orientation control
ICRA 2022 Conference paper
null
null
null
cs.RO cs.SY eess.SY
http://creativecommons.org/licenses/by-nc-sa/4.0/
This paper presents a novel design and experimental investigation of a self-propelled capsule robot that can be used for painless colonoscopy during a retrograde progression from the patient's rectum. The steerable robot is driven forward and backward via internal vibration and impact, with its orientation controlled by an electromagnetic actuator. The actuator contains four sets of coils and a shaft made of a permanent magnet. The shaft can be excited linearly at a controllable, tilted angle, thus steering the progression orientation of the robot. Two control strategies are studied in this work and compared via simulation and experiment. Extensive results are presented to demonstrate the progression efficiency of the robot and its potential for robotic colonoscopy.
[ { "created": "Wed, 23 Feb 2022 21:00:32 GMT", "version": "v1" }, { "created": "Tue, 1 Mar 2022 18:52:30 GMT", "version": "v2" } ]
2022-03-02
[ [ "Zhang", "Jiajia", "" ], [ "Tian", "Jiyuan", "" ], [ "Zhu", "Dibin", "" ], [ "Liu", "Yang", "" ], [ "Prasad", "Shyam", "" ] ]
This paper presents a novel design and experimental investigation of a self-propelled capsule robot that can be used for painless colonoscopy during a retrograde progression from the patient's rectum. The steerable robot is driven forward and backward via internal vibration and impact, with its orientation controlled by an electromagnetic actuator. The actuator contains four sets of coils and a shaft made of a permanent magnet. The shaft can be excited linearly at a controllable, tilted angle, thus steering the progression orientation of the robot. Two control strategies are studied in this work and compared via simulation and experiment. Extensive results are presented to demonstrate the progression efficiency of the robot and its potential for robotic colonoscopy.
2202.04513
Tom Sterkenburg
Tom F. Sterkenburg, Peter D. Gr\"unwald
The no-free-lunch theorems of supervised learning
null
Synthese 199:9979-10015 (2021)
10.1007/s11229-021-03233-1
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
The no-free-lunch theorems promote a skeptical conclusion that all possible machine learning algorithms equally lack justification. But how could this leave room for a learning theory, that shows that some algorithms are better than others? Drawing parallels to the philosophy of induction, we point out that the no-free-lunch results presuppose a conception of learning algorithms as purely data-driven. On this conception, every algorithm must have an inherent inductive bias, that wants justification. We argue that many standard learning algorithms should rather be understood as model-dependent: in each application they also require for input a model, representing a bias. Generic algorithms themselves, they can be given a model-relative justification.
[ { "created": "Wed, 9 Feb 2022 15:24:30 GMT", "version": "v1" } ]
2022-02-10
[ [ "Sterkenburg", "Tom F.", "" ], [ "Grünwald", "Peter D.", "" ] ]
The no-free-lunch theorems promote a skeptical conclusion that all possible machine learning algorithms equally lack justification. But how could this leave room for a learning theory, that shows that some algorithms are better than others? Drawing parallels to the philosophy of induction, we point out that the no-free-lunch results presuppose a conception of learning algorithms as purely data-driven. On this conception, every algorithm must have an inherent inductive bias, that wants justification. We argue that many standard learning algorithms should rather be understood as model-dependent: in each application they also require for input a model, representing a bias. Generic algorithms themselves, they can be given a model-relative justification.
2307.09762
Abhishek Ajayakumar
Abhishek Ajayakumar, Soumyendu Raha
Reinforcing POD-based model reduction techniques in reaction-diffusion complex networks using stochastic filtering and pattern recognition
19 pages, 6 figures
null
null
null
cs.CE cs.AI cs.LG math.OC
http://creativecommons.org/licenses/by/4.0/
Complex networks are used to model many real-world systems. However, the dimensionality of these systems can make them challenging to analyze. Dimensionality reduction techniques like proper orthogonal decomposition (POD) can be used in such cases. However, these models are susceptible to perturbations in the input data. We propose an algorithmic framework that combines techniques from pattern recognition (PR) and stochastic filtering theory to enhance the output of such models. The results of our study show that our method can improve the accuracy of the surrogate model under perturbed inputs. Deep Neural Networks (DNNs) are susceptible to adversarial attacks. However, recent research has revealed that Neural Ordinary Differential Equations (neural ODEs) exhibit robustness in specific applications. We benchmark our algorithmic framework with the neural ODE-based approach as a reference.
[ { "created": "Wed, 19 Jul 2023 05:45:05 GMT", "version": "v1" }, { "created": "Sat, 16 Sep 2023 14:09:43 GMT", "version": "v2" } ]
2023-09-19
[ [ "Ajayakumar", "Abhishek", "" ], [ "Raha", "Soumyendu", "" ] ]
Complex networks are used to model many real-world systems. However, the dimensionality of these systems can make them challenging to analyze. Dimensionality reduction techniques like proper orthogonal decomposition (POD) can be used in such cases. However, these models are susceptible to perturbations in the input data. We propose an algorithmic framework that combines techniques from pattern recognition (PR) and stochastic filtering theory to enhance the output of such models. The results of our study show that our method can improve the accuracy of the surrogate model under perturbed inputs. Deep Neural Networks (DNNs) are susceptible to adversarial attacks. However, recent research has revealed that Neural Ordinary Differential Equations (neural ODEs) exhibit robustness in specific applications. We benchmark our algorithmic framework with the neural ODE-based approach as a reference.
2208.07601
Wenhao Ye
Wenhao Ye, Huihui Wu, Shitong Wu, Yizhu Wang, Wenyi Zhang, Hao Wu and Bo Bai
An Optimal Transport Approach to the Computation of the LM Rate
null
null
null
null
cs.IT math.IT stat.CO
http://creativecommons.org/licenses/by/4.0/
Mismatch capacity characterizes the highest information rate for a channel under a prescribed decoding metric, and is thus a highly relevant fundamental performance metric when dealing with many practically important communication scenarios. Compared with the frequently used generalized mutual information (GMI), the LM rate has been known as a tighter lower bound of the mismatch capacity. The computation of the LM rate, however, has been a difficult task, due to the fact that the LM rate involves a maximization over a function of the channel input, which becomes challenging as the input alphabet size grows, and direct numerical methods (e.g., interior point methods) suffer from intensive memory and computational resource requirements. Noting that the computation of the LM rate can also be formulated as an entropy-based optimization problem with constraints, in this work, we transform the task into an optimal transport (OT) problem with an extra constraint. This allows us to efficiently and accurately accomplish our task by using the well-known Sinkhorn algorithm. Indeed, only a few iterations are required for convergence, due to the fact that the formulated problem does not contain additional regularization terms. Moreover, we convert the extra constraint into a root-finding procedure for a one-dimensional monotonic function. Numerical experiments demonstrate the feasibility and efficiency of our OT approach to the computation of the LM rate.
[ { "created": "Tue, 16 Aug 2022 08:33:20 GMT", "version": "v1" } ]
2022-08-17
[ [ "Ye", "Wenhao", "" ], [ "Wu", "Huihui", "" ], [ "Wu", "Shitong", "" ], [ "Wang", "Yizhu", "" ], [ "Zhang", "Wenyi", "" ], [ "Wu", "Hao", "" ], [ "Bai", "Bo", "" ] ]
Mismatch capacity characterizes the highest information rate for a channel under a prescribed decoding metric, and is thus a highly relevant fundamental performance metric when dealing with many practically important communication scenarios. Compared with the frequently used generalized mutual information (GMI), the LM rate has been known as a tighter lower bound of the mismatch capacity. The computation of the LM rate, however, has been a difficult task, due to the fact that the LM rate involves a maximization over a function of the channel input, which becomes challenging as the input alphabet size grows, and direct numerical methods (e.g., interior point methods) suffer from intensive memory and computational resource requirements. Noting that the computation of the LM rate can also be formulated as an entropy-based optimization problem with constraints, in this work, we transform the task into an optimal transport (OT) problem with an extra constraint. This allows us to efficiently and accurately accomplish our task by using the well-known Sinkhorn algorithm. Indeed, only a few iterations are required for convergence, due to the fact that the formulated problem does not contain additional regularization terms. Moreover, we convert the extra constraint into a root-finding procedure for a one-dimensional monotonic function. Numerical experiments demonstrate the feasibility and efficiency of our OT approach to the computation of the LM rate.
1401.1458
Young-Ho Eom
Young-Ho Eom, Hang-Hyun Jo
Generalized friendship paradox in complex networks: The case of scientific collaboration
Published in Scientific Reports. 9 pages, 3 figures
Scientific Reports 4, 4603 (2014)
10.1038/srep04603
null
cs.SI physics.data-an physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The friendship paradox states that your friends have on average more friends than you have. Does the paradox "hold" for other individual characteristics like income or happiness? To address this question, we generalize the friendship paradox for arbitrary node characteristics in complex networks. By analyzing two coauthorship networks of Physical Review journals and Google Scholar profiles, we find that the generalized friendship paradox (GFP) holds at the individual and network levels for various characteristics, including the number of coauthors, the number of citations, and the number of publications. The origin of the GFP is shown to be rooted in positive correlations between degree and characteristics. As a fruitful application of the GFP, we suggest effective and efficient sampling methods for identifying high characteristic nodes in large-scale networks. Our study on the GFP can shed light on the interplay between network structure and node characteristics in complex networks.
[ { "created": "Tue, 7 Jan 2014 17:51:14 GMT", "version": "v1" }, { "created": "Mon, 31 Mar 2014 16:59:54 GMT", "version": "v2" }, { "created": "Thu, 10 Apr 2014 09:21:46 GMT", "version": "v3" } ]
2014-04-11
[ [ "Eom", "Young-Ho", "" ], [ "Jo", "Hang-Hyun", "" ] ]
The friendship paradox states that your friends have on average more friends than you have. Does the paradox "hold" for other individual characteristics like income or happiness? To address this question, we generalize the friendship paradox for arbitrary node characteristics in complex networks. By analyzing two coauthorship networks of Physical Review journals and Google Scholar profiles, we find that the generalized friendship paradox (GFP) holds at the individual and network levels for various characteristics, including the number of coauthors, the number of citations, and the number of publications. The origin of the GFP is shown to be rooted in positive correlations between degree and characteristics. As a fruitful application of the GFP, we suggest effective and efficient sampling methods for identifying high characteristic nodes in large-scale networks. Our study on the GFP can shed light on the interplay between network structure and node characteristics in complex networks.
2210.11065
Digbalay Bose
Digbalay Bose, Rajat Hebbar, Krishna Somandepalli, Haoyang Zhang, Yin Cui, Kree Cole-McLaughlin, Huisheng Wang, Shrikanth Narayanan
MovieCLIP: Visual Scene Recognition in Movies
Accepted to 2023 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV 2023). Project website with supplemental material: https://sail.usc.edu/~mica/MovieCLIP/. Revised version with updated author affiliations
null
null
null
cs.CV cs.CL cs.MM
http://creativecommons.org/licenses/by/4.0/
Long-form media such as movies have complex narrative structures, with events spanning a rich variety of ambient visual scenes. Domain-specific challenges associated with visual scenes in movies include transitions, person coverage, and a wide array of real-life and fictional scenarios. Existing visual scene datasets in movies have limited taxonomies and do not consider visual scene transitions within movie clips. In this work, we address the problem of visual scene recognition in movies by first automatically curating a new and extensive movie-centric taxonomy of 179 scene labels derived from movie scripts and auxiliary web-based video datasets. Instead of manual annotations which can be expensive, we use CLIP to weakly label 1.12 million shots from 32K movie clips based on our proposed taxonomy. We provide baseline visual models trained on the weakly labeled dataset called MovieCLIP and evaluate them on an independent dataset verified by human raters. We show that leveraging features from models pretrained on MovieCLIP benefits downstream tasks such as multi-label scene and genre classification of web videos and movie trailers.
[ { "created": "Thu, 20 Oct 2022 07:38:56 GMT", "version": "v1" }, { "created": "Sun, 23 Oct 2022 01:25:13 GMT", "version": "v2" } ]
2022-10-25
[ [ "Bose", "Digbalay", "" ], [ "Hebbar", "Rajat", "" ], [ "Somandepalli", "Krishna", "" ], [ "Zhang", "Haoyang", "" ], [ "Cui", "Yin", "" ], [ "Cole-McLaughlin", "Kree", "" ], [ "Wang", "Huisheng", "" ], [ "Narayanan", "Shrikanth", "" ] ]
Long-form media such as movies have complex narrative structures, with events spanning a rich variety of ambient visual scenes. Domain-specific challenges associated with visual scenes in movies include transitions, person coverage, and a wide array of real-life and fictional scenarios. Existing visual scene datasets in movies have limited taxonomies and do not consider visual scene transitions within movie clips. In this work, we address the problem of visual scene recognition in movies by first automatically curating a new and extensive movie-centric taxonomy of 179 scene labels derived from movie scripts and auxiliary web-based video datasets. Instead of manual annotations which can be expensive, we use CLIP to weakly label 1.12 million shots from 32K movie clips based on our proposed taxonomy. We provide baseline visual models trained on the weakly labeled dataset called MovieCLIP and evaluate them on an independent dataset verified by human raters. We show that leveraging features from models pretrained on MovieCLIP benefits downstream tasks such as multi-label scene and genre classification of web videos and movie trailers.
1807.04040
Jeevan Manavalan
Jeevan Manavalan, Matthew Howard
Learning Singularity Avoidance
null
null
null
null
cs.RO cs.LG
http://creativecommons.org/licenses/by/4.0/
With the increase in complexity of robotic systems and the rise in non-expert users, it can be assumed that task constraints are not explicitly known. In tasks where avoiding singularity is critical to success, this paper provides an approach, especially for non-expert users, for the system to learn the constraints contained in a set of demonstrations, such that they can be used to optimise an autonomous controller to avoid singularity, without having to explicitly know the task constraints. The proposed approach avoids singularity, and thereby unpredictable behaviour when carrying out a task, by maximising the learnt manipulability throughout the motion of the constrained system, and is not limited to kinematic systems. Its benefits are demonstrated through comparisons with other control policies, which show that the constrained manipulability of a system learnt through demonstration can be used to avoid singularities in cases where these other policies would fail. In the absence of the system's manipulability subject to a task's constraints, the proposed approach can be used instead to infer these, with results showing errors less than 10^-5 in 3DOF simulated systems as well as 10^-2 using a 7DOF real-world robotic system.
[ { "created": "Wed, 11 Jul 2018 09:46:05 GMT", "version": "v1" }, { "created": "Mon, 25 Mar 2019 22:03:01 GMT", "version": "v2" } ]
2019-03-27
[ [ "Manavalan", "Jeevan", "" ], [ "Howard", "Matthew", "" ] ]
With the increase in complexity of robotic systems and the rise in non-expert users, it can be assumed that task constraints are not explicitly known. In tasks where avoiding singularity is critical to success, this paper provides an approach, especially for non-expert users, for the system to learn the constraints contained in a set of demonstrations, such that they can be used to optimise an autonomous controller to avoid singularity, without having to explicitly know the task constraints. The proposed approach avoids singularity, and thereby unpredictable behaviour when carrying out a task, by maximising the learnt manipulability throughout the motion of the constrained system, and is not limited to kinematic systems. Its benefits are demonstrated through comparisons with other control policies, which show that the constrained manipulability of a system learnt through demonstration can be used to avoid singularities in cases where these other policies would fail. In the absence of the system's manipulability subject to a task's constraints, the proposed approach can be used instead to infer these, with results showing errors less than 10^-5 in 3DOF simulated systems as well as 10^-2 using a 7DOF real-world robotic system.
2005.09512
Frederico Gadelha Guimaraes
Leonardo Augusto Ferreira and Frederico Gadelha Guimar\~aes and Rodrigo Silva
Applying Genetic Programming to Improve Interpretability in Machine Learning Models
8 pages, 8 figures, submitted and accepted to 2020 IEEE Congress on Evolutionary Computation (IEEE CEC 2020). Copyright 2020 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses
null
null
null
cs.LG cs.AI cs.NE cs.SC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Explainable Artificial Intelligence (or xAI) has become an important research topic in the fields of Machine Learning and Deep Learning. In this paper, we propose a Genetic Programming (GP) based approach, named Genetic Programming Explainer (GPX), to the problem of explaining decisions computed by AI systems. The method generates a noise set located in the neighborhood of the point of interest, whose prediction should be explained, and fits a local explanation model for the analyzed sample. The tree structure generated by GPX provides a comprehensible analytical, possibly non-linear, symbolic expression which reflects the local behavior of the complex model. We considered three machine learning techniques that can be recognized as complex black-box models: Random Forest, Deep Neural Network and Support Vector Machine in twenty data sets for regression and classification problems. Our results indicate that the GPX is able to produce more accurate understanding of complex models than the state of the art. The results validate the proposed approach as a novel way to deploy GP to improve interpretability.
[ { "created": "Mon, 18 May 2020 16:09:49 GMT", "version": "v1" } ]
2020-05-20
[ [ "Ferreira", "Leonardo Augusto", "" ], [ "Guimarães", "Frederico Gadelha", "" ], [ "Silva", "Rodrigo", "" ] ]
Explainable Artificial Intelligence (or xAI) has become an important research topic in the fields of Machine Learning and Deep Learning. In this paper, we propose a Genetic Programming (GP) based approach, named Genetic Programming Explainer (GPX), to the problem of explaining decisions computed by AI systems. The method generates a noise set located in the neighborhood of the point of interest, whose prediction should be explained, and fits a local explanation model for the analyzed sample. The tree structure generated by GPX provides a comprehensible analytical, possibly non-linear, symbolic expression which reflects the local behavior of the complex model. We considered three machine learning techniques that can be recognized as complex black-box models: Random Forest, Deep Neural Network and Support Vector Machine in twenty data sets for regression and classification problems. Our results indicate that the GPX is able to produce more accurate understanding of complex models than the state of the art. The results validate the proposed approach as a novel way to deploy GP to improve interpretability.
2011.14365
Weifeng Zhu
Jiazhu Dai, Weifeng Zhu, Xiangfeng Luo
A Targeted Universal Attack on Graph Convolutional Network
null
null
null
null
cs.LG cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Graph-structured data exist in numerous applications in real life. As a state-of-the-art graph neural network, the graph convolutional network (GCN) plays an important role in processing graph-structured data. However, a recent study reported that GCNs are also vulnerable to adversarial attacks, which means that GCN models may suffer malicious attacks with unnoticeable modifications of the data. Among all the adversarial attacks on GCNs, there is a special kind of attack method called the universal adversarial attack, which generates a perturbation that can be applied to any sample and causes GCN models to output incorrect results. Although universal adversarial attacks in computer vision have been extensively researched, there is little research on universal adversarial attacks on graph-structured data. In this paper, we propose a targeted universal adversarial attack against GCNs. Our method employs a few nodes as the attack nodes. The attack capability of the attack nodes is enhanced through a small number of fake nodes connected to them. During an attack, any victim node will be misclassified by the GCN as the attack node class as long as it is linked to them. The experiments on three popular datasets show that the average attack success rate of the proposed attack on any victim node in the graph reaches 83% when using only 3 attack nodes and 6 fake nodes. We hope that our work will make the community aware of the threat of this type of attack and raise the attention given to its future defense.
[ { "created": "Sun, 29 Nov 2020 13:19:53 GMT", "version": "v1" } ]
2020-12-01
[ [ "Dai", "Jiazhu", "" ], [ "Zhu", "Weifeng", "" ], [ "Luo", "Xiangfeng", "" ] ]
Graph-structured data exist in numerous applications in real life. As a state-of-the-art graph neural network, the graph convolutional network (GCN) plays an important role in processing graph-structured data. However, a recent study reported that GCNs are also vulnerable to adversarial attacks, which means that GCN models may suffer malicious attacks with unnoticeable modifications of the data. Among all the adversarial attacks on GCNs, there is a special kind of attack method called the universal adversarial attack, which generates a perturbation that can be applied to any sample and causes GCN models to output incorrect results. Although universal adversarial attacks in computer vision have been extensively researched, there is little research on universal adversarial attacks on graph-structured data. In this paper, we propose a targeted universal adversarial attack against GCNs. Our method employs a few nodes as the attack nodes. The attack capability of the attack nodes is enhanced through a small number of fake nodes connected to them. During an attack, any victim node will be misclassified by the GCN as the attack node class as long as it is linked to them. The experiments on three popular datasets show that the average attack success rate of the proposed attack on any victim node in the graph reaches 83% when using only 3 attack nodes and 6 fake nodes. We hope that our work will make the community aware of the threat of this type of attack and raise the attention given to its future defense.
2202.05977
Yc Huo
Hangming Fan, Rui Wang, Yuchi Huo, Hujun Bao
Real-time Monte Carlo Denoising with Weight Sharing Kernel Prediction Network
null
Computer Graphics Forum. 2021, 40(4): 15-27
10.1111/cgf.14338
null
cs.GR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Real-time Monte Carlo denoising aims at removing severe noise under low samples per pixel (spp) in a strict time budget. Recently, kernel-prediction methods use a neural network to predict each pixel's filtering kernel and have shown a great potential to remove Monte Carlo noise. However, the heavy computation overhead blocks these methods from real-time applications. This paper expands the kernel-prediction method and proposes a novel approach to denoise very low spp (e.g., 1-spp) Monte Carlo path traced images at real-time frame rates. Instead of using the neural network to directly predict the kernel map, i.e., the complete weights of each per-pixel filtering kernel, we predict an encoding of the kernel map, followed by a high-efficiency decoder with unfolding operations for a high-quality reconstruction of the filtering kernels. The kernel map encoding yields a compact single-channel representation of the kernel map, which can significantly reduce the kernel-prediction network's throughput. In addition, we adopt a scalable kernel fusion module to improve denoising quality. The proposed approach preserves kernel prediction methods' denoising quality while roughly halving its denoising time for 1-spp noisy inputs. In addition, compared with the recent neural bilateral grid-based real-time denoiser, our approach benefits from the high parallelism of kernel-based reconstruction and produces better denoising results at equal time.
[ { "created": "Sat, 12 Feb 2022 04:21:37 GMT", "version": "v1" }, { "created": "Fri, 25 Feb 2022 09:16:14 GMT", "version": "v2" } ]
2022-02-28
[ [ "Fan", "Hangming", "" ], [ "Wang", "Rui", "" ], [ "Huo", "Yuchi", "" ], [ "Bao", "Hujun", "" ] ]
Real-time Monte Carlo denoising aims at removing severe noise under low samples per pixel (spp) in a strict time budget. Recently, kernel-prediction methods use a neural network to predict each pixel's filtering kernel and have shown a great potential to remove Monte Carlo noise. However, the heavy computation overhead blocks these methods from real-time applications. This paper expands the kernel-prediction method and proposes a novel approach to denoise very low spp (e.g., 1-spp) Monte Carlo path traced images at real-time frame rates. Instead of using the neural network to directly predict the kernel map, i.e., the complete weights of each per-pixel filtering kernel, we predict an encoding of the kernel map, followed by a high-efficiency decoder with unfolding operations for a high-quality reconstruction of the filtering kernels. The kernel map encoding yields a compact single-channel representation of the kernel map, which can significantly reduce the kernel-prediction network's throughput. In addition, we adopt a scalable kernel fusion module to improve denoising quality. The proposed approach preserves kernel prediction methods' denoising quality while roughly halving its denoising time for 1-spp noisy inputs. In addition, compared with the recent neural bilateral grid-based real-time denoiser, our approach benefits from the high parallelism of kernel-based reconstruction and produces better denoising results at equal time.
2211.14742
YuTeng Ye
YuTeng Ye, Hang Zhou, Jiale Cai, Chenxing Gao, Youjia Zhang, Junle Wang, Qiang Hu, Junqing Yu, Wei Yang
Dynamic Feature Pruning and Consolidation for Occluded Person Re-Identification
Accepted by AAAI-24
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Occluded person re-identification (ReID) is a challenging problem due to contamination from occluders. Existing approaches address the issue with prior knowledge cues, such as human body key points and semantic segmentations, which easily fail in the presence of heavy occlusion and other humans as occluders. In this paper, we propose a feature pruning and consolidation (FPC) framework to circumvent explicit human structure parsing. The framework mainly consists of a sparse encoder, a multi-view feature matching module, and a feature consolidation decoder. Specifically, the sparse encoder drops less important image tokens, mostly related to background noise and occluders, solely based on correlation within the class token attention. Subsequently, the matching stage relies on the preserved tokens produced by the sparse encoder to identify k-nearest neighbors in the gallery by measuring the image and patch-level combined similarity. Finally, we use the feature consolidation module to compensate for pruned features using the identified neighbors, recovering essential information while disregarding disturbance from noise and occlusion. Experimental results demonstrate the effectiveness of our proposed framework on occluded, partial, and holistic Re-ID datasets. In particular, our method outperforms state-of-the-art results by at least 8.6\% mAP and 6.0\% Rank-1 accuracy on the challenging Occluded-Duke dataset.
[ { "created": "Sun, 27 Nov 2022 06:18:40 GMT", "version": "v1" }, { "created": "Thu, 21 Dec 2023 04:06:43 GMT", "version": "v2" } ]
2023-12-22
[ [ "Ye", "YuTeng", "" ], [ "Zhou", "Hang", "" ], [ "Cai", "Jiale", "" ], [ "Gao", "Chenxing", "" ], [ "Zhang", "Youjia", "" ], [ "Wang", "Junle", "" ], [ "Hu", "Qiang", "" ], [ "Yu", "Junqing", "" ], [ "Yang", "Wei", "" ] ]
Occluded person re-identification (ReID) is a challenging problem due to contamination from occluders. Existing approaches address the issue with prior knowledge cues, such as human body key points and semantic segmentations, which easily fail in the presence of heavy occlusion and other humans as occluders. In this paper, we propose a feature pruning and consolidation (FPC) framework to circumvent explicit human structure parsing. The framework mainly consists of a sparse encoder, a multi-view feature matching module, and a feature consolidation decoder. Specifically, the sparse encoder drops less important image tokens, mostly related to background noise and occluders, solely based on correlation within the class token attention. Subsequently, the matching stage relies on the preserved tokens produced by the sparse encoder to identify k-nearest neighbors in the gallery by measuring the image and patch-level combined similarity. Finally, we use the feature consolidation module to compensate for pruned features using the identified neighbors, recovering essential information while disregarding disturbance from noise and occlusion. Experimental results demonstrate the effectiveness of our proposed framework on occluded, partial, and holistic Re-ID datasets. In particular, our method outperforms state-of-the-art results by at least 8.6\% mAP and 6.0\% Rank-1 accuracy on the challenging Occluded-Duke dataset.
1709.05365
Myoungsoo Jung
Sungjoon Koh, Jie Zhang, Miryeong Kwon, Jungyeon Yoon, David Donofrio, Namsung Kim and Myoungsoo Jung
Understanding System Characteristics of Online Erasure Coding on Scalable, Distributed and Large-Scale SSD Array Systems
This paper is accepted by and will be published at 2017 IEEE International Symposium on Workload Characterization
null
null
null
cs.DC cs.AR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Large-scale systems with arrays of solid state disks (SSDs) have become increasingly common in many computing segments. To make such systems resilient, we can adopt erasure coding such as Reed-Solomon (RS) code as an alternative to replication because erasure coding can offer a significantly lower storage cost than replication. To understand the impact of using erasure coding on system performance and other system aspects such as CPU utilization and network traffic, we build a storage cluster consisting of approximately one hundred processor cores with more than fifty high-performance SSDs, and evaluate the cluster with a popular open-source distributed parallel file system, Ceph. Then we analyze behaviors of systems adopting erasure coding from the following five viewpoints, compared with those of systems using replication: (1) storage system I/O performance; (2) computing and software overheads; (3) I/O amplification; (4) network traffic among storage nodes; (5) the impact of physical data layout on performance of RS-coded SSD arrays. For all these analyses, we examine two representative RS configurations, which are used by Google and Facebook file systems, and compare them with triple replication that a typical parallel file system employs as a default fault tolerance mechanism. Lastly, we collect 54 block-level traces from the cluster and make them available for other researchers.
[ { "created": "Thu, 14 Sep 2017 14:14:10 GMT", "version": "v1" }, { "created": "Tue, 19 Sep 2017 04:14:56 GMT", "version": "v2" } ]
2017-09-20
[ [ "Koh", "Sungjoon", "" ], [ "Zhang", "Jie", "" ], [ "Kwon", "Miryeong", "" ], [ "Yoon", "Jungyeon", "" ], [ "Donofrio", "David", "" ], [ "Kim", "Namsung", "" ], [ "Jung", "Myoungsoo", "" ] ]
Large-scale systems with arrays of solid state disks (SSDs) have become increasingly common in many computing segments. To make such systems resilient, we can adopt erasure coding such as Reed-Solomon (RS) code as an alternative to replication because erasure coding can offer a significantly lower storage cost than replication. To understand the impact of using erasure coding on system performance and other system aspects such as CPU utilization and network traffic, we build a storage cluster consisting of approximately one hundred processor cores with more than fifty high-performance SSDs, and evaluate the cluster with a popular open-source distributed parallel file system, Ceph. Then we analyze behaviors of systems adopting erasure coding from the following five viewpoints, compared with those of systems using replication: (1) storage system I/O performance; (2) computing and software overheads; (3) I/O amplification; (4) network traffic among storage nodes; (5) the impact of physical data layout on performance of RS-coded SSD arrays. For all these analyses, we examine two representative RS configurations, which are used by Google and Facebook file systems, and compare them with triple replication that a typical parallel file system employs as a default fault tolerance mechanism. Lastly, we collect 54 block-level traces from the cluster and make them available for other researchers.
1703.02197
EPTCS
Minghui Ma (Sun Yat-Sen University), Ahti-Veikko Pietarinen (Tallinn University of Technology)
Graphical Sequent Calculi for Modal Logics
In Proceedings M4M9 2017, arXiv:1703.01736
EPTCS 243, 2017, pp. 91-103
10.4204/EPTCS.243.7
null
cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The syntax of modal graphs is defined in terms of the continuous cut and broken cut following Charles Peirce's notation in the gamma part of his graphical logic of existential graphs. Graphical calculi for normal modal logics are developed based on a reformulation of the graphical calculus for classical propositional logic. These graphical calculi are in the style of deep inference. The relationship between graphical calculi and sequent calculi for modal logics is shown by translations between graphs and modal formulas.
[ { "created": "Tue, 7 Mar 2017 03:16:38 GMT", "version": "v1" } ]
2017-03-08
[ [ "Ma", "Minghui", "", "Sun Yat-Sen University" ], [ "Pietarinen", "Ahti-Veikko", "", "Tallinn\n University of Technology" ] ]
The syntax of modal graphs is defined in terms of the continuous cut and broken cut following Charles Peirce's notation in the gamma part of his graphical logic of existential graphs. Graphical calculi for normal modal logics are developed based on a reformulation of the graphical calculus for classical propositional logic. These graphical calculi are in the style of deep inference. The relationship between graphical calculi and sequent calculi for modal logics is shown by translations between graphs and modal formulas.