Column schema (field: type, observed length range):

id: string, length 9–10
submitter: string, length 1–64
authors: string, length 4–20.7k
title: string, length 4–246
comments: string, length 1–523
journal-ref: string, length 4–404
doi: string, length 11–153
report-no: string, length 2–254
categories: string, length 5–98
license: string, 9 classes
orig_abstract: string, length 14–3.35k
versions: list, length 1–60
update_date: string, length 10–10
authors_parsed: list, length 1–1.35k
abstract: string, length 11–3.34k
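The columns above describe the shape of each record that follows. As a minimal sketch of how a record maps onto this schema, the snippet below assembles the first entry as a plain Python dict and checks it against the field types; the `SCHEMA` dict and `conforms` helper are illustrative assumptions, not part of any dataset tooling, and the abstracts are elided.

```python
# Sketch: validate that a record conforms to the column schema above.
# SCHEMA mirrors the header (string vs. list fields); `conforms` is a
# hypothetical helper for illustration only.

SCHEMA = {
    "id": str, "submitter": str, "authors": str, "title": str,
    "comments": str, "journal-ref": str, "doi": str, "report-no": str,
    "categories": str, "license": str, "orig_abstract": str,
    "versions": list, "update_date": str, "authors_parsed": list,
    "abstract": str,
}

def conforms(record):
    """True if every schema field is present and is null or of the right type."""
    return all(
        k in record and (record[k] is None or isinstance(record[k], t))
        for k, t in SCHEMA.items()
    )

# The first record below, with nulls kept as None and abstracts elided.
record = {
    "id": "2311.17776",
    "submitter": "Gen Li",
    "authors": "Gen Li, Deqing Sun, Laura Sevilla-Lara, Varun Jampani",
    "title": "One-Shot Open Affordance Learning with Foundation Models",
    "comments": None, "journal-ref": None, "doi": None, "report-no": None,
    "categories": "cs.CV",
    "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/",
    "orig_abstract": "We introduce One-shot Open Affordance Learning (OOAL) ...",
    "versions": [{"created": "Wed, 29 Nov 2023 16:23:06 GMT", "version": "v1"}],
    "update_date": "2023-11-30",
    "authors_parsed": [["Li", "Gen", ""], ["Sun", "Deqing", ""],
                       ["Sevilla-Lara", "Laura", ""], ["Jampani", "Varun", ""]],
    "abstract": "We introduce One-shot Open Affordance Learning (OOAL) ...",
}

print(conforms(record))  # True
```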
2311.17776
Gen Li
Gen Li, Deqing Sun, Laura Sevilla-Lara, Varun Jampani
One-Shot Open Affordance Learning with Foundation Models
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce One-shot Open Affordance Learning (OOAL), where a model is trained with just one example per base object category, but is expected to identify novel objects and affordances. While vision-language models excel at recognizing novel objects and scenes, they often struggle to understand finer levels of granularity such as affordances. To handle this issue, we conduct a comprehensive analysis of existing foundation models, to explore their inherent understanding of affordances and assess the potential for data-limited affordance learning. We then propose a vision-language framework with simple and effective designs that boost the alignment between visual features and affordance text embeddings. Experiments on two affordance segmentation benchmarks show that the proposed method outperforms state-of-the-art models with less than 1% of the full training data, and exhibits reasonable generalization capability on unseen objects and affordances.
[ { "created": "Wed, 29 Nov 2023 16:23:06 GMT", "version": "v1" } ]
2023-11-30
[ [ "Li", "Gen", "" ], [ "Sun", "Deqing", "" ], [ "Sevilla-Lara", "Laura", "" ], [ "Jampani", "Varun", "" ] ]
We introduce One-shot Open Affordance Learning (OOAL), where a model is trained with just one example per base object category, but is expected to identify novel objects and affordances. While vision-language models excel at recognizing novel objects and scenes, they often struggle to understand finer levels of granularity such as affordances. To handle this issue, we conduct a comprehensive analysis of existing foundation models, to explore their inherent understanding of affordances and assess the potential for data-limited affordance learning. We then propose a vision-language framework with simple and effective designs that boost the alignment between visual features and affordance text embeddings. Experiments on two affordance segmentation benchmarks show that the proposed method outperforms state-of-the-art models with less than 1% of the full training data, and exhibits reasonable generalization capability on unseen objects and affordances.
2311.11666
Haiyang Ying
Haiyang Ying, Yixuan Yin, Jinzhi Zhang, Fan Wang, Tao Yu, Ruqi Huang, Lu Fang
OmniSeg3D: Omniversal 3D Segmentation via Hierarchical Contrastive Learning
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Towards holistic understanding of 3D scenes, a general 3D segmentation method is needed that can segment diverse objects without restrictions on object quantity or categories, while also reflecting the inherent hierarchical structure. To achieve this, we propose OmniSeg3D, an omniversal segmentation method aims for segmenting anything in 3D all at once. The key insight is to lift multi-view inconsistent 2D segmentations into a consistent 3D feature field through a hierarchical contrastive learning framework, which is accomplished by two steps. Firstly, we design a novel hierarchical representation based on category-agnostic 2D segmentations to model the multi-level relationship among pixels. Secondly, image features rendered from the 3D feature field are clustered at different levels, which can be further drawn closer or pushed apart according to the hierarchical relationship between different levels. In tackling the challenges posed by inconsistent 2D segmentations, this framework yields a global consistent 3D feature field, which further enables hierarchical segmentation, multi-object selection, and global discretization. Extensive experiments demonstrate the effectiveness of our method on high-quality 3D segmentation and accurate hierarchical structure understanding. A graphical user interface further facilitates flexible interaction for omniversal 3D segmentation.
[ { "created": "Mon, 20 Nov 2023 11:04:59 GMT", "version": "v1" } ]
2023-11-21
[ [ "Ying", "Haiyang", "" ], [ "Yin", "Yixuan", "" ], [ "Zhang", "Jinzhi", "" ], [ "Wang", "Fan", "" ], [ "Yu", "Tao", "" ], [ "Huang", "Ruqi", "" ], [ "Fang", "Lu", "" ] ]
Towards a holistic understanding of 3D scenes, a general 3D segmentation method is needed that can segment diverse objects without restrictions on object quantity or categories, while also reflecting the inherent hierarchical structure. To achieve this, we propose OmniSeg3D, an omniversal segmentation method that aims to segment anything in 3D all at once. The key insight is to lift multi-view inconsistent 2D segmentations into a consistent 3D feature field through a hierarchical contrastive learning framework, which is accomplished in two steps. First, we design a novel hierarchical representation based on category-agnostic 2D segmentations to model the multi-level relationships among pixels. Second, image features rendered from the 3D feature field are clustered at different levels, and can be further drawn closer or pushed apart according to the hierarchical relationship between levels. By tackling the challenges posed by inconsistent 2D segmentations, this framework yields a globally consistent 3D feature field, which further enables hierarchical segmentation, multi-object selection, and global discretization. Extensive experiments demonstrate the effectiveness of our method for high-quality 3D segmentation and accurate hierarchical structure understanding. A graphical user interface further facilitates flexible interaction for omniversal 3D segmentation.
2204.09996
Velko Vechev
Velko Vechev, Juan Zarate, Bernhard Thomaszewski, Otmar Hilliges
Computational Design of Kinesthetic Garments
null
null
null
null
cs.GR
http://creativecommons.org/licenses/by-nc-sa/4.0/
Kinesthetic garments provide physical feedback on body posture and motion through tailored distributions of reinforced material. Their ability to selectively stiffen a garment's response to specific motions makes them appealing for rehabilitation, sports, robotics, and many other application fields. However, finding designs that distribute a given amount of reinforcement material to maximally stiffen the response to specified motions is a challenging problem. In this work, we propose an optimization-driven approach for automated design of reinforcement patterns for kinesthetic garments. Our main contribution is to cast this design task as an on-body topology optimization problem. Our method allows designers to explore a continuous range of designs corresponding to various amounts of reinforcement coverage. Our model captures both tight contact and lift-off separation between cloth and body. We demonstrate our method on a variety of reinforcement design problems for different body sites and motions. Optimal designs lead to a two- to threefold improvement in performance in terms of energy density. A set of manufactured designs were consistently rated as providing more resistance than baselines in a comparative user study
[ { "created": "Thu, 21 Apr 2022 09:41:22 GMT", "version": "v1" } ]
2022-04-22
[ [ "Vechev", "Velko", "" ], [ "Zarate", "Juan", "" ], [ "Thomaszewski", "Bernhard", "" ], [ "Hilliges", "Otmar", "" ] ]
Kinesthetic garments provide physical feedback on body posture and motion through tailored distributions of reinforced material. Their ability to selectively stiffen a garment's response to specific motions makes them appealing for rehabilitation, sports, robotics, and many other application fields. However, finding designs that distribute a given amount of reinforcement material to maximally stiffen the response to specified motions is a challenging problem. In this work, we propose an optimization-driven approach for automated design of reinforcement patterns for kinesthetic garments. Our main contribution is to cast this design task as an on-body topology optimization problem. Our method allows designers to explore a continuous range of designs corresponding to various amounts of reinforcement coverage. Our model captures both tight contact and lift-off separation between cloth and body. We demonstrate our method on a variety of reinforcement design problems for different body sites and motions. Optimal designs lead to a two- to threefold improvement in performance in terms of energy density. A set of manufactured designs were consistently rated as providing more resistance than baselines in a comparative user study.
1603.08777
Wolfgang Mulzer
Pat Morin, Wolfgang Mulzer, Tommy Reddad
Encoding Arguments
50 pages, 7 figures
ACM Computing Surveys (CSUR), 50(3), July 2017, Article 46
10.1145/3084288
null
cs.IT cs.DS math.IT math.PR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Many proofs in discrete mathematics and theoretical computer science are based on the probabilistic method. To prove the existence of a good object, we pick a random object and show that it is bad with low probability. This method is effective, but the underlying probabilistic machinery can be daunting. "Encoding arguments" provide an alternative presentation in which probabilistic reasoning is encapsulated in a "uniform encoding lemma". This lemma provides an upper bound on the probability of an event using the fact that a uniformly random choice from a set of size $n$ cannot be encoded with fewer than $\log_2 n$ bits on average. With the lemma, the argument reduces to devising an encoding where bad objects have short codewords. In this expository article, we describe the basic method and provide a simple tutorial on how to use it. After that, we survey many applications to classic problems from discrete mathematics and computer science. We also give a generalization for the case of non-uniform distributions, as well as a rigorous justification for the use of non-integer codeword lengths in encoding arguments. These latter two results allow encoding arguments to be applied more widely and to produce tighter results.
[ { "created": "Tue, 29 Mar 2016 14:10:43 GMT", "version": "v1" }, { "created": "Mon, 15 May 2017 13:36:55 GMT", "version": "v2" } ]
2017-08-01
[ [ "Morin", "Pat", "" ], [ "Mulzer", "Wolfgang", "" ], [ "Reddad", "Tommy", "" ] ]
Many proofs in discrete mathematics and theoretical computer science are based on the probabilistic method. To prove the existence of a good object, we pick a random object and show that it is bad with low probability. This method is effective, but the underlying probabilistic machinery can be daunting. "Encoding arguments" provide an alternative presentation in which probabilistic reasoning is encapsulated in a "uniform encoding lemma". This lemma provides an upper bound on the probability of an event using the fact that a uniformly random choice from a set of size $n$ cannot be encoded with fewer than $\log_2 n$ bits on average. With the lemma, the argument reduces to devising an encoding where bad objects have short codewords. In this expository article, we describe the basic method and provide a simple tutorial on how to use it. After that, we survey many applications to classic problems from discrete mathematics and computer science. We also give a generalization for the case of non-uniform distributions, as well as a rigorous justification for the use of non-integer codeword lengths in encoding arguments. These latter two results allow encoding arguments to be applied more widely and to produce tighter results.
1606.01750
Tengda Ying
Tengda Ying, Wenjiang Feng, Weifeng Su, and Weiheng Jiang
On the Degrees of Freedom of MIMO X Networks with Non-Cooperative Transmitters
version 2, 31 pages, 7 figures, submitted
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Due to limited backhaul/feedback link capacity and channel state information (CSI) feedback delay, obtaining global and instantaneous channel state information at the transmitter (CSIT) is a main obstacle in practice. In this paper, novel transmission schemes are proposed for a class of interference networks that can achieve new trade-off regions between the sum of degrees of freedom (sum-DoF) and CSI feedback delay with distributed and temperately-delayed CSIT. More specifically, a distributed space-time interference alignment (STIA) scheme is proposed for the two-user multiple-input multiple-output (MIMO) X channel via a novel precoding method called Cyclic Zero-padding. The achieved sum-DoFs herein for certain antenna configurations are greater than the best known sum-DoFs in literature with delayed CSIT. Furthermore, we propose a distributed retrospective interference alignment (RIA) scheme that achieves more than 1 sum-DoF for the K-user single-input single-output (SISO) X network. Finally, we extend the distributed STIA to the MxN user multiple-input single-output (MISO) X network where each transmitter has N-1 antennas and each receiver has a single antenna, yielding the same sum-DoF as that in the global and instantaneous CSIT case. The discussion and the result of the MISO X network can be extended to the MIMO case due to spatial scale invariance property.
[ { "created": "Mon, 6 Jun 2016 14:04:26 GMT", "version": "v1" }, { "created": "Fri, 12 Aug 2016 09:17:29 GMT", "version": "v2" } ]
2016-08-15
[ [ "Ying", "Tengda", "" ], [ "Feng", "Wenjiang", "" ], [ "Su", "Weifeng", "" ], [ "Jiang", "Weiheng", "" ] ]
Due to limited backhaul/feedback link capacity and channel state information (CSI) feedback delay, obtaining global and instantaneous channel state information at the transmitter (CSIT) is a major obstacle in practice. In this paper, novel transmission schemes are proposed for a class of interference networks that achieve new trade-off regions between the sum of degrees of freedom (sum-DoF) and CSI feedback delay with distributed and moderately delayed CSIT. More specifically, a distributed space-time interference alignment (STIA) scheme is proposed for the two-user multiple-input multiple-output (MIMO) X channel via a novel precoding method called cyclic zero-padding. The sum-DoFs achieved here for certain antenna configurations are greater than the best known sum-DoFs in the literature with delayed CSIT. Furthermore, we propose a distributed retrospective interference alignment (RIA) scheme that achieves more than 1 sum-DoF for the K-user single-input single-output (SISO) X network. Finally, we extend the distributed STIA to the MxN-user multiple-input single-output (MISO) X network, where each transmitter has N-1 antennas and each receiver has a single antenna, yielding the same sum-DoF as in the case of global and instantaneous CSIT. The discussion and results for the MISO X network extend to the MIMO case due to the spatial scale invariance property.
2007.12829
Guo Zhong
Shi-Xun Lin, Guo Zhong, Ting Shu
Joint Featurewise Weighting and Local Structure Learning for Multi-view Subspace Clustering
null
null
null
null
cs.LG cs.CV stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Multi-view clustering integrates multiple feature sets, which reveal distinct aspects of the data and provide complementary information to each other, to improve the clustering performance. It remains challenging to effectively exploit complementary information across multiple views since the original data often contain noise and are highly redundant. Moreover, most existing multi-view clustering methods only aim to explore the consistency of all views while ignoring the local structure of each view. However, it is necessary to take the local structure of each view into consideration, because different views would present different geometric structures while admitting the same cluster structure. To address the above issues, we propose a novel multi-view subspace clustering method via simultaneously assigning weights for different features and capturing local information of data in view-specific self-representation feature spaces. Especially, a common cluster structure regularization is adopted to guarantee consistency among different views. An efficient algorithm based on an augmented Lagrangian multiplier is also developed to solve the associated optimization problem. Experiments conducted on several benchmark datasets demonstrate that the proposed method achieves state-of-the-art performance. We provide the Matlab code on https://github.com/Ekin102003/JFLMSC.
[ { "created": "Sat, 25 Jul 2020 01:57:57 GMT", "version": "v1" } ]
2020-07-28
[ [ "Lin", "Shi-Xun", "" ], [ "Zhong", "Guo", "" ], [ "Shu", "Ting", "" ] ]
Multi-view clustering integrates multiple feature sets, which reveal distinct aspects of the data and provide complementary information to each other, to improve clustering performance. It remains challenging to effectively exploit complementary information across multiple views, since the original data often contain noise and are highly redundant. Moreover, most existing multi-view clustering methods aim only to explore the consistency of all views while ignoring the local structure of each view. However, it is necessary to take the local structure of each view into consideration, because different views can present different geometric structures while admitting the same cluster structure. To address these issues, we propose a novel multi-view subspace clustering method that simultaneously assigns weights to different features and captures local information of the data in view-specific self-representation feature spaces. In particular, a common cluster structure regularization is adopted to guarantee consistency among different views. An efficient algorithm based on the augmented Lagrangian multiplier method is also developed to solve the associated optimization problem. Experiments conducted on several benchmark datasets demonstrate that the proposed method achieves state-of-the-art performance. We provide the Matlab code at https://github.com/Ekin102003/JFLMSC.
2107.11245
Fei Zhang
Fei Zhang, Chaochen Gu, and Feng Yang
An Improved Algorithm of Robot Path Planning in Complex Environment Based on Double DQN
Accepted in International Conference on Guidance, Navigation and Control,2020
null
null
null
cs.RO cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep Q Network (DQN) has several limitations when applied in planning a path in environment with a number of dilemmas according to our experiment. The reward function may be hard to model, and successful experience transitions are difficult to find in experience replay. In this context, this paper proposes an improved Double DQN (DDQN) to solve the problem by reference to A* and Rapidly-Exploring Random Tree (RRT). In order to achieve the rich experiments in experience replay, the initialization of robot in each training round is redefined based on RRT strategy. In addition, reward for the free positions is specially designed to accelerate the learning process according to the definition of position cost in A*. The simulation experimental results validate the efficiency of the improved DDQN, and robot could successfully learn the ability of obstacle avoidance and optimal path planning in which DQN or DDQN has no effect.
[ { "created": "Fri, 23 Jul 2021 14:03:04 GMT", "version": "v1" } ]
2021-07-26
[ [ "Zhang", "Fei", "" ], [ "Gu", "Chaochen", "" ], [ "Yang", "Feng", "" ] ]
In our experiments, Deep Q Network (DQN) exhibits several limitations when applied to path planning in environments containing many dilemmas. The reward function may be hard to model, and successful experience transitions are difficult to find in experience replay. In this context, this paper proposes an improved Double DQN (DDQN) that addresses the problem by drawing on A* and the Rapidly-Exploring Random Tree (RRT). To obtain rich experience transitions in experience replay, the initialization of the robot in each training round is redefined based on an RRT strategy. In addition, the reward for free positions is specially designed to accelerate the learning process, following the definition of position cost in A*. Simulation results validate the efficiency of the improved DDQN: the robot successfully learns obstacle avoidance and optimal path planning in settings where plain DQN or DDQN fails.
2010.08390
Eleni Bohacek
Eleni Bohacek, Andrew J. Coates, David R. Selviah
Volumetric Calculation of Quantization Error in 3-D Vision Systems
As submitted to IEEE Transactions on Pattern Analysis and Machine Intelligence on 4th September 2020
null
null
null
cs.CV eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper investigates how the inherent quantization of camera sensors introduces uncertainty in the calculated position of an observed feature during 3-D mapping. It is typically assumed that pixels and scene features are points, however, a pixel is a two-dimensional area that maps onto multiple points in the scene. This uncertainty region is a bound for quantization error in the calculated point positions. Earlier studies calculated the volume of two intersecting pixel views, approximated as a cuboid, by projecting pyramids and cones from the pixels into the scene. In this paper, we reverse this approach by generating an array of scene points and calculating which scene points are detected by which pixel in each camera. This enables us to map the uncertainty regions for every pixel correspondence for a given camera system in one calculation, without approximating the complex shapes. The dependence of the volumes of the uncertainty regions on camera baseline length, focal length, pixel size, and distance to object, shows that earlier studies overestimated the quantization error by at least a factor of two. For static camera systems the method can also be used to determine volumetric scene geometry without the need to calculate disparity maps.
[ { "created": "Fri, 16 Oct 2020 13:48:30 GMT", "version": "v1" } ]
2020-10-19
[ [ "Bohacek", "Eleni", "" ], [ "Coates", "Andrew J.", "" ], [ "Selviah", "David R.", "" ] ]
This paper investigates how the inherent quantization of camera sensors introduces uncertainty in the calculated position of an observed feature during 3-D mapping. It is typically assumed that pixels and scene features are points; however, a pixel is a two-dimensional area that maps onto multiple points in the scene. This uncertainty region bounds the quantization error in the calculated point positions. Earlier studies calculated the volume of two intersecting pixel views, approximated as a cuboid, by projecting pyramids and cones from the pixels into the scene. In this paper, we reverse this approach by generating an array of scene points and calculating which scene points are detected by which pixel in each camera. This enables us to map the uncertainty regions for every pixel correspondence for a given camera system in one calculation, without approximating the complex shapes. The dependence of the volumes of the uncertainty regions on camera baseline length, focal length, pixel size, and distance to the object shows that earlier studies overestimated the quantization error by at least a factor of two. For static camera systems, the method can also be used to determine volumetric scene geometry without the need to calculate disparity maps.
2403.05110
Jensen Gao
Jensen Gao, Annie Xie, Ted Xiao, Chelsea Finn, Dorsa Sadigh
Efficient Data Collection for Robotic Manipulation via Compositional Generalization
RSS 2024
null
null
null
cs.RO cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Data collection has become an increasingly important problem in robotic manipulation, yet there still lacks much understanding of how to effectively collect data to facilitate broad generalization. Recent works on large-scale robotic data collection typically vary many environmental factors of variation (e.g., object types, table textures) during data collection, to cover a diverse range of scenarios. However, they do not explicitly account for the possible compositional abilities of policies trained on the data. If robot policies can compose environmental factors from their data to succeed when encountering unseen factor combinations, we can exploit this to avoid collecting data for situations that composition would address. To investigate this possibility, we conduct thorough empirical studies both in simulation and on a real robot that compare data collection strategies and assess whether visual imitation learning policies can compose environmental factors. We find that policies do exhibit composition, although leveraging prior robotic datasets is critical for this on a real robot. We use these insights to propose better in-domain data collection strategies that exploit composition, which can induce better generalization than naive approaches for the same amount of effort during data collection. We further demonstrate that a real robot policy trained on data from such a strategy achieves a success rate of 77.5% when transferred to entirely new environments that encompass unseen combinations of environmental factors, whereas policies trained using data collected without accounting for environmental variation fail to transfer effectively, with a success rate of only 2.5%. We provide videos at http://iliad.stanford.edu/robot-data-comp/.
[ { "created": "Fri, 8 Mar 2024 07:15:38 GMT", "version": "v1" }, { "created": "Tue, 21 May 2024 14:18:47 GMT", "version": "v2" } ]
2024-05-22
[ [ "Gao", "Jensen", "" ], [ "Xie", "Annie", "" ], [ "Xiao", "Ted", "" ], [ "Finn", "Chelsea", "" ], [ "Sadigh", "Dorsa", "" ] ]
Data collection has become an increasingly important problem in robotic manipulation, yet little is understood about how to collect data effectively to facilitate broad generalization. Recent works on large-scale robotic data collection typically vary many environmental factors of variation (e.g., object types, table textures) during data collection to cover a diverse range of scenarios. However, they do not explicitly account for the possible compositional abilities of policies trained on the data. If robot policies can compose environmental factors from their data to succeed when encountering unseen factor combinations, we can exploit this to avoid collecting data for situations that composition would address. To investigate this possibility, we conduct thorough empirical studies, both in simulation and on a real robot, that compare data collection strategies and assess whether visual imitation learning policies can compose environmental factors. We find that policies do exhibit composition, although leveraging prior robotic datasets is critical for this on a real robot. We use these insights to propose better in-domain data collection strategies that exploit composition, which can induce better generalization than naive approaches for the same data collection effort. We further demonstrate that a real robot policy trained on data from such a strategy achieves a success rate of 77.5% when transferred to entirely new environments that encompass unseen combinations of environmental factors, whereas policies trained on data collected without accounting for environmental variation fail to transfer effectively, with a success rate of only 2.5%. We provide videos at http://iliad.stanford.edu/robot-data-comp/.
1606.03369
Thiago Vallin Spina PhD
Thiago Vallin Spina and Alexandre Xavier Falc\~ao
FOMTrace: Interactive Video Segmentation By Image Graphs and Fuzzy Object Models
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Common users have changed from mere consumers to active producers of multimedia data content. Video editing plays an important role in this scenario, calling for simple segmentation tools that can handle fast-moving and deformable video objects with possible occlusions, color similarities with the background, among other challenges. We present an interactive video segmentation method, named FOMTrace, which addresses the problem in an effective and efficient way. From a user-provided object mask in a first frame, the method performs semi-automatic video segmentation on a spatiotemporal superpixel-graph, and then estimates a Fuzzy Object Model (FOM), which refines segmentation of the second frame by constraining delineation on a pixel-graph within a region where the object's boundary is expected to be. The user can correct/accept the refined object mask in the second frame, which is then similarly used to improve the spatiotemporal video segmentation of the remaining frames. Both steps are repeated alternately, within interactive response times, until the segmentation refinement of the final frame is accepted by the user. Extensive experiments demonstrate FOMTrace's ability for tracing objects in comparison with state-of-the-art approaches for interactive video segmentation, supervised, and unsupervised object tracking.
[ { "created": "Fri, 10 Jun 2016 15:30:30 GMT", "version": "v1" } ]
2016-06-13
[ [ "Spina", "Thiago Vallin", "" ], [ "Falcão", "Alexandre Xavier", "" ] ]
Common users have changed from mere consumers to active producers of multimedia content. Video editing plays an important role in this scenario, calling for simple segmentation tools that can handle fast-moving and deformable video objects with possible occlusions and color similarities with the background, among other challenges. We present an interactive video segmentation method, named FOMTrace, which addresses the problem in an effective and efficient way. From a user-provided object mask in a first frame, the method performs semi-automatic video segmentation on a spatiotemporal superpixel graph, and then estimates a Fuzzy Object Model (FOM), which refines segmentation of the second frame by constraining delineation on a pixel graph within a region where the object's boundary is expected to be. The user can correct or accept the refined object mask in the second frame, which is then similarly used to improve the spatiotemporal video segmentation of the remaining frames. Both steps are repeated alternately, within interactive response times, until the segmentation refinement of the final frame is accepted by the user. Extensive experiments demonstrate FOMTrace's ability to trace objects in comparison with state-of-the-art approaches for interactive video segmentation and for supervised and unsupervised object tracking.
1806.09472
Andreas Brandstadt
Andreas Brandst\"adt, Raffaele Mosca
Maximum Weight Independent Sets for ($S_{1,2,4}$,Triangle)-Free Graphs in Polynomial Time
arXiv admin note: substantial text overlap with arXiv:1511.08066
null
null
null
cs.DM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Maximum Weight Independent Set (MWIS) problem on finite undirected graphs with vertex weights asks for a set of pairwise nonadjacent vertices of maximum weight sum. MWIS is one of the most investigated and most important algorithmic graph problems; it is well known to be NP-complete, and it remains NP-complete even under various strong restrictions such as for triangle-free graphs. Its complexity for $P_k$-free graphs, $k \ge 7$, is an open problem. In \cite{BraMos2018}, it is shown that MWIS can be solved in polynomial time for ($P_7$,triangle)-free graphs. This result is extended by Maffray and Pastor \cite{MafPas2016} showing that MWIS can be solved in polynomial time for ($P_7$,bull)-free graphs. In the same paper, they also showed that MWIS can be solved in polynomial time for ($S_{1,2,3}$,bull)-free graphs. In this paper, using a similar approach as in \cite{BraMos2018}, we show that MWIS can be solved in polynomial time for ($S_{1,2,4}$,triangle)-free graphs which generalizes the result for ($P_7$,triangle)-free graphs.
[ { "created": "Fri, 22 Jun 2018 13:57:55 GMT", "version": "v1" }, { "created": "Fri, 11 Jan 2019 15:12:21 GMT", "version": "v2" } ]
2019-01-14
[ [ "Brandstädt", "Andreas", "" ], [ "Mosca", "Raffaele", "" ] ]
The Maximum Weight Independent Set (MWIS) problem on finite undirected graphs with vertex weights asks for a set of pairwise nonadjacent vertices of maximum weight sum. MWIS is one of the most investigated and most important algorithmic graph problems; it is well known to be NP-complete, and it remains NP-complete even under various strong restrictions such as for triangle-free graphs. Its complexity for $P_k$-free graphs, $k \ge 7$, is an open problem. In \cite{BraMos2018}, it is shown that MWIS can be solved in polynomial time for ($P_7$,triangle)-free graphs. This result is extended by Maffray and Pastor \cite{MafPas2016}, who showed that MWIS can be solved in polynomial time for ($P_7$,bull)-free graphs. In the same paper, they also showed that MWIS can be solved in polynomial time for ($S_{1,2,3}$,bull)-free graphs. In this paper, using a similar approach as in \cite{BraMos2018}, we show that MWIS can be solved in polynomial time for ($S_{1,2,4}$,triangle)-free graphs, which generalizes the result for ($P_7$,triangle)-free graphs.
2210.07800
Samrat Mukhopadhyay
Samrat Mukhopadhyay and Himanshu Bhusan Mishra
Multiple Choice Hard Thresholding Pursuit (MCHTP) for Simultaneous Sparse Recovery and Sparsity Order Estimation
9 pages, 4 figures, tech-report
null
null
null
cs.IT math.IT
http://creativecommons.org/licenses/by-nc-nd/4.0/
We address the problem of sparse recovery using greedy compressed sensing recovery algorithms, without explicit knowledge of the sparsity. Estimating the sparsity order is a crucial problem in many practical scenarios, e.g., wireless communications, where the exact value of the sparsity order of the unknown channel may be unavailable a priori. In this paper we propose a new greedy algorithm, referred to as the Multiple Choice Hard Thresholding Pursuit (MCHTP), which suitably modifies the popular hard thresholding pursuit (HTP) to iteratively recover the unknown sparse vector along with its sparsity order. We provide provable performance guarantees which ensure that MCHTP can estimate the sparsity order exactly, along with recovering the unknown sparse vector exactly from noiseless measurements. The simulation results corroborate the theoretical findings, demonstrating that even without exact sparsity knowledge, with only a loose upper bound on the sparsity, MCHTP exhibits outstanding recovery performance, almost identical to that of the conventional HTP with exact sparsity knowledge. Furthermore, simulation results demonstrate the much lower computational complexity of MCHTP compared to other state-of-the-art techniques like MSP.
[ { "created": "Fri, 14 Oct 2022 13:30:36 GMT", "version": "v1" }, { "created": "Tue, 25 Oct 2022 12:50:24 GMT", "version": "v2" } ]
2022-10-26
[ [ "Mukhopadhyay", "Samrat", "" ], [ "Mishra", "Himanshu Bhusan", "" ] ]
We address the problem of sparse recovery using greedy compressed sensing recovery algorithms, without explicit knowledge of the sparsity. Estimating the sparsity order is a crucial problem in many practical scenarios, e.g., wireless communications, where the exact value of the sparsity order of the unknown channel may be unavailable a priori. In this paper we propose a new greedy algorithm, referred to as the Multiple Choice Hard Thresholding Pursuit (MCHTP), which suitably modifies the popular hard thresholding pursuit (HTP) to iteratively recover the unknown sparse vector along with its sparsity order. We provide provable performance guarantees which ensure that MCHTP can estimate the sparsity order exactly, along with recovering the unknown sparse vector exactly from noiseless measurements. The simulation results corroborate the theoretical findings, demonstrating that even without exact sparsity knowledge, with only a loose upper bound on the sparsity, MCHTP exhibits outstanding recovery performance, almost identical to that of the conventional HTP with exact sparsity knowledge. Furthermore, simulation results demonstrate the much lower computational complexity of MCHTP compared to other state-of-the-art techniques like MSP.
2405.01435
Jean P. Martins
Jean Martins and Igor Almeida and Ricardo Souza and Silvia Lins
Closed-form congestion control via deep symbolic regression
null
null
null
null
cs.NI cs.LG
http://creativecommons.org/licenses/by-nc-sa/4.0/
As mobile networks embrace the 5G era, the interest in adopting Reinforcement Learning (RL) algorithms to handle challenges in ultra-low-latency and high throughput scenarios increases. Simultaneously, the advent of packetized fronthaul networks imposes demanding requirements that traditional congestion control mechanisms cannot accomplish, highlighting the potential of RL-based congestion control algorithms. Although learning RL policies optimized for satisfying the stringent fronthaul requirements is feasible, the adoption of neural network models in real deployments still poses some challenges regarding real-time inference and interpretability. This paper proposes a methodology to deal with such challenges while maintaining the performance and generalization capabilities provided by a baseline RL policy. The method consists of (1) training a congestion control policy specialized in fronthaul-like networks via reinforcement learning, (2) collecting state-action experiences from the baseline, and (3) performing deep symbolic regression on the collected dataset. The proposed process overcomes the challenges related to inference-time limitations through closed-form expressions that approximate the baseline performance (link utilization, delay, and fairness) and which can be directly implemented in any programming language. Finally, we analyze the inner workings of the closed-form expressions.
[ { "created": "Thu, 28 Mar 2024 14:31:37 GMT", "version": "v1" } ]
2024-05-03
[ [ "Martins", "Jean", "" ], [ "Almeida", "Igor", "" ], [ "Souza", "Ricardo", "" ], [ "Lins", "Silvia", "" ] ]
As mobile networks embrace the 5G era, the interest in adopting Reinforcement Learning (RL) algorithms to handle challenges in ultra-low-latency and high throughput scenarios increases. Simultaneously, the advent of packetized fronthaul networks imposes demanding requirements that traditional congestion control mechanisms cannot accomplish, highlighting the potential of RL-based congestion control algorithms. Although learning RL policies optimized for satisfying the stringent fronthaul requirements is feasible, the adoption of neural network models in real deployments still poses some challenges regarding real-time inference and interpretability. This paper proposes a methodology to deal with such challenges while maintaining the performance and generalization capabilities provided by a baseline RL policy. The method consists of (1) training a congestion control policy specialized in fronthaul-like networks via reinforcement learning, (2) collecting state-action experiences from the baseline, and (3) performing deep symbolic regression on the collected dataset. The proposed process overcomes the challenges related to inference-time limitations through closed-form expressions that approximate the baseline performance (link utilization, delay, and fairness) and which can be directly implemented in any programming language. Finally, we analyze the inner workings of the closed-form expressions.
2111.12449
Le Yang
Le Yang, Junwei Han, Tao Zhao, Tianwei Lin, Dingwen Zhang, Jianxin Chen
Background-Click Supervision for Temporal Action Localization
To appear at TPAMI
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Weakly supervised temporal action localization aims at learning the instance-level action pattern from the video-level labels, where a significant challenge is action-context confusion. To overcome this challenge, one recent work builds an action-click supervision framework. It requires similar annotation costs but can steadily improve the localization performance when compared to the conventional weakly supervised methods. In this paper, by revealing that the performance bottleneck of the existing approaches mainly comes from the background errors, we find that a stronger action localizer can be trained with labels on the background video frames rather than those on the action frames. To this end, we convert the action-click supervision to the background-click supervision and develop a novel method, called BackTAL. Specifically, BackTAL implements two-fold modeling on the background video frames, i.e. the position modeling and the feature modeling. In position modeling, we not only conduct supervised learning on the annotated video frames but also design a score separation module to enlarge the score differences between the potential action frames and backgrounds. In feature modeling, we propose an affinity module to measure frame-specific similarities among neighboring frames and dynamically attend to informative neighbors when calculating temporal convolution. Extensive experiments on three benchmarks are conducted, which demonstrate the high performance of the established BackTAL and the rationality of the proposed background-click supervision. Code is available at https://github.com/VividLe/BackTAL.
[ { "created": "Wed, 24 Nov 2021 12:02:52 GMT", "version": "v1" } ]
2021-11-25
[ [ "Yang", "Le", "" ], [ "Han", "Junwei", "" ], [ "Zhao", "Tao", "" ], [ "Lin", "Tianwei", "" ], [ "Zhang", "Dingwen", "" ], [ "Chen", "Jianxin", "" ] ]
Weakly supervised temporal action localization aims at learning the instance-level action pattern from the video-level labels, where a significant challenge is action-context confusion. To overcome this challenge, one recent work builds an action-click supervision framework. It requires similar annotation costs but can steadily improve the localization performance when compared to the conventional weakly supervised methods. In this paper, by revealing that the performance bottleneck of the existing approaches mainly comes from the background errors, we find that a stronger action localizer can be trained with labels on the background video frames rather than those on the action frames. To this end, we convert the action-click supervision to the background-click supervision and develop a novel method, called BackTAL. Specifically, BackTAL implements two-fold modeling on the background video frames, i.e. the position modeling and the feature modeling. In position modeling, we not only conduct supervised learning on the annotated video frames but also design a score separation module to enlarge the score differences between the potential action frames and backgrounds. In feature modeling, we propose an affinity module to measure frame-specific similarities among neighboring frames and dynamically attend to informative neighbors when calculating temporal convolution. Extensive experiments on three benchmarks are conducted, which demonstrate the high performance of the established BackTAL and the rationality of the proposed background-click supervision. Code is available at https://github.com/VividLe/BackTAL.
1803.05355
James Thorne
James Thorne, Andreas Vlachos, Christos Christodoulopoulos and Arpit Mittal
FEVER: a large-scale dataset for Fact Extraction and VERification
Updated version of NAACL2018 paper. Data is released on http://fever.ai
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we introduce a new publicly available dataset for verification against textual sources, FEVER: Fact Extraction and VERification. It consists of 185,445 claims generated by altering sentences extracted from Wikipedia and subsequently verified without knowledge of the sentence they were derived from. The claims are classified as Supported, Refuted or NotEnoughInfo by annotators achieving 0.6841 in Fleiss $\kappa$. For the first two classes, the annotators also recorded the sentence(s) forming the necessary evidence for their judgment. To characterize the challenge of the dataset presented, we develop a pipeline approach and compare it to suitably designed oracles. The best accuracy we achieve on labeling a claim accompanied by the correct evidence is 31.87%, while if we ignore the evidence we achieve 50.91%. Thus we believe that FEVER is a challenging testbed that will help stimulate progress on claim verification against textual sources.
[ { "created": "Wed, 14 Mar 2018 15:30:37 GMT", "version": "v1" }, { "created": "Mon, 16 Apr 2018 23:08:25 GMT", "version": "v2" }, { "created": "Tue, 18 Dec 2018 10:58:20 GMT", "version": "v3" } ]
2018-12-19
[ [ "Thorne", "James", "" ], [ "Vlachos", "Andreas", "" ], [ "Christodoulopoulos", "Christos", "" ], [ "Mittal", "Arpit", "" ] ]
In this paper we introduce a new publicly available dataset for verification against textual sources, FEVER: Fact Extraction and VERification. It consists of 185,445 claims generated by altering sentences extracted from Wikipedia and subsequently verified without knowledge of the sentence they were derived from. The claims are classified as Supported, Refuted or NotEnoughInfo by annotators achieving 0.6841 in Fleiss $\kappa$. For the first two classes, the annotators also recorded the sentence(s) forming the necessary evidence for their judgment. To characterize the challenge of the dataset presented, we develop a pipeline approach and compare it to suitably designed oracles. The best accuracy we achieve on labeling a claim accompanied by the correct evidence is 31.87%, while if we ignore the evidence we achieve 50.91%. Thus we believe that FEVER is a challenging testbed that will help stimulate progress on claim verification against textual sources.
2003.01515
Ziqi Liu
Ziqi Liu, Dong Wang, Qianyu Yu, Zhiqiang Zhang, Yue Shen, Jian Ma, Wenliang Zhong, Jinjie Gu, Jun Zhou, Shuang Yang, Yuan Qi
Graph Representation Learning for Merchant Incentive Optimization in Mobile Payment Marketing
null
null
null
null
cs.SI cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Mobile payment platforms such as Alipay have been widely used in our daily lives. To further promote mobile payment activities, it is important to run marketing campaigns under a limited budget by providing incentives such as coupons and commissions to merchants. As a result, incentive optimization is the key to maximizing the commercial objective of the marketing campaign. Through the analyses of online experiments, we found that the transaction network can subtly describe the similarity of merchants' responses to different incentives, which is of great use in the incentive optimization problem. In this paper, we present a graph representation learning method atop transaction networks for merchant incentive optimization in mobile payment marketing. With limited samples collected from online experiments, our end-to-end method first learns merchant representations based on an attributed transaction network, then effectively models the correlations between the commercial objectives each merchant may achieve and the incentives under varying treatments. Thus we are able to model the sensitivity to incentives for each merchant, and spend most of the budget on those merchants that show strong sensitivities in the marketing campaign. Extensive offline and online experimental results at Alipay demonstrate the effectiveness of our proposed approach.
[ { "created": "Thu, 27 Feb 2020 18:48:55 GMT", "version": "v1" } ]
2020-03-04
[ [ "Liu", "Ziqi", "" ], [ "Wang", "Dong", "" ], [ "Yu", "Qianyu", "" ], [ "Zhang", "Zhiqiang", "" ], [ "Shen", "Yue", "" ], [ "Ma", "Jian", "" ], [ "Zhong", "Wenliang", "" ], [ "Gu", "Jinjie", "" ], [ "Zhou", "Jun", "" ], [ "Yang", "Shuang", "" ], [ "Qi", "Yuan", "" ] ]
Mobile payment platforms such as Alipay have been widely used in our daily lives. To further promote mobile payment activities, it is important to run marketing campaigns under a limited budget by providing incentives such as coupons and commissions to merchants. As a result, incentive optimization is the key to maximizing the commercial objective of the marketing campaign. Through the analyses of online experiments, we found that the transaction network can subtly describe the similarity of merchants' responses to different incentives, which is of great use in the incentive optimization problem. In this paper, we present a graph representation learning method atop transaction networks for merchant incentive optimization in mobile payment marketing. With limited samples collected from online experiments, our end-to-end method first learns merchant representations based on an attributed transaction network, then effectively models the correlations between the commercial objectives each merchant may achieve and the incentives under varying treatments. Thus we are able to model the sensitivity to incentives for each merchant, and spend most of the budget on those merchants that show strong sensitivities in the marketing campaign. Extensive offline and online experimental results at Alipay demonstrate the effectiveness of our proposed approach.
2205.14790
Orestis Papadigenopoulos
Orestis Papadigenopoulos, Constantine Caramanis, Sanjay Shakkottai
Non-Stationary Bandits under Recharging Payoffs: Improved Planning with Sublinear Regret
Accepted for publication to NeurIPS 2022
null
null
null
cs.LG cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The stochastic multi-armed bandit setting has been recently studied in the non-stationary regime, where the mean payoff of each action is a non-decreasing function of the number of rounds passed since it was last played. This model captures natural behavioral aspects of the users which crucially determine the performance of recommendation platforms, ad placement systems, and more. Even assuming prior knowledge of the mean payoff functions, computing an optimal planning in the above model is NP-hard, while the state-of-the-art is a $1/4$-approximation algorithm for the case where at most one arm can be played per round. We first focus on the setting where the mean payoff functions are known. In this setting, we significantly improve the best-known guarantees for the planning problem by developing a polynomial-time $(1-{1}/{e})$-approximation algorithm (asymptotically and in expectation), based on a novel combination of randomized LP rounding and a time-correlated (interleaved) scheduling method. Furthermore, our algorithm achieves improved guarantees -- compared to prior work -- for the case where more than one arm can be played at each round. Moving to the bandit setting, when the mean payoff functions are initially unknown, we show how our algorithm can be transformed into a bandit algorithm with sublinear regret.
[ { "created": "Sun, 29 May 2022 23:55:36 GMT", "version": "v1" }, { "created": "Wed, 12 Oct 2022 04:19:27 GMT", "version": "v2" } ]
2022-10-13
[ [ "Papadigenopoulos", "Orestis", "" ], [ "Caramanis", "Constantine", "" ], [ "Shakkottai", "Sanjay", "" ] ]
The stochastic multi-armed bandit setting has been recently studied in the non-stationary regime, where the mean payoff of each action is a non-decreasing function of the number of rounds passed since it was last played. This model captures natural behavioral aspects of the users which crucially determine the performance of recommendation platforms, ad placement systems, and more. Even assuming prior knowledge of the mean payoff functions, computing an optimal planning in the above model is NP-hard, while the state-of-the-art is a $1/4$-approximation algorithm for the case where at most one arm can be played per round. We first focus on the setting where the mean payoff functions are known. In this setting, we significantly improve the best-known guarantees for the planning problem by developing a polynomial-time $(1-{1}/{e})$-approximation algorithm (asymptotically and in expectation), based on a novel combination of randomized LP rounding and a time-correlated (interleaved) scheduling method. Furthermore, our algorithm achieves improved guarantees -- compared to prior work -- for the case where more than one arm can be played at each round. Moving to the bandit setting, when the mean payoff functions are initially unknown, we show how our algorithm can be transformed into a bandit algorithm with sublinear regret.
2012.15231
Ivan Letteri
Ivan Letteri, Antonio Di Cecco, Abeer Dyoub, Giuseppe Della Penna
A Novel Resampling Technique for Imbalanced Dataset Optimization
null
null
null
null
cs.LG stat.ML
http://creativecommons.org/licenses/by/4.0/
Despite the enormous amount of data, particular events of interest can still be quite rare. Classification of rare events is a common problem in many domains, such as fraudulent transactions, malware traffic analysis and network intrusion detection. Many studies have been developed for malware detection using machine learning approaches on various datasets, but as far as we know only the MTA-KDD'19 dataset has the peculiarity of updating the representative set of malicious traffic on a daily basis. This daily updating is the added value of the dataset, but it translates into a potential class imbalance problem in the RRw-Optimized MTA-KDD'19. We capture the difficulties of class distribution in real datasets by considering four types of minority class examples: safe, borderline, rare and outliers. In this work, we developed two versions of the Generative Silhouette Resampling 1-Nearest Neighbour (G1Nos) oversampling algorithm for dealing with the class imbalance problem. The first module of the G1Nos algorithms performs a silhouette coefficient-based instance selection identifying the critical threshold of Imbalance Degree (ID); the second module generates synthetic samples using a SMOTE-like oversampling algorithm. The balancing of the classes is done by our G1Nos algorithms to re-establish the proportions between the two classes of the used dataset. The experimental results show that our oversampling algorithm works better than the other two SOTA methodologies in all the metrics considered.
[ { "created": "Wed, 30 Dec 2020 17:17:08 GMT", "version": "v1" } ]
2021-01-01
[ [ "Letteri", "Ivan", "" ], [ "Di Cecco", "Antonio", "" ], [ "Dyoub", "Abeer", "" ], [ "Della Penna", "Giuseppe", "" ] ]
Despite the enormous amount of data, particular events of interest can still be quite rare. Classification of rare events is a common problem in many domains, such as fraudulent transactions, malware traffic analysis and network intrusion detection. Many studies have been developed for malware detection using machine learning approaches on various datasets, but as far as we know only the MTA-KDD'19 dataset has the peculiarity of updating the representative set of malicious traffic on a daily basis. This daily updating is the added value of the dataset, but it translates into a potential class imbalance problem in the RRw-Optimized MTA-KDD'19. We capture the difficulties of class distribution in real datasets by considering four types of minority class examples: safe, borderline, rare and outliers. In this work, we developed two versions of the Generative Silhouette Resampling 1-Nearest Neighbour (G1Nos) oversampling algorithm for dealing with the class imbalance problem. The first module of the G1Nos algorithms performs a silhouette coefficient-based instance selection identifying the critical threshold of Imbalance Degree (ID); the second module generates synthetic samples using a SMOTE-like oversampling algorithm. The balancing of the classes is done by our G1Nos algorithms to re-establish the proportions between the two classes of the used dataset. The experimental results show that our oversampling algorithm works better than the other two SOTA methodologies in all the metrics considered.
1907.00240
Dennis Soemers
Matthew Stephenson, \'Eric Piette, Dennis J. N. J. Soemers, Cameron Browne
An Overview of the Ludii General Game System
Accepted at the IEEE Conference on Games (CoG) 2019 (Demo paper)
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Digital Ludeme Project (DLP) aims to reconstruct and analyse over 1000 traditional strategy games using modern techniques. One of the key aspects of this project is the development of Ludii, a general game system that will be able to model and play the complete range of games required by this project. Such an undertaking will create a wide range of possibilities for new AI challenges. In this paper we describe many of the features of Ludii that can be used. This includes designing and modifying games using the Ludii game description language, creating agents capable of playing these games, and several advantages the system has over prior general game software.
[ { "created": "Sat, 29 Jun 2019 17:16:27 GMT", "version": "v1" } ]
2019-07-02
[ [ "Stephenson", "Matthew", "" ], [ "Piette", "Éric", "" ], [ "Soemers", "Dennis J. N. J.", "" ], [ "Browne", "Cameron", "" ] ]
The Digital Ludeme Project (DLP) aims to reconstruct and analyse over 1000 traditional strategy games using modern techniques. One of the key aspects of this project is the development of Ludii, a general game system that will be able to model and play the complete range of games required by this project. Such an undertaking will create a wide range of possibilities for new AI challenges. In this paper we describe many of the features of Ludii that can be used. This includes designing and modifying games using the Ludii game description language, creating agents capable of playing these games, and several advantages the system has over prior general game software.
1812.01570
Yang Liu
Yang Liu, Wenwu Wang, Volkan Kilic
Intensity Particle Flow SMC-PHD Filter For Audio Speaker Tracking
In Proceedings of the LOCATA Challenge Workshop - a satellite event of IWAENC 2018 (arXiv:1811.08482 )
null
null
LOCATAchallenge/2018/04
cs.SD eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Non-zero diffusion particle flow Sequential Monte Carlo probability hypothesis density (NPF-SMC-PHD) filtering has been recently introduced for multi-speaker tracking. However, the NPF does not consider missed detections, which play a key role in estimating the number of speakers and their states. To address this limitation, we propose to use intensity particle flow (IPF) in the NPF-SMC-PHD filter. The proposed method, IPF-SMC-PHD, considers the clutter intensity and detection probability while no data association algorithms are used for the calculation of particle flow. Experiments on the LOCATA (acoustic source Localization and Tracking) dataset with the sequences of task 4 show that our proposed IPF-SMC-PHD filter improves the tracking performance in terms of estimation accuracy as compared to its baseline counterparts.
[ { "created": "Tue, 4 Dec 2018 18:20:27 GMT", "version": "v1" } ]
2018-12-05
[ [ "Liu", "Yang", "" ], [ "Wang", "Wenwu", "" ], [ "Kilic", "Volkan", "" ] ]
Non-zero diffusion particle flow Sequential Monte Carlo probability hypothesis density (NPF-SMC-PHD) filtering has been recently introduced for multi-speaker tracking. However, the NPF does not consider missed detections, which play a key role in estimating the number of speakers and their states. To address this limitation, we propose to use intensity particle flow (IPF) in the NPF-SMC-PHD filter. The proposed method, IPF-SMC-PHD, considers the clutter intensity and detection probability while no data association algorithms are used for the calculation of particle flow. Experiments on the LOCATA (acoustic source Localization and Tracking) dataset with the sequences of task 4 show that our proposed IPF-SMC-PHD filter improves the tracking performance in terms of estimation accuracy as compared to its baseline counterparts.
2406.05064
Subhojyoti Mukherjee
Subhojyoti Mukherjee, Josiah P. Hanna, Qiaomin Xie, Robert Nowak
Pretraining Decision Transformers with Reward Prediction for In-Context Multi-task Structured Bandit Learning
null
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
In this paper, we study the multi-task structured bandit problem where the goal is to learn a near-optimal algorithm that minimizes cumulative regret. The tasks share a common structure and the algorithm exploits the shared structure to minimize the cumulative regret for an unseen but related test task. We use a transformer as a decision-making algorithm to learn this shared structure so as to generalize to the test task. Prior work on pretrained decision transformers, such as DPT, requires access to the optimal action during training, which may be hard to obtain in several scenarios. Diverging from these works, our learning algorithm does not need knowledge of the optimal action per task during training but predicts a reward vector for each of the actions using only the observed offline data from the diverse training tasks. Finally, at inference time, it selects actions using the reward predictions, employing various exploration strategies in-context for an unseen test task. Our model outperforms other SOTA methods like DPT and Algorithmic Distillation over a series of experiments on several structured bandit problems (linear, bilinear, latent, non-linear). Interestingly, we show that our algorithm, without knowledge of the underlying problem structure, can learn a near-optimal policy in-context by leveraging the shared structure across diverse tasks. We further extend the field of pretrained decision transformers by showing that they can leverage unseen tasks with new actions and still learn the underlying latent structure to derive a near-optimal policy. We validate this over several experiments to show that our proposed solution is very general and has wide applications to potentially emergent online and offline strategies at test time. Finally, we theoretically analyze the performance of our algorithm and obtain generalization bounds in the in-context multi-task learning setting.
[ { "created": "Fri, 7 Jun 2024 16:34:31 GMT", "version": "v1" } ]
2024-06-10
[ [ "Mukherjee", "Subhojyoti", "" ], [ "Hanna", "Josiah P.", "" ], [ "Xie", "Qiaomin", "" ], [ "Nowak", "Robert", "" ] ]
In this paper, we study the multi-task structured bandit problem where the goal is to learn a near-optimal algorithm that minimizes cumulative regret. The tasks share a common structure and the algorithm exploits the shared structure to minimize the cumulative regret for an unseen but related test task. We use a transformer as a decision-making algorithm to learn this shared structure so as to generalize to the test task. Prior work on pretrained decision transformers, such as DPT, requires access to the optimal action during training, which may be hard to obtain in several scenarios. Diverging from these works, our learning algorithm does not need knowledge of the optimal action per task during training but predicts a reward vector for each of the actions using only the observed offline data from the diverse training tasks. Finally, at inference time, it selects actions using the reward predictions, employing various exploration strategies in-context for an unseen test task. Our model outperforms other SOTA methods like DPT and Algorithmic Distillation over a series of experiments on several structured bandit problems (linear, bilinear, latent, non-linear). Interestingly, we show that our algorithm, without knowledge of the underlying problem structure, can learn a near-optimal policy in-context by leveraging the shared structure across diverse tasks. We further extend the field of pretrained decision transformers by showing that they can leverage unseen tasks with new actions and still learn the underlying latent structure to derive a near-optimal policy. We validate this over several experiments to show that our proposed solution is very general and has wide applications to potentially emergent online and offline strategies at test time. Finally, we theoretically analyze the performance of our algorithm and obtain generalization bounds in the in-context multi-task learning setting.
2309.10289
Aocheng Shen
Zhiyi Huang, Hanrui Jiang, Aocheng Shen, Junkai Song, Zhiang Wu, Qiankun Zhang
Online Matching with Stochastic Rewards: Advanced Analyses Using Configuration Linear Programs
null
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Mehta and Panigrahi (2012) proposed Online Matching with Stochastic Rewards, which generalizes the Online Bipartite Matching problem of Karp, Vazirani, and Vazirani (1990) by associating the edges with success probabilities. This new feature captures the pay-per-click model in online advertising. Recently, Huang and Zhang (2020) studied this problem under the online primal dual framework using the Configuration Linear Program (LP), and obtained the best known competitive ratios of the Stochastic Balance algorithm. Their work suggests that the more expressive Configuration LP is more suitable for this problem than the Matching LP. This paper advances the theory of Configuration LP in two directions. First, our technical contribution includes a characterization of the joint matching outcome of an offline vertex and \emph{all its neighbors}. This characterization may be of independent interest, and is aligned with the spirit of Configuration LP. By contrast, previous analyses of Ranking generally focus on only one neighbor. Second, we design a Stochastic Configuration LP that captures a stochastic benchmark proposed by Goyal and Udwani (2020), who used a Path-based LP. The Stochastic Configuration LP is smaller and simpler than the Path-based LP. Moreover, using the new LP we improve the competitive ratio of Stochastic Balance from $0.596$ to $0.611$ when the success probabilities are infinitesimal, and to $0.613$ when the success probabilities are further equal.
[ { "created": "Tue, 19 Sep 2023 03:35:06 GMT", "version": "v1" } ]
2023-09-20
[ [ "Huang", "Zhiyi", "" ], [ "Jiang", "Hanrui", "" ], [ "Shen", "Aocheng", "" ], [ "Song", "Junkai", "" ], [ "Wu", "Zhiang", "" ], [ "Zhang", "Qiankun", "" ] ]
Mehta and Panigrahi (2012) proposed Online Matching with Stochastic Rewards, which generalizes the Online Bipartite Matching problem of Karp, Vazirani, and Vazirani (1990) by associating the edges with success probabilities. This new feature captures the pay-per-click model in online advertising. Recently, Huang and Zhang (2020) studied this problem under the online primal dual framework using the Configuration Linear Program (LP), and obtained the best known competitive ratios of the Stochastic Balance algorithm. Their work suggests that the more expressive Configuration LP is more suitable for this problem than the Matching LP. This paper advances the theory of Configuration LP in two directions. First, our technical contribution includes a characterization of the joint matching outcome of an offline vertex and \emph{all its neighbors}. This characterization may be of independent interest, and is aligned with the spirit of Configuration LP. By contrast, previous analyses of Ranking generally focus on only one neighbor. Second, we design a Stochastic Configuration LP that captures a stochastic benchmark proposed by Goyal and Udwani (2020), who used a Path-based LP. The Stochastic Configuration LP is smaller and simpler than the Path-based LP. Moreover, using the new LP we improve the competitive ratio of Stochastic Balance from $0.596$ to $0.611$ when the success probabilities are infinitesimal, and to $0.613$ when the success probabilities are further equal.
2009.01456
Minhyuk Sung
Minhyuk Sung, Zhenyu Jiang, Panos Achlioptas, Niloy J. Mitra, Leonidas J. Guibas
DeformSyncNet: Deformation Transfer via Synchronized Shape Deformation Spaces
SIGGRAPH Asia 2020
null
null
null
cs.GR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Shape deformation is an important component in any geometry processing toolbox. The goal is to enable intuitive deformations of single or multiple shapes or to transfer example deformations to new shapes while preserving the plausibility of the deformed shape(s). Existing approaches assume access to point-level or part-level correspondence or establish them in a preprocessing phase, thus limiting the scope and generality of such approaches. We propose DeformSyncNet, a new approach that allows consistent and synchronized shape deformations without requiring explicit correspondence information. Technically, we achieve this by encoding deformations into a class-specific idealized latent space while decoding them into an individual, model-specific linear deformation action space, operating directly in 3D. The underlying encoding and decoding are performed by specialized (jointly trained) neural networks. By design, the inductive bias of our networks results in a deformation space with several desirable properties, such as path invariance across different deformation pathways, which are then also approximately preserved in real space. We qualitatively and quantitatively evaluate our framework against multiple alternative approaches and demonstrate improved performance.
[ { "created": "Thu, 3 Sep 2020 05:26:32 GMT", "version": "v1" } ]
2020-09-04
[ [ "Sung", "Minhyuk", "" ], [ "Jiang", "Zhenyu", "" ], [ "Achlioptas", "Panos", "" ], [ "Mitra", "Niloy J.", "" ], [ "Guibas", "Leonidas J.", "" ] ]
Shape deformation is an important component in any geometry processing toolbox. The goal is to enable intuitive deformations of single or multiple shapes or to transfer example deformations to new shapes while preserving the plausibility of the deformed shape(s). Existing approaches assume access to point-level or part-level correspondence or establish them in a preprocessing phase, thus limiting the scope and generality of such approaches. We propose DeformSyncNet, a new approach that allows consistent and synchronized shape deformations without requiring explicit correspondence information. Technically, we achieve this by encoding deformations into a class-specific idealized latent space while decoding them into an individual, model-specific linear deformation action space, operating directly in 3D. The underlying encoding and decoding are performed by specialized (jointly trained) neural networks. By design, the inductive bias of our networks results in a deformation space with several desirable properties, such as path invariance across different deformation pathways, which are then also approximately preserved in real space. We qualitatively and quantitatively evaluate our framework against multiple alternative approaches and demonstrate improved performance.
1904.04090
Thorsten Wissmann
J. Leroux and M. Praveen and Ph. Schnoebelen and G. Sutre
On Functions Weakly Computable by Pushdown Petri Nets and Related Systems
null
Logical Methods in Computer Science, Volume 15, Issue 4 (December 18, 2019) lmcs:5362
10.23638/LMCS-15(4:15)2019
null
cs.FL cs.LO
http://creativecommons.org/licenses/by/4.0/
We consider numerical functions weakly computable by grammar-controlled vector addition systems (GVASes, a variant of pushdown Petri nets). GVASes can weakly compute all fast growing functions $F_\alpha$ for $\alpha<\omega^\omega$, hence they are computationally more powerful than standard vector addition systems. On the other hand they cannot weakly compute the inverses $F_\alpha^{-1}$ or indeed any sublinear function. The proof relies on a pumping lemma for runs of GVASes that is of independent interest.
[ { "created": "Mon, 8 Apr 2019 14:27:18 GMT", "version": "v1" }, { "created": "Mon, 25 Nov 2019 17:09:00 GMT", "version": "v2" }, { "created": "Tue, 17 Dec 2019 09:51:33 GMT", "version": "v3" } ]
2023-06-22
[ [ "Leroux", "J.", "" ], [ "Praveen", "M.", "" ], [ "Schnoebelen", "Ph.", "" ], [ "Sutre", "G.", "" ] ]
We consider numerical functions weakly computable by grammar-controlled vector addition systems (GVASes, a variant of pushdown Petri nets). GVASes can weakly compute all fast growing functions $F_\alpha$ for $\alpha<\omega^\omega$, hence they are computationally more powerful than standard vector addition systems. On the other hand they cannot weakly compute the inverses $F_\alpha^{-1}$ or indeed any sublinear function. The proof relies on a pumping lemma for runs of GVASes that is of independent interest.
2408.02193
Weijie Lv
Weijie Lv, Xuan Xia and Sheng-Jun Huang
CodeACT: Code Adaptive Compute-efficient Tuning Framework for Code LLMs
null
null
null
null
cs.CL cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Large language models (LLMs) have shown great potential in code-related tasks, yet open-source models lag behind their closed-source counterparts. To bridge this performance gap, existing methods generate vast amounts of synthetic data for fine-tuning, leading to inefficiencies in training. Motivated by the need for more effective and efficient training, we propose the Code Adaptive Compute-efficient Tuning (CodeACT) framework. CodeACT introduces the Complexity and Diversity Aware Sampling (CDAS) method to select high-quality training data based on complexity and diversity, and the Dynamic Pack padding strategy to reduce computational resource usage by minimizing padding tokens during training. Experimental results demonstrate that CodeACT-DeepSeek-Coder-6.7B, fine-tuned on only 40% of the EVOL-Instruct data, achieves an 8.6% performance increase on HumanEval, reduces training time by 78%, and decreases peak GPU memory usage by 27%. These findings underscore CodeACT's ability to enhance the performance and efficiency of open-source models. By optimizing both the data selection and training processes, CodeACT offers a comprehensive approach to improving the capabilities of open-source LLMs while significantly reducing computational requirements, addressing the dual challenges of data quality and training efficiency, and paving the way for more resource-efficient and performant models.
[ { "created": "Mon, 5 Aug 2024 02:38:48 GMT", "version": "v1" } ]
2024-08-06
[ [ "Lv", "Weijie", "" ], [ "Xia", "Xuan", "" ], [ "Huang", "Sheng-Jun", "" ] ]
Large language models (LLMs) have shown great potential in code-related tasks, yet open-source models lag behind their closed-source counterparts. To bridge this performance gap, existing methods generate vast amounts of synthetic data for fine-tuning, leading to inefficiencies in training. Motivated by the need for more effective and efficient training, we propose the Code Adaptive Compute-efficient Tuning (CodeACT) framework. CodeACT introduces the Complexity and Diversity Aware Sampling (CDAS) method to select high-quality training data based on complexity and diversity, and the Dynamic Pack padding strategy to reduce computational resource usage by minimizing padding tokens during training. Experimental results demonstrate that CodeACT-DeepSeek-Coder-6.7B, fine-tuned on only 40% of the EVOL-Instruct data, achieves an 8.6% performance increase on HumanEval, reduces training time by 78%, and decreases peak GPU memory usage by 27%. These findings underscore CodeACT's ability to enhance the performance and efficiency of open-source models. By optimizing both the data selection and training processes, CodeACT offers a comprehensive approach to improving the capabilities of open-source LLMs while significantly reducing computational requirements, addressing the dual challenges of data quality and training efficiency, and paving the way for more resource-efficient and performant models.
2209.10414
Van Nguyen
Van Nguyen, Trung Le, Chakkrit Tantithamthavorn, Michael Fu, John Grundy, Hung Nguyen, Seyit Camtepe, Paul Quirk and Dinh Phung
Statement-Level Vulnerability Detection: Learning Vulnerability Patterns Through Information Theory and Contrastive Learning
null
null
null
null
cs.CR cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
Software vulnerabilities are a serious and crucial concern. Typically, in a program or function consisting of hundreds or thousands of source code statements, there are only a few statements causing the corresponding vulnerabilities. Most current approaches to vulnerability labelling are done on a function or program level by experts with the assistance of machine learning tools. Extending this approach to the code statement level is much more costly and time-consuming and remains an open problem. In this paper, we propose a novel end-to-end deep learning-based approach to identify the vulnerability-relevant code statements of a specific function. Inspired by the specific structures observed in real-world vulnerable code, we first leverage mutual information for learning a set of latent variables representing the relevance of the source code statements to the corresponding function's vulnerability. We then propose novel clustered spatial contrastive learning in order to further improve the representation learning and the robust selection process of vulnerability-relevant code statements. Experimental results on real-world datasets of 200k+ C/C++ functions show the superiority of our method over other state-of-the-art baselines. In general, our method obtains a higher performance in VCP, VCA, and Top-10 ACC measures of between 3% to 14% over the baselines when running on real-world datasets in an unsupervised setting. Our released source code samples are publicly available at \href{https://github.com/vannguyennd/livuitcl}{https://github.com/vannguyennd/livuitcl.}
[ { "created": "Tue, 20 Sep 2022 00:46:20 GMT", "version": "v1" }, { "created": "Wed, 12 Jun 2024 03:52:41 GMT", "version": "v2" } ]
2024-06-13
[ [ "Nguyen", "Van", "" ], [ "Le", "Trung", "" ], [ "Tantithamthavorn", "Chakkrit", "" ], [ "Fu", "Michael", "" ], [ "Grundy", "John", "" ], [ "Nguyen", "Hung", "" ], [ "Camtepe", "Seyit", "" ], [ "Quirk", "Paul", "" ], [ "Phung", "Dinh", "" ] ]
Software vulnerabilities are a serious and crucial concern. Typically, in a program or function consisting of hundreds or thousands of source code statements, there are only a few statements causing the corresponding vulnerabilities. Most current approaches to vulnerability labelling are done on a function or program level by experts with the assistance of machine learning tools. Extending this approach to the code statement level is much more costly and time-consuming and remains an open problem. In this paper, we propose a novel end-to-end deep learning-based approach to identify the vulnerability-relevant code statements of a specific function. Inspired by the specific structures observed in real-world vulnerable code, we first leverage mutual information for learning a set of latent variables representing the relevance of the source code statements to the corresponding function's vulnerability. We then propose novel clustered spatial contrastive learning in order to further improve the representation learning and the robust selection process of vulnerability-relevant code statements. Experimental results on real-world datasets of 200k+ C/C++ functions show the superiority of our method over other state-of-the-art baselines. In general, our method obtains a higher performance in VCP, VCA, and Top-10 ACC measures of between 3% to 14% over the baselines when running on real-world datasets in an unsupervised setting. Our released source code samples are publicly available at \href{https://github.com/vannguyennd/livuitcl}{https://github.com/vannguyennd/livuitcl.}
2407.19813
Yuan Xia
Yuan Xia, Jingbo Zhou, Zhenhui Shi, Jun Chen, Haifeng Huang
Improving Retrieval Augmented Language Model with Self-Reasoning
null
null
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by/4.0/
The Retrieval-Augmented Language Model (RALM) has shown remarkable performance on knowledge-intensive tasks by incorporating external knowledge during inference, which mitigates the factual hallucinations inherent in large language models (LLMs). Despite these advancements, challenges persist in the implementation of RALMs, particularly concerning their reliability and traceability. To be specific, irrelevant document retrieval may result in unhelpful response generation or even deteriorate the performance of LLMs, while the lack of proper citations in generated outputs complicates efforts to verify the trustworthiness of the models. To this end, we propose a novel self-reasoning framework aimed at improving the reliability and traceability of RALMs, whose core idea is to leverage reasoning trajectories generated by the LLM itself. The framework involves constructing self-reason trajectories with three processes: a relevance-aware process, an evidence-aware selective process, and a trajectory analysis process. We have evaluated our framework across four public datasets (two short-form QA datasets, one long-form QA dataset, and one fact verification dataset) to demonstrate the superiority of our method, which can outperform existing state-of-the-art models and can achieve comparable performance with GPT-4, while only using 2,000 training samples.
[ { "created": "Mon, 29 Jul 2024 09:05:10 GMT", "version": "v1" }, { "created": "Fri, 2 Aug 2024 12:11:17 GMT", "version": "v2" } ]
2024-08-05
[ [ "Xia", "Yuan", "" ], [ "Zhou", "Jingbo", "" ], [ "Shi", "Zhenhui", "" ], [ "Chen", "Jun", "" ], [ "Huang", "Haifeng", "" ] ]
The Retrieval-Augmented Language Model (RALM) has shown remarkable performance on knowledge-intensive tasks by incorporating external knowledge during inference, which mitigates the factual hallucinations inherent in large language models (LLMs). Despite these advancements, challenges persist in the implementation of RALMs, particularly concerning their reliability and traceability. To be specific, irrelevant document retrieval may result in unhelpful response generation or even deteriorate the performance of LLMs, while the lack of proper citations in generated outputs complicates efforts to verify the trustworthiness of the models. To this end, we propose a novel self-reasoning framework aimed at improving the reliability and traceability of RALMs, whose core idea is to leverage reasoning trajectories generated by the LLM itself. The framework involves constructing self-reason trajectories with three processes: a relevance-aware process, an evidence-aware selective process, and a trajectory analysis process. We have evaluated our framework across four public datasets (two short-form QA datasets, one long-form QA dataset, and one fact verification dataset) to demonstrate the superiority of our method, which can outperform existing state-of-the-art models and can achieve comparable performance with GPT-4, while only using 2,000 training samples.
1602.01890
Archith Bency
Archith J. Bency, S. Karthikeyan, Carter De Leo, Santhoshkumar Sunderrajan and B. S. Manjunath
Search Tracker: Human-derived object tracking in-the-wild through large-scale search and retrieval
Under review with the IEEE Transactions on Circuits and Systems for Video Technology
null
10.1109/TCSVT.2016.2555718
null
cs.CV cs.MM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Humans use context and scene knowledge to easily localize moving objects in conditions of complex illumination changes, scene clutter and occlusions. In this paper, we present a method to leverage human knowledge in the form of annotated video libraries in a novel search and retrieval based setting to track objects in unseen video sequences. For every video sequence, a document that represents motion information is generated. Documents of the unseen video are queried against the library at multiple scales to find videos with similar motion characteristics. This provides us with coarse localization of objects in the unseen video. We further adapt these retrieved object locations to the new video using an efficient warping scheme. The proposed method is validated on in-the-wild video surveillance datasets where we outperform state-of-the-art appearance-based trackers. We also introduce a new challenging dataset with complex object appearance changes.
[ { "created": "Fri, 5 Feb 2016 00:01:13 GMT", "version": "v1" } ]
2016-04-20
[ [ "Bency", "Archith J.", "" ], [ "Karthikeyan", "S.", "" ], [ "De Leo", "Carter", "" ], [ "Sunderrajan", "Santhoshkumar", "" ], [ "Manjunath", "B. S.", "" ] ]
Humans use context and scene knowledge to easily localize moving objects in conditions of complex illumination changes, scene clutter and occlusions. In this paper, we present a method to leverage human knowledge in the form of annotated video libraries in a novel search and retrieval based setting to track objects in unseen video sequences. For every video sequence, a document that represents motion information is generated. Documents of the unseen video are queried against the library at multiple scales to find videos with similar motion characteristics. This provides us with coarse localization of objects in the unseen video. We further adapt these retrieved object locations to the new video using an efficient warping scheme. The proposed method is validated on in-the-wild video surveillance datasets where we outperform state-of-the-art appearance-based trackers. We also introduce a new challenging dataset with complex object appearance changes.
1811.04319
Ronen Tamari
Ronen Tamari, Hiroyuki Shindo, Dafna Shahaf, Yuji Matsumoto
Playing by the Book: An Interactive Game Approach for Action Graph Extraction from Text
Accepted to NAACL 2019 ESSP workshop (https://scientific-knowledge.github.io/)
null
null
null
cs.LG cs.CL stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Understanding procedural text requires tracking entities, actions and effects as the narrative unfolds. We focus on the challenging real-world problem of action-graph extraction from material science papers, where language is highly specialized and data annotation is expensive and scarce. We propose a novel approach, Text2Quest, where procedural text is interpreted as instructions for an interactive game. A learning agent completes the game by executing the procedure correctly in a text-based simulated lab environment. The framework can complement existing approaches and enables richer forms of learning compared to static texts. We discuss potential limitations and advantages of the approach, and release a prototype proof-of-concept, hoping to encourage research in this direction.
[ { "created": "Sat, 10 Nov 2018 21:45:07 GMT", "version": "v1" }, { "created": "Sat, 22 Dec 2018 16:59:00 GMT", "version": "v2" }, { "created": "Sat, 6 Apr 2019 19:19:05 GMT", "version": "v3" } ]
2019-04-09
[ [ "Tamari", "Ronen", "" ], [ "Shindo", "Hiroyuki", "" ], [ "Shahaf", "Dafna", "" ], [ "Matsumoto", "Yuji", "" ] ]
Understanding procedural text requires tracking entities, actions and effects as the narrative unfolds. We focus on the challenging real-world problem of action-graph extraction from material science papers, where language is highly specialized and data annotation is expensive and scarce. We propose a novel approach, Text2Quest, where procedural text is interpreted as instructions for an interactive game. A learning agent completes the game by executing the procedure correctly in a text-based simulated lab environment. The framework can complement existing approaches and enables richer forms of learning compared to static texts. We discuss potential limitations and advantages of the approach, and release a prototype proof-of-concept, hoping to encourage research in this direction.
cs/0303009
Tomi Janhunen
T. Janhunen, I. Niemela, D. Seipel, P. Simons, J. You
Unfolding Partiality and Disjunctions in Stable Model Semantics
49 pages, 4 figures, 1 table
null
null
null
cs.AI
null
The paper studies an implementation methodology for partial and disjunctive stable models where partiality and disjunctions are unfolded from a logic program so that an implementation of stable models for normal (disjunction-free) programs can be used as the core inference engine. The unfolding is done in two separate steps. Firstly, it is shown that partial stable models can be captured by total stable models using a simple linear and modular program transformation. Hence, reasoning tasks concerning partial stable models can be solved using an implementation of total stable models. Disjunctive partial stable models have been lacking implementations which now become available as the translation handles also the disjunctive case. Secondly, it is shown how total stable models of disjunctive programs can be determined by computing stable models for normal programs. Hence, an implementation of stable models of normal programs can be used as a core engine for implementing disjunctive programs. The feasibility of the approach is demonstrated by constructing a system for computing stable models of disjunctive programs using the smodels system as the core engine. The performance of the resulting system is compared to that of dlv which is a state-of-the-art special purpose system for disjunctive programs.
[ { "created": "Fri, 14 Mar 2003 14:29:32 GMT", "version": "v1" }, { "created": "Fri, 2 Jan 2004 14:27:37 GMT", "version": "v2" } ]
2007-05-23
[ [ "Janhunen", "T.", "" ], [ "Niemela", "I.", "" ], [ "Seipel", "D.", "" ], [ "Simons", "P.", "" ], [ "You", "J.", "" ] ]
The paper studies an implementation methodology for partial and disjunctive stable models where partiality and disjunctions are unfolded from a logic program so that an implementation of stable models for normal (disjunction-free) programs can be used as the core inference engine. The unfolding is done in two separate steps. Firstly, it is shown that partial stable models can be captured by total stable models using a simple linear and modular program transformation. Hence, reasoning tasks concerning partial stable models can be solved using an implementation of total stable models. Disjunctive partial stable models have been lacking implementations which now become available as the translation handles also the disjunctive case. Secondly, it is shown how total stable models of disjunctive programs can be determined by computing stable models for normal programs. Hence, an implementation of stable models of normal programs can be used as a core engine for implementing disjunctive programs. The feasibility of the approach is demonstrated by constructing a system for computing stable models of disjunctive programs using the smodels system as the core engine. The performance of the resulting system is compared to that of dlv which is a state-of-the-art special purpose system for disjunctive programs.
2004.08552
Xulei Yang
Gabriel Tjio, Xulei Yang, Jia Mei Hong, Sum Thai Wong, Vanessa Ding, Andre Choo and Yi Su
Accurate Tumor Tissue Region Detection with Accelerated Deep Convolutional Neural Networks
9 pages, 6 figures, 3 tables
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Manual annotation of pathology slides for cancer diagnosis is laborious and repetitive. Therefore, much effort has been devoted to developing computer vision solutions. Our approach, FLASH, is based on a Deep Convolutional Neural Network (DCNN) architecture. It reduces computational costs and is faster than typical deep learning approaches by two orders of magnitude, making high throughput processing a possibility. In computer vision approaches using deep learning methods, the input image is subdivided into patches which are separately passed through the neural network. Features extracted from these patches are used by the classifier to annotate the corresponding region. Our approach aggregates all the extracted features into a single matrix before passing them to the classifier. Previously, the features were extracted from overlapping patches. Aggregating the features eliminates the need for processing overlapping patches, which reduces the computations required. DCNN and FLASH demonstrate high sensitivity (~0.96), good precision (~0.78) and high F1 scores (~0.84). The average time taken to process each sample for FLASH and DCNN is 96.6 seconds and 9489.20 seconds, respectively. Our approach was approximately 100 times faster than the original DCNN approach while simultaneously preserving high accuracy and precision.
[ { "created": "Sat, 18 Apr 2020 08:24:27 GMT", "version": "v1" } ]
2020-04-21
[ [ "Tjio", "Gabriel", "" ], [ "Yang", "Xulei", "" ], [ "Hong", "Jia Mei", "" ], [ "Wong", "Sum Thai", "" ], [ "Ding", "Vanessa", "" ], [ "Choo", "Andre", "" ], [ "Su", "Yi", "" ] ]
Manual annotation of pathology slides for cancer diagnosis is laborious and repetitive. Therefore, much effort has been devoted to developing computer vision solutions. Our approach, FLASH, is based on a Deep Convolutional Neural Network (DCNN) architecture. It reduces computational costs and is faster than typical deep learning approaches by two orders of magnitude, making high throughput processing a possibility. In computer vision approaches using deep learning methods, the input image is subdivided into patches which are separately passed through the neural network. Features extracted from these patches are used by the classifier to annotate the corresponding region. Our approach aggregates all the extracted features into a single matrix before passing them to the classifier. Previously, the features were extracted from overlapping patches. Aggregating the features eliminates the need for processing overlapping patches, which reduces the computations required. DCNN and FLASH demonstrate high sensitivity (~0.96), good precision (~0.78) and high F1 scores (~0.84). The average time taken to process each sample for FLASH and DCNN is 96.6 seconds and 9489.20 seconds, respectively. Our approach was approximately 100 times faster than the original DCNN approach while simultaneously preserving high accuracy and precision.
2312.08212
Jingsheng Gao
Jingsheng Gao, Jiacheng Ruan, Suncheng Xiang, Zefang Yu, Ke Ji, Mingye Xie, Ting Liu, Yuzhuo Fu
LAMM: Label Alignment for Multi-Modal Prompt Learning
Accepted at AAAI 2024 Main Conference
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
With the success of pre-trained visual-language (VL) models such as CLIP in visual representation tasks, transferring pre-trained models to downstream tasks has become a crucial paradigm. Recently, the prompt tuning paradigm, which draws inspiration from natural language processing (NLP), has made significant progress in the VL field. However, preceding methods mainly focus on constructing prompt templates for text and visual inputs, neglecting the gap in class label representations between the VL models and downstream tasks. To address this challenge, we introduce an innovative label alignment method named \textbf{LAMM}, which can dynamically adjust the category embeddings of downstream datasets through end-to-end training. Moreover, to achieve a more appropriate label distribution, we propose a hierarchical loss, encompassing the alignment of the parameter space, feature space, and logits space. We conduct experiments on 11 downstream vision datasets and demonstrate that our method significantly improves the performance of existing multi-modal prompt learning models in few-shot scenarios, exhibiting an average accuracy improvement of 2.31\% compared to the state-of-the-art methods on 16 shots. Moreover, our methodology exhibits preeminence in continual learning compared to other prompt tuning methods. Importantly, our method is synergistic with existing prompt tuning methods and can boost the performance on top of them. Our code and dataset will be publicly available at https://github.com/gaojingsheng/LAMM.
[ { "created": "Wed, 13 Dec 2023 15:29:52 GMT", "version": "v1" } ]
2023-12-14
[ [ "Gao", "Jingsheng", "" ], [ "Ruan", "Jiacheng", "" ], [ "Xiang", "Suncheng", "" ], [ "Yu", "Zefang", "" ], [ "Ji", "Ke", "" ], [ "Xie", "Mingye", "" ], [ "Liu", "Ting", "" ], [ "Fu", "Yuzhuo", "" ] ]
With the success of pre-trained visual-language (VL) models such as CLIP in visual representation tasks, transferring pre-trained models to downstream tasks has become a crucial paradigm. Recently, the prompt tuning paradigm, which draws inspiration from natural language processing (NLP), has made significant progress in the VL field. However, preceding methods mainly focus on constructing prompt templates for text and visual inputs, neglecting the gap in class label representations between the VL models and downstream tasks. To address this challenge, we introduce an innovative label alignment method named \textbf{LAMM}, which can dynamically adjust the category embeddings of downstream datasets through end-to-end training. Moreover, to achieve a more appropriate label distribution, we propose a hierarchical loss, encompassing the alignment of the parameter space, feature space, and logits space. We conduct experiments on 11 downstream vision datasets and demonstrate that our method significantly improves the performance of existing multi-modal prompt learning models in few-shot scenarios, exhibiting an average accuracy improvement of 2.31\% compared to the state-of-the-art methods on 16 shots. Moreover, our methodology exhibits preeminence in continual learning compared to other prompt tuning methods. Importantly, our method is synergistic with existing prompt tuning methods and can boost the performance on top of them. Our code and dataset will be publicly available at https://github.com/gaojingsheng/LAMM.
1709.08350
Di Zhuang
Di Zhuang, J. Morris Chang, Mingchen Li
DynaMo: Dynamic Community Detection by Incrementally Maximizing Modularity
14 pages, 6 figures, 2 tables, Accepted for IEEE Transactions on Knowledge and Data Engineering
null
10.1109/TKDE.2019.2951419
null
cs.SI cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Community detection is of great importance for online social network analysis. The volume, variety and velocity of data generated by today's online social networks are advancing the way researchers analyze those networks. For instance, real-world networks, such as Facebook, LinkedIn and Twitter, are inherently growing rapidly and expanding aggressively over time. However, most studies so far have focused on detecting communities in static networks. It is computationally expensive to repeatedly apply a well-studied static algorithm to the snapshots of a dynamic network. We propose DynaMo, a novel modularity-based dynamic community detection algorithm that aims to detect communities in dynamic networks as effectively as repeatedly applying static algorithms, but in a more efficient way. DynaMo is an adaptive, incremental algorithm designed to incrementally maximize the modularity gain while updating the community structure of dynamic networks. In the experimental evaluation, a comprehensive comparison has been made among DynaMo, Louvain (static) and 5 other dynamic algorithms. Extensive experiments have been conducted on 6 real-world networks and 10,000 synthetic networks. Our results show that DynaMo outperforms all 5 other dynamic algorithms in terms of effectiveness, and is 2 to 5 times faster on average than the Louvain algorithm.
[ { "created": "Mon, 25 Sep 2017 07:11:44 GMT", "version": "v1" }, { "created": "Mon, 20 May 2019 14:10:50 GMT", "version": "v2" }, { "created": "Sat, 9 Nov 2019 20:10:02 GMT", "version": "v3" } ]
2019-11-12
[ [ "Zhuang", "Di", "" ], [ "Chang", "J. Morris", "" ], [ "Li", "Mingchen", "" ] ]
Community detection is of great importance for online social network analysis. The volume, variety and velocity of data generated by today's online social networks are advancing the way researchers analyze those networks. For instance, real-world networks, such as Facebook, LinkedIn and Twitter, are inherently growing rapidly and expanding aggressively over time. However, most studies so far have focused on detecting communities in static networks. It is computationally expensive to repeatedly apply a well-studied static algorithm to the snapshots of a dynamic network. We propose DynaMo, a novel modularity-based dynamic community detection algorithm that aims to detect communities in dynamic networks as effectively as repeatedly applying static algorithms, but in a more efficient way. DynaMo is an adaptive, incremental algorithm designed to incrementally maximize the modularity gain while updating the community structure of dynamic networks. In the experimental evaluation, a comprehensive comparison has been made among DynaMo, Louvain (static) and 5 other dynamic algorithms. Extensive experiments have been conducted on 6 real-world networks and 10,000 synthetic networks. Our results show that DynaMo outperforms all 5 other dynamic algorithms in terms of effectiveness, and is 2 to 5 times faster on average than the Louvain algorithm.
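The modularity gain that DynaMo-style methods maximize has a standard closed form for the simplified case of moving an isolated node into a community (as in Louvain-style optimization). A minimal sketch; the function and variable names are illustrative, and the paper's full incremental bookkeeping is more involved:

```python
def modularity_gain(k_i_in, k_i, sigma_tot, m):
    """Modularity gain of moving an isolated node i into a community C.

    k_i_in    -- sum of edge weights between node i and members of C
    k_i       -- total (weighted) degree of node i
    sigma_tot -- sum of degrees of the nodes already in C
    m         -- total edge weight of the whole graph
    Derived from Q = sum_c [S_in/(2m) - (S_tot/(2m))^2]; the constant
    terms cancel, leaving k_i_in/m - sigma_tot*k_i/(2*m^2).
    """
    return k_i_in / m - (sigma_tot * k_i) / (2.0 * m ** 2)
```

For a triangle graph with community {a, b} and candidate node c (two edges into the community), the gain is 2/3 - 8/18 = 2/9, so merging c is beneficial.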
1809.10134
Philip Fong
Juan Carlos Fuentes Carranza, Philip W. L. Fong
Brokering Policies and Execution Monitors for IoT Middleware
null
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Event-based systems lie at the heart of many cloud-based Internet-of-Things (IoT) platforms. This combination of the Broker architectural style and the Publisher-Subscriber design pattern provides a way for smart devices to communicate and coordinate with one another. The present design of these cloud-based IoT frameworks lacks measures to (i) protect devices against malicious cloud disconnections, (ii) impose information flow control among communicating parties, and (iii) enforce coordination protocols in the presence of compromised devices. In this work, we propose to extend the modular event-based system architecture of Fiege et al., to incorporate brokering policies and execution monitors, in order to address the three protection challenges mentioned above. We formalized the operational semantics of our protection scheme, explored how the scheme can be used to enforce BLP-style information flow control and RBAC-style protection domains, implemented the proposal in an open-source MQTT broker, and evaluated the performance impact of the protection mechanisms.
[ { "created": "Wed, 26 Sep 2018 17:38:58 GMT", "version": "v1" }, { "created": "Thu, 27 Sep 2018 17:44:47 GMT", "version": "v2" } ]
2018-09-28
[ [ "Carranza", "Juan Carlos Fuentes", "" ], [ "Fong", "Philip W. L.", "" ] ]
Event-based systems lie at the heart of many cloud-based Internet-of-Things (IoT) platforms. This combination of the Broker architectural style and the Publisher-Subscriber design pattern provides a way for smart devices to communicate and coordinate with one another. The present design of these cloud-based IoT frameworks lacks measures to (i) protect devices against malicious cloud disconnections, (ii) impose information flow control among communicating parties, and (iii) enforce coordination protocols in the presence of compromised devices. In this work, we propose to extend the modular event-based system architecture of Fiege et al., to incorporate brokering policies and execution monitors, in order to address the three protection challenges mentioned above. We formalized the operational semantics of our protection scheme, explored how the scheme can be used to enforce BLP-style information flow control and RBAC-style protection domains, implemented the proposal in an open-source MQTT broker, and evaluated the performance impact of the protection mechanisms.
1108.2157
Rajeev Raman
Alexander Golynski and Alessio Orlandi and Rajeev Raman and S. Srinivasa Rao
Optimal Indexes for Sparse Bit Vectors
Some of these results were published in preliminary form in the proceedings of SWAT 2008. There are new upper bounds not in the SWAT version, however
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider the problem of supporting Rank() and Select() operations on a bit vector of length m containing n 1-bits. The problem is considered in the succinct index model, where the bit vector is stored in "read-only" memory and an additional data structure, called the index, is created during pre-processing to help answer the above queries. We give asymptotically optimal density-sensitive trade-offs, involving both m and n, that relate the size of the index to the number of accesses to the bit vector (and processing time) needed to answer the above queries. The results are particularly interesting for the case where n = o(m).
[ { "created": "Wed, 10 Aug 2011 11:36:22 GMT", "version": "v1" } ]
2015-03-19
[ [ "Golynski", "Alexander", "" ], [ "Orlandi", "Alessio", "" ], [ "Raman", "Rajeev", "" ], [ "Rao", "S. Srinivasa", "" ] ]
We consider the problem of supporting Rank() and Select() operations on a bit vector of length m containing n 1-bits. The problem is considered in the succinct index model, where the bit vector is stored in "read-only" memory and an additional data structure, called the index, is created during pre-processing to help answer the above queries. We give asymptotically optimal density-sensitive trade-offs, involving both m and n, that relate the size of the index to the number of accesses to the bit vector (and processing time) needed to answer the above queries. The results are particularly interesting for the case where n = o(m).
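To fix the semantics of the two queries the abstract studies: Rank(i) counts 1-bits in positions [0, i) and Select(j) returns the position of the j-th 1-bit. The sketch below is deliberately non-succinct (it stores full auxiliary arrays, Theta(m log m) bits); a real succinct index in the paper's model keeps only o(m)-bit samples and probes the read-only bit vector for the rest:

```python
class BitVectorIndex:
    """Naive rank/select index over a bit vector (tuple of 0/1 ints).

    rank(i)  : number of 1s in positions [0, i)
    select(j): position of the j-th 1 (1-based)
    Clarity over space; not the paper's o(m)-bit construction.
    """
    def __init__(self, bits):
        self.bits = bits
        self.prefix = [0]                 # prefix[i] = popcount of bits[:i]
        for b in bits:
            self.prefix.append(self.prefix[-1] + b)
        self.ones = [i for i, b in enumerate(bits) if b]

    def rank(self, i):
        return self.prefix[i]

    def select(self, j):
        return self.ones[j - 1]
```

Both queries answer in O(1) time here; the trade-offs in the paper concern how small the index can be when some of this work is pushed back onto probes of the bit vector itself.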
2212.01855
Mahender Kumar
Mahender Kumar and Satish Chand
Pairing-Friendly Elliptic Curves: Revisited Taxonomy, Attacks and Security Concern
null
null
null
null
cs.CR
http://creativecommons.org/publicdomain/zero/1.0/
Major families of pairing-friendly elliptic curves, including BN, BLS12, BLS24, KSS16, and KSS18, have recently been shown to be vulnerable to number field sieve (NFS) attacks. Due to the recent attacks on discrete logs in F_(q^k), selecting such curves became relevant again. This paper revisited the topic of selecting pairing-friendly curves at different security levels. First, we expanded the classification given by Freeman et al. [1] by identifying new families that were not previously mentioned, such as a complete family with variable differentiation and new sparse families of curves. We discussed individual curves and a comprehensive framework for constructing parametric families. We estimated the security of, and assessed, families of pairing-friendly curves to discover families of curves better than BN, KSS, and BLS in terms of the required key size. We also evaluated the complexity of the optimal ate pairing, which has not been discussed before except by Barbulescu et al. [2]. We demonstrated that the recent tower number field sieve (TNFS) attack on pairings necessitates larger key sizes. We compared families of curves in the context of key size and selected suitable alternative curves.
[ { "created": "Sun, 4 Dec 2022 15:45:09 GMT", "version": "v1" } ]
2022-12-06
[ [ "Kumar", "Mahender", "" ], [ "Chand", "Satish", "" ] ]
Major families of pairing-friendly elliptic curves, including BN, BLS12, BLS24, KSS16, and KSS18, have recently been shown to be vulnerable to number field sieve (NFS) attacks. Due to the recent attacks on discrete logs in F_(q^k), selecting such curves became relevant again. This paper revisited the topic of selecting pairing-friendly curves at different security levels. First, we expanded the classification given by Freeman et al. [1] by identifying new families that were not previously mentioned, such as a complete family with variable differentiation and new sparse families of curves. We discussed individual curves and a comprehensive framework for constructing parametric families. We estimated the security of, and assessed, families of pairing-friendly curves to discover families of curves better than BN, KSS, and BLS in terms of the required key size. We also evaluated the complexity of the optimal ate pairing, which has not been discussed before except by Barbulescu et al. [2]. We demonstrated that the recent tower number field sieve (TNFS) attack on pairings necessitates larger key sizes. We compared families of curves in the context of key size and selected suitable alternative curves.
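The basic parameter behind the F_(q^k) discrete-log attacks mentioned above is the embedding degree: the smallest k with r | q^k - 1, which fixes the extension field that NFS/TNFS targets. A small generic helper (not code from the paper; the names are illustrative):

```python
def embedding_degree(q, r, k_max=100):
    """Smallest k >= 1 such that r divides q**k - 1, i.e. the embedding
    degree of a curve over F_q with a subgroup of prime order r.
    Returns None if no such k is found up to k_max."""
    acc = 1
    for k in range(1, k_max + 1):
        acc = (acc * q) % r          # acc = q**k mod r, computed iteratively
        if acc == 1:
            return k
    return None
```

Pairing-friendly families such as BN (k = 12) or BLS24 (k = 24) are engineered so this value is small enough for efficient pairings yet large enough that discrete logs in F_(q^k) stay hard, which is exactly the balance the TNFS attacks disturbed.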
1912.03488
Bhanu Garg Mr.
Bhanu Garg and Naresh Manwani
Robust Deep Ordinal Regression Under Label Noise
null
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Real-world data is often susceptible to label noise, which can limit the effectiveness of existing state-of-the-art algorithms for ordinal regression. Existing work on ordinal regression does not take label noise into account. We propose a theoretically grounded approach for class-conditional label noise in ordinal regression problems. We present a deep learning implementation of two commonly used loss functions for ordinal regression that is both 1) robust to label noise and 2) rank-consistent for a good ranking rule. We verify these properties of the algorithm empirically, showing robustness to label noise on real data as well as rank consistency. To the best of our knowledge, this is the first approach to robust ordinal regression models.
[ { "created": "Sat, 7 Dec 2019 10:39:45 GMT", "version": "v1" }, { "created": "Mon, 27 Jan 2020 17:47:59 GMT", "version": "v2" } ]
2020-01-28
[ [ "Garg", "Bhanu", "" ], [ "Manwani", "Naresh", "" ] ]
Real-world data is often susceptible to label noise, which can limit the effectiveness of existing state-of-the-art algorithms for ordinal regression. Existing work on ordinal regression does not take label noise into account. We propose a theoretically grounded approach for class-conditional label noise in ordinal regression problems. We present a deep learning implementation of two commonly used loss functions for ordinal regression that is both 1) robust to label noise and 2) rank-consistent for a good ranking rule. We verify these properties of the algorithm empirically, showing robustness to label noise on real data as well as rank consistency. To the best of our knowledge, this is the first approach to robust ordinal regression models.
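One widely used way to obtain the rank consistency the abstract refers to is to share a single score across the K-1 binary threshold tasks P(y > k), CORAL-style, so the predicted probabilities are monotone by construction. Whether the paper uses exactly this construction is not stated in the abstract, so the sketch below is an illustrative assumption:

```python
import math

def ordinal_probs(score, biases):
    """Rank-consistent ordinal outputs from a single shared score.

    P(y > k) = sigmoid(score + b_k) with the biases sorted in decreasing
    order; sharing one score across all K-1 binary tasks guarantees the
    probabilities are non-increasing in k (rank consistency).
    """
    sigmoid = lambda z: 1.0 / (1.0 + math.exp(-z))
    return [sigmoid(score + b) for b in sorted(biases, reverse=True)]
```

Any input score then yields a valid cumulative distribution over the ordinal labels; inconsistent threshold predictions (e.g. P(y > 2) exceeding P(y > 1)) cannot occur.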
2208.04994
Shijun Wang
Shijun Wang, Hamed Hemati, J\'on Gu{\dh}nason, Damian Borth
Generative Data Augmentation Guided by Triplet Loss for Speech Emotion Recognition
Published in INTERSPEECH 2022
null
null
null
cs.SD eess.AS
http://creativecommons.org/licenses/by-nc-sa/4.0/
Speech Emotion Recognition (SER) is crucial for human-computer interaction but remains a challenging problem because of two major obstacles: data scarcity and imbalance. Many datasets for SER are substantially imbalanced, where data utterances of one class (most often Neutral) are much more frequent than those of other classes. Furthermore, only a few data resources are available for many existing spoken languages. To address these problems, we exploit a GAN-based augmentation model guided by a triplet network to improve SER performance given imbalanced and insufficient training data. We conduct experiments and demonstrate: 1) With a highly imbalanced dataset, our augmentation strategy significantly improves SER performance (+8% recall score compared with the baseline). 2) Moreover, in a cross-lingual benchmark, where we train a model with enough source-language utterances but very few target-language utterances (around 50 in our experiments), our augmentation strategy brings benefits for the SER performance of all three target languages.
[ { "created": "Tue, 9 Aug 2022 18:39:42 GMT", "version": "v1" } ]
2022-08-11
[ [ "Wang", "Shijun", "" ], [ "Hemati", "Hamed", "" ], [ "Guðnason", "Jón", "" ], [ "Borth", "Damian", "" ] ]
Speech Emotion Recognition (SER) is crucial for human-computer interaction but remains a challenging problem because of two major obstacles: data scarcity and imbalance. Many datasets for SER are substantially imbalanced, where data utterances of one class (most often Neutral) are much more frequent than those of other classes. Furthermore, only a few data resources are available for many existing spoken languages. To address these problems, we exploit a GAN-based augmentation model guided by a triplet network to improve SER performance given imbalanced and insufficient training data. We conduct experiments and demonstrate: 1) With a highly imbalanced dataset, our augmentation strategy significantly improves SER performance (+8% recall score compared with the baseline). 2) Moreover, in a cross-lingual benchmark, where we train a model with enough source-language utterances but very few target-language utterances (around 50 in our experiments), our augmentation strategy brings benefits for the SER performance of all three target languages.
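The triplet objective that guides the augmentation model has the standard form max(0, d(a, p) - d(a, n) + margin): pull an anchor toward a same-class positive and push it from a different-class negative. A minimal sketch with squared Euclidean distance; the paper's actual embedding network and distance choice are not specified in the abstract:

```python
def triplet_loss(anchor, positive, negative, margin=1.0):
    """Standard triplet loss on embedding vectors (lists of floats):
    max(0, d(a, p) - d(a, n) + margin), with squared L2 distance d."""
    d = lambda u, v: sum((x - y) ** 2 for x, y in zip(u, v))
    return max(0.0, d(anchor, positive) - d(anchor, negative) + margin)
```

When the negative is already far beyond the margin, the loss is zero and the triplet contributes no gradient, which is why hard-triplet mining is common in practice.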
2403.02574
Yutong Li
Yutong Li, Lu Chen, Aiwei Liu, Kai Yu, Lijie Wen
ChatCite: LLM Agent with Human Workflow Guidance for Comparative Literature Summary
18 pages, 5 figures
null
null
null
cs.IR cs.AI cs.CL
http://creativecommons.org/licenses/by/4.0/
The literature review is an indispensable step in the research process. It provides the benefit of comprehending the research problem and understanding the current research situation while conducting a comparative analysis of prior works. However, literature summary is challenging and time-consuming. Previous LLM-based studies on literature review mainly focused on the complete process, including literature retrieval, screening, and summarization. However, for the summarization step, simple CoT methods often lack the ability to provide an extensive comparative summary. In this work, we first focus on the independent literature summarization step and introduce ChatCite, an LLM agent with human workflow guidance for comparative literature summary. This agent, by mimicking the human workflow, first extracts key elements from relevant literature and then generates summaries using a Reflective Incremental Mechanism. To better evaluate the quality of the generated summaries, we devised an LLM-based automatic evaluation metric, G-Score, with reference to human evaluation criteria. The ChatCite agent outperformed other models along various dimensions in our experiments. The literature summaries generated by ChatCite can also be directly used for drafting literature reviews.
[ { "created": "Tue, 5 Mar 2024 01:13:56 GMT", "version": "v1" } ]
2024-03-06
[ [ "Li", "Yutong", "" ], [ "Chen", "Lu", "" ], [ "Liu", "Aiwei", "" ], [ "Yu", "Kai", "" ], [ "Wen", "Lijie", "" ] ]
The literature review is an indispensable step in the research process. It provides the benefit of comprehending the research problem and understanding the current research situation while conducting a comparative analysis of prior works. However, literature summary is challenging and time-consuming. Previous LLM-based studies on literature review mainly focused on the complete process, including literature retrieval, screening, and summarization. However, for the summarization step, simple CoT methods often lack the ability to provide an extensive comparative summary. In this work, we first focus on the independent literature summarization step and introduce ChatCite, an LLM agent with human workflow guidance for comparative literature summary. This agent, by mimicking the human workflow, first extracts key elements from relevant literature and then generates summaries using a Reflective Incremental Mechanism. To better evaluate the quality of the generated summaries, we devised an LLM-based automatic evaluation metric, G-Score, with reference to human evaluation criteria. The ChatCite agent outperformed other models along various dimensions in our experiments. The literature summaries generated by ChatCite can also be directly used for drafting literature reviews.
2110.07604
Jason Y. Zhang
Jason Y. Zhang, Gengshan Yang, Shubham Tulsiani, Deva Ramanan
NeRS: Neural Reflectance Surfaces for Sparse-view 3D Reconstruction in the Wild
In NeurIPS 2021. v2-3: Fixed minor typos
null
null
null
cs.CV cs.LG
http://creativecommons.org/licenses/by-sa/4.0/
Recent history has seen a tremendous growth of work exploring implicit representations of geometry and radiance, popularized through Neural Radiance Fields (NeRF). Such works are fundamentally based on a (implicit) volumetric representation of occupancy, allowing them to model diverse scene structure including translucent objects and atmospheric obscurants. But because the vast majority of real-world scenes are composed of well-defined surfaces, we introduce a surface analog of such implicit models called Neural Reflectance Surfaces (NeRS). NeRS learns a neural shape representation of a closed surface that is diffeomorphic to a sphere, guaranteeing water-tight reconstructions. Even more importantly, surface parameterizations allow NeRS to learn (neural) bidirectional surface reflectance functions (BRDFs) that factorize view-dependent appearance into environmental illumination, diffuse color (albedo), and specular "shininess." Finally, rather than illustrating our results on synthetic scenes or controlled in-the-lab capture, we assemble a novel dataset of multi-view images from online marketplaces for selling goods. Such "in-the-wild" multi-view image sets pose a number of challenges, including a small number of views with unknown/rough camera estimates. We demonstrate that surface-based neural reconstructions enable learning from such data, outperforming volumetric neural rendering-based reconstructions. We hope that NeRS serves as a first step toward building scalable, high-quality libraries of real-world shape, materials, and illumination. The project page with code and video visualizations can be found at https://jasonyzhang.com/ners.
[ { "created": "Thu, 14 Oct 2021 17:59:58 GMT", "version": "v1" }, { "created": "Fri, 15 Oct 2021 08:05:41 GMT", "version": "v2" }, { "created": "Mon, 18 Oct 2021 04:03:39 GMT", "version": "v3" } ]
2021-10-19
[ [ "Zhang", "Jason Y.", "" ], [ "Yang", "Gengshan", "" ], [ "Tulsiani", "Shubham", "" ], [ "Ramanan", "Deva", "" ] ]
Recent history has seen a tremendous growth of work exploring implicit representations of geometry and radiance, popularized through Neural Radiance Fields (NeRF). Such works are fundamentally based on a (implicit) volumetric representation of occupancy, allowing them to model diverse scene structure including translucent objects and atmospheric obscurants. But because the vast majority of real-world scenes are composed of well-defined surfaces, we introduce a surface analog of such implicit models called Neural Reflectance Surfaces (NeRS). NeRS learns a neural shape representation of a closed surface that is diffeomorphic to a sphere, guaranteeing water-tight reconstructions. Even more importantly, surface parameterizations allow NeRS to learn (neural) bidirectional surface reflectance functions (BRDFs) that factorize view-dependent appearance into environmental illumination, diffuse color (albedo), and specular "shininess." Finally, rather than illustrating our results on synthetic scenes or controlled in-the-lab capture, we assemble a novel dataset of multi-view images from online marketplaces for selling goods. Such "in-the-wild" multi-view image sets pose a number of challenges, including a small number of views with unknown/rough camera estimates. We demonstrate that surface-based neural reconstructions enable learning from such data, outperforming volumetric neural rendering-based reconstructions. We hope that NeRS serves as a first step toward building scalable, high-quality libraries of real-world shape, materials, and illumination. The project page with code and video visualizations can be found at https://jasonyzhang.com/ners.
1806.01610
Robin Tibor Schirrmeister
Robin Tibor Schirrmeister, Patryk Chrab\k{a}szcz, Frank Hutter, Tonio Ball
Training Generative Reversible Networks
Source code for this study is at https://github.com/robintibor/generative-reversible
null
null
null
cs.LG cs.NE stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Generative models with an encoding component, such as autoencoders, currently receive great interest. However, training of autoencoders is typically complicated by the need to train a separate encoder and decoder model that have to be enforced to be reciprocal to each other. To overcome this problem, by-design reversible neural networks (RevNets) have previously been used as generative models, either directly optimizing the likelihood of the data under the model or using an adversarial approach on the generated data. Here, we instead investigate their performance using an adversary on the latent space in the adversarial autoencoder framework. We investigate the generative performance of RevNets on the CelebA dataset, showing that generative RevNets can generate coherent faces with quality similar to Variational Autoencoders. This first attempt to use RevNets inside the adversarial autoencoder framework slightly underperformed relative to recent advanced generative models using an autoencoder component on CelebA, but this gap may diminish with further optimization of the training setup of generative RevNets. In addition to the experiments on CelebA, we show a proof-of-principle experiment on the MNIST dataset suggesting that adversary-free trained RevNets can discover meaningful latent dimensions without pre-specifying the number of dimensions of the latent sampling distribution. In summary, this study shows that RevNets can be employed in different generative training settings. Source code for this study is at https://github.com/robintibor/generative-reversible
[ { "created": "Tue, 5 Jun 2018 11:16:42 GMT", "version": "v1" }, { "created": "Wed, 6 Jun 2018 08:40:34 GMT", "version": "v2" }, { "created": "Sun, 8 Jul 2018 23:22:27 GMT", "version": "v3" }, { "created": "Thu, 23 Aug 2018 12:07:40 GMT", "version": "v4" } ]
2018-08-24
[ [ "Schirrmeister", "Robin Tibor", "" ], [ "Chrabąszcz", "Patryk", "" ], [ "Hutter", "Frank", "" ], [ "Ball", "Tonio", "" ] ]
Generative models with an encoding component, such as autoencoders, currently receive great interest. However, training of autoencoders is typically complicated by the need to train a separate encoder and decoder model that have to be enforced to be reciprocal to each other. To overcome this problem, by-design reversible neural networks (RevNets) have previously been used as generative models, either directly optimizing the likelihood of the data under the model or using an adversarial approach on the generated data. Here, we instead investigate their performance using an adversary on the latent space in the adversarial autoencoder framework. We investigate the generative performance of RevNets on the CelebA dataset, showing that generative RevNets can generate coherent faces with quality similar to Variational Autoencoders. This first attempt to use RevNets inside the adversarial autoencoder framework slightly underperformed relative to recent advanced generative models using an autoencoder component on CelebA, but this gap may diminish with further optimization of the training setup of generative RevNets. In addition to the experiments on CelebA, we show a proof-of-principle experiment on the MNIST dataset suggesting that adversary-free trained RevNets can discover meaningful latent dimensions without pre-specifying the number of dimensions of the latent sampling distribution. In summary, this study shows that RevNets can be employed in different generative training settings. Source code for this study is at https://github.com/robintibor/generative-reversible
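The property that makes RevNets attractive here, exact invertibility so that encoder and decoder are the same network by construction, comes from additive coupling: y1 = x1 + f(x2), y2 = x2 + g(y1). A minimal sketch in which f and g stand for arbitrary learned residual functions:

```python
def revnet_forward(x1, x2, f, g):
    """Additive-coupling reversible block on a split input (x1, x2):
    y1 = x1 + f(x2); y2 = x2 + g(y1)."""
    y1 = [a + b for a, b in zip(x1, f(x2))]
    y2 = [a + b for a, b in zip(x2, g(y1))]
    return y1, y2

def revnet_inverse(y1, y2, f, g):
    """Exact algebraic inverse of the block above; x1, x2 are recovered
    from the outputs alone, so no activations need to be stored."""
    x2 = [a - b for a, b in zip(y2, g(y1))]
    x1 = [a - b for a, b in zip(y1, f(x2))]
    return x1, x2
```

Because the inverse is exact for any f and g (they need not be invertible themselves), the "reciprocity" an autoencoder must learn is here guaranteed by the architecture.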
1111.0737
Maxim Vashkevich
Maxim Vashkevich and Wanggen Wan and Alexander Petrovsky
Practical design of multi-channel oversampled warped cosine-modulated filter banks
6 pages, 9 figures. IET International Communication Conference on Wireless Mobile & Computing 2011
null
10.1049/cp.2011.0844
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A practical approach to the optimal design of multichannel oversampled warped cosine-modulated filter banks (CMFB) is proposed. The warped CMFB is obtained by an allpass transformation of a uniform CMFB. The paper addresses the problems of minimizing amplitude distortion and suppressing the aliasing components that emerge due to oversampling of the filter bank channel signals. The proposed optimization-based design considerably reduces distortions of the overall filter bank transfer function while taking the channel subsampling ratios into account. A Matlab implementation of the proposed warped CMFB design method is available in a public GitHub repository.
[ { "created": "Thu, 3 Nov 2011 06:46:36 GMT", "version": "v1" }, { "created": "Sun, 18 Nov 2012 13:42:29 GMT", "version": "v2" }, { "created": "Wed, 25 Mar 2020 06:43:16 GMT", "version": "v3" } ]
2020-03-26
[ [ "Vashkevich", "Maxim", "" ], [ "Wan", "Wanggen", "" ], [ "Petrovsky", "Alexander", "" ] ]
A practical approach to the optimal design of multichannel oversampled warped cosine-modulated filter banks (CMFB) is proposed. The warped CMFB is obtained by an allpass transformation of a uniform CMFB. The paper addresses the problems of minimizing amplitude distortion and suppressing the aliasing components that emerge due to oversampling of the filter bank channel signals. The proposed optimization-based design considerably reduces distortions of the overall filter bank transfer function while taking the channel subsampling ratios into account. A Matlab implementation of the proposed warped CMFB design method is available in a public GitHub repository.
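The allpass transformation mentioned in the abstract induces a nonuniform frequency warping of the uniform filter bank. A common first-order form of that mapping is sketched below; the paper's exact parameterization may differ, so treat this as an illustrative assumption:

```python
import math

def warped_frequency(omega, lam):
    """Frequency mapping induced by a first-order allpass transform,
    a commonly used form: omega_w = omega + 2*atan(lam*sin(omega) /
    (1 - lam*cos(omega))), for omega in [0, pi] and warping coefficient
    lam in (-1, 1). lam = 0 gives the identity (uniform bank); lam > 0
    stretches low frequencies, similar to an auditory (Bark-like) scale."""
    return omega + 2.0 * math.atan2(lam * math.sin(omega),
                                    1.0 - lam * math.cos(omega))
```

The endpoints 0 and pi are fixed points of the map, so the warped bank still tiles the full band; only the channel bandwidths become nonuniform.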
2111.12528
Amir Naseredini
Amir Naseredini, Stefan Gast, Martin Schwarzl, Pedro Miguel Sousa Bernardo, Amel Smajic, Claudio Canella, Martin Berger, Daniel Gruss
Systematic Analysis of Programming Languages and Their Execution Environments for Spectre Attacks
null
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we analyze the security of programming languages and their execution environments (compilers and interpreters) with respect to Spectre attacks. The analysis shows that only 16 out of 42 execution environments have mitigations against at least one Spectre variant, i.e., 26 have no mitigations against any Spectre variant. Using our novel tool Speconnector, we develop Spectre proof-of-concept attacks in 8 programming languages and on code generated by 11 execution environments that were previously not known to be affected. Our results highlight some programming languages that are used to implement security-critical code, but remain entirely unprotected, even three years after the discovery of Spectre.
[ { "created": "Wed, 24 Nov 2021 14:54:03 GMT", "version": "v1" } ]
2021-11-25
[ [ "Naseredini", "Amir", "" ], [ "Gast", "Stefan", "" ], [ "Schwarzl", "Martin", "" ], [ "Bernardo", "Pedro Miguel Sousa", "" ], [ "Smajic", "Amel", "" ], [ "Canella", "Claudio", "" ], [ "Berger", "Martin", "" ], [ "Gruss", "Daniel", "" ] ]
In this paper, we analyze the security of programming languages and their execution environments (compilers and interpreters) with respect to Spectre attacks. The analysis shows that only 16 out of 42 execution environments have mitigations against at least one Spectre variant, i.e., 26 have no mitigations against any Spectre variant. Using our novel tool Speconnector, we develop Spectre proof-of-concept attacks in 8 programming languages and on code generated by 11 execution environments that were previously not known to be affected. Our results highlight some programming languages that are used to implement security-critical code, but remain entirely unprotected, even three years after the discovery of Spectre.
2201.02797
Shaoxiong Ji
Shaoxiong Ji and Wei Sun and Xiaobo Li and Hang Dong and Ara Taalas and Yijia Zhang and Honghan Wu and Esa Pitk\"anen and Pekka Marttinen
A Unified Review of Deep Learning for Automated Medical Coding
ACM Computing Surveys
null
10.1145/3664615
null
cs.CL cs.IR
http://creativecommons.org/licenses/by/4.0/
Automated medical coding, an essential task for healthcare operation and delivery, makes unstructured data manageable by predicting medical codes from clinical documents. Recent advances in deep learning and natural language processing have been widely applied to this task. However, deep learning-based medical coding lacks a unified view of the design of neural network architectures. This review proposes a unified framework to provide a general understanding of the building blocks of medical coding models and summarizes recent advanced models under the proposed framework. Our unified framework decomposes medical coding into four main components, i.e., encoder modules for text feature extraction, mechanisms for building deep encoder architectures, decoder modules for transforming hidden representations into medical codes, and the usage of auxiliary information. Finally, we introduce the benchmarks and real-world usage and discuss key research challenges and future directions.
[ { "created": "Sat, 8 Jan 2022 09:37:23 GMT", "version": "v1" }, { "created": "Wed, 25 Jan 2023 07:55:00 GMT", "version": "v2" }, { "created": "Mon, 24 Apr 2023 14:50:44 GMT", "version": "v3" }, { "created": "Sun, 5 May 2024 13:04:16 GMT", "version": "v4" }, { "created": "Fri, 10 May 2024 09:58:46 GMT", "version": "v5" } ]
2024-05-15
[ [ "Ji", "Shaoxiong", "" ], [ "Sun", "Wei", "" ], [ "Li", "Xiaobo", "" ], [ "Dong", "Hang", "" ], [ "Taalas", "Ara", "" ], [ "Zhang", "Yijia", "" ], [ "Wu", "Honghan", "" ], [ "Pitkänen", "Esa", "" ], [ "Marttinen", "Pekka", "" ] ]
Automated medical coding, an essential task for healthcare operation and delivery, makes unstructured data manageable by predicting medical codes from clinical documents. Recent advances in deep learning and natural language processing have been widely applied to this task. However, deep learning-based medical coding lacks a unified view of the design of neural network architectures. This review proposes a unified framework to provide a general understanding of the building blocks of medical coding models and summarizes recent advanced models under the proposed framework. Our unified framework decomposes medical coding into four main components, i.e., encoder modules for text feature extraction, mechanisms for building deep encoder architectures, decoder modules for transforming hidden representations into medical codes, and the usage of auxiliary information. Finally, we introduce the benchmarks and real-world usage and discuss key research challenges and future directions.
2406.19815
Feng Liu
Feng Liu, Qing Xu, Qijian Zheng
Emotion Loss Attacking: Adversarial Attack Perception for Skeleton based on Multi-dimensional Features
null
null
null
null
cs.CV cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Adversarial attacks on skeletal motion are a hot topic. However, existing research considers only part of the dynamic features when measuring the distance between skeleton graph sequences, which results in poor imperceptibility. To this end, we propose a novel adversarial attack method against action recognizers for skeletal motions. First, our method systematically introduces a dynamic distance function to measure the difference between skeletal motions. Meanwhile, we innovatively introduce emotional features as complementary information. In addition, we use the Alternating Direction Method of Multipliers (ADMM) to solve the constrained optimization problem, which generates adversarial samples with better imperceptibility to deceive the classifiers. Experiments show that our method is effective on multiple action classifiers and datasets. When the perturbation magnitude measured by l norms is the same, the dynamic perturbations generated by our method are much smaller than those of other methods. Moreover, we are the first to demonstrate the effectiveness of emotional features, and we provide a new idea for measuring the distance between skeletal motions.
[ { "created": "Fri, 28 Jun 2024 10:45:37 GMT", "version": "v1" } ]
2024-07-01
[ [ "Liu", "Feng", "" ], [ "Xu", "Qing", "" ], [ "Zheng", "Qijian", "" ] ]
Adversarial attacks on skeletal motion are a hot topic. However, existing research considers only part of the dynamic features when measuring the distance between skeleton graph sequences, which results in poor imperceptibility. To this end, we propose a novel adversarial attack method against action recognizers for skeletal motions. First, our method systematically introduces a dynamic distance function to measure the difference between skeletal motions. Meanwhile, we innovatively introduce emotional features as complementary information. In addition, we use the Alternating Direction Method of Multipliers (ADMM) to solve the constrained optimization problem, which generates adversarial samples with better imperceptibility to deceive the classifiers. Experiments show that our method is effective on multiple action classifiers and datasets. When the perturbation magnitude measured by l norms is the same, the dynamic perturbations generated by our method are much smaller than those of other methods. Moreover, we are the first to demonstrate the effectiveness of emotional features, and we provide a new idea for measuring the distance between skeletal motions.
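The abstract says the attack is solved with ADMM. The paper's objective and constraints are not given here, so the sketch below only illustrates the ADMM splitting on a toy stand-in: shrinking a scalar perturbation toward an l-infinity box, with f(x) a quadratic and g(z) the box indicator.

```python
# Hedged ADMM sketch: the problem below (clip a perturbation "a" into
# [-eps, eps]) is a toy instance chosen to show the x/z/u update
# pattern; it is NOT the paper's actual attack objective.

def admm_box(a, eps, rho=1.0, iters=100):
    x = z = u = 0.0
    for _ in range(iters):
        # x-update: argmin ||x - a||^2 + (rho/2)||x - z + u||^2
        x = (2 * a + rho * (z - u)) / (2 + rho)
        # z-update: projection onto the box constraint
        z = max(-eps, min(eps, x + u))
        # dual (u) update on the x = z consensus constraint
        u = u + x - z
    return z
```

For this toy problem the known answer is clip(a, -eps, eps), which the iteration recovers; a real skeletal-motion attack would swap in the dynamic-distance and emotional-feature terms as f and the recognizer-fooling constraints as g.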
2301.12326
Xuan Lu
Xuan Lu, Wei Ai, Yixin Wang, Qiaozhu Mei
Team Resilience under Shock: An Empirical Analysis of GitHub Repositories during Early COVID-19 Pandemic
12 pages, 4 figures. To be published in the 17th International AAAI Conference on Web and Social Media (ICWSM)
null
null
null
cs.LG cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
While many organizations have shifted to working remotely during the COVID-19 pandemic, how the remote workforce and the remote teams are influenced by and would respond to this and future shocks remains largely unknown. Software developers have relied on remote collaborations since long before the pandemic, working in virtual teams (GitHub repositories). The dynamics of these repositories through the pandemic provide a unique opportunity to understand how remote teams react under shock. This work presents a systematic analysis. We measure the overall effect of the early pandemic on public GitHub repositories by comparing their sizes and productivity with the counterfactual outcomes forecasted as if there were no pandemic. We find that the productivity level and the number of active members of these teams vary significantly during different periods of the pandemic. We then conduct a finer-grained investigation and study the heterogeneous effects of the shock on individual teams. We find that the resilience of a team is highly correlated with certain properties of the team before the pandemic. Through a bootstrapped regression analysis, we reveal which types of teams are robust or fragile to the shock.
[ { "created": "Sun, 29 Jan 2023 02:22:29 GMT", "version": "v1" } ]
2023-01-31
[ [ "Lu", "Xuan", "" ], [ "Ai", "Wei", "" ], [ "Wang", "Yixin", "" ], [ "Mei", "Qiaozhu", "" ] ]
While many organizations have shifted to working remotely during the COVID-19 pandemic, how the remote workforce and the remote teams are influenced by and would respond to this and future shocks remains largely unknown. Software developers have relied on remote collaborations since long before the pandemic, working in virtual teams (GitHub repositories). The dynamics of these repositories through the pandemic provide a unique opportunity to understand how remote teams react under shock. This work presents a systematic analysis. We measure the overall effect of the early pandemic on public GitHub repositories by comparing their sizes and productivity with the counterfactual outcomes forecasted as if there were no pandemic. We find that the productivity level and the number of active members of these teams vary significantly during different periods of the pandemic. We then conduct a finer-grained investigation and study the heterogeneous effects of the shock on individual teams. We find that the resilience of a team is highly correlated with certain properties of the team before the pandemic. Through a bootstrapped regression analysis, we reveal which types of teams are robust or fragile to the shock.
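The "compare observed outcomes with counterfactuals forecasted as if there were no pandemic" design can be sketched with a deliberately minimal model: fit a linear pre-shock trend and extrapolate it through the shock window. The data and the linear-trend forecaster are invented for illustration; the paper's forecasting model is not specified here.

```python
# Illustrative counterfactual-comparison sketch. The trend model and
# numbers are assumptions for the example, not the paper's method.

def linear_fit(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def shock_effect(pre, post):
    # pre/post: e.g. weekly productivity counts before/after the shock
    slope, intercept = linear_fit(range(len(pre)), pre)
    counterfactual = [slope * (len(pre) + t) + intercept
                      for t in range(len(post))]
    # observed minus forecast: the estimated effect of the shock
    return [obs - cf for obs, cf in zip(post, counterfactual)]
```

A negative entry means the team fell below its own pre-shock trajectory in that period, which is the kind of signal the heterogeneity analysis then correlates with pre-shock team properties.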
1603.08019
Raj Jain
Arjan Durresi, Sastri Kota, Mukul Goyal, Raj Jain, Venkata Bharani
Achieving QoS for TCP Traffic in Satellite Networks with Differentiated Services
null
Space Communications, Volume 17, Number 1-3, 2001, pp. 125-136
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Satellite networks play an indispensable role in providing global Internet access and electronic connectivity. To achieve such global communications, provisioning of quality of service (QoS) within advanced satellite systems is a key requirement. One of the key mechanisms for implementing quality of service is traffic management. Traffic management becomes a crucial factor in the case of satellite networks because of the limited availability of their resources. Currently, the Internet Protocol (IP) has only minimal traffic management capabilities and provides best-effort service. In this paper, we present a broadband satellite network QoS model and simulated performance results. In particular, we discuss the performance of TCP flow aggregates and their behavior in the presence of competing UDP flow aggregates in the same assured forwarding class. We identify several factors that affect performance in these mixed environments and quantify their effects using a full factorial design of experiment methodology.
[ { "created": "Fri, 25 Mar 2016 20:16:23 GMT", "version": "v1" } ]
2016-03-29
[ [ "Durresi", "Arjan", "" ], [ "Kota", "Sastri", "" ], [ "Goyal", "Mukul", "" ], [ "Jain", "Raj", "" ], [ "Bharani", "Venkata", "" ] ]
Satellite networks play an indispensable role in providing global Internet access and electronic connectivity. To achieve such global communications, provisioning of quality of service (QoS) within advanced satellite systems is a key requirement. One of the key mechanisms for implementing quality of service is traffic management. Traffic management becomes a crucial factor in the case of satellite networks because of the limited availability of their resources. Currently, the Internet Protocol (IP) has only minimal traffic management capabilities and provides best-effort service. In this paper, we present a broadband satellite network QoS model and simulated performance results. In particular, we discuss the performance of TCP flow aggregates and their behavior in the presence of competing UDP flow aggregates in the same assured forwarding class. We identify several factors that affect performance in these mixed environments and quantify their effects using a full factorial design of experiment methodology.
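A "full factorial design of experiment" simply enumerates every combination of factor levels. A minimal sketch, with made-up factors and levels (the paper's actual factors are not listed in the abstract):

```python
# Full factorial enumeration over illustrative (assumed) factors.
from itertools import product

factors = {
    "buffer_size": ["small", "large"],
    "udp_load": ["low", "medium", "high"],
    "drop_policy": ["RED", "tail-drop"],
}

def full_factorial(factors):
    names = list(factors)
    # itertools.product yields every combination of levels, with the
    # first factor varying slowest
    return [dict(zip(names, levels))
            for levels in product(*factors.values())]

runs = full_factorial(factors)  # 2 * 3 * 2 = 12 experiment settings
```

Each dict in `runs` is one simulation configuration; effects are then quantified by comparing responses across the 12 runs.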
1601.05952
Carsten Eickhoff
Siddharth Sarda, Carsten Eickhoff, Thomas Hofmann
Semantic Place Descriptors for Classification and Map Discovery
13 pages, 1 figure, 1 table
null
null
null
cs.IR cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Urban environments develop complex, non-obvious structures that are often hard to represent in the form of maps or guides. Finding the right place to go often requires intimate familiarity with the location in question and cannot easily be deduced by visitors. In this work, we exploit large-scale samples of usage information, in the form of mobile phone traces and geo-tagged Twitter messages in order to automatically explore and annotate city maps via kernel density estimation. Our experiments are based on one year's worth of mobile phone activity collected by Nokia's Mobile Data Challenge (MDC). We show that usage information can be a strong predictor of semantic place categories, allowing us to automatically annotate maps based on the behavior of the local user base.
[ { "created": "Fri, 22 Jan 2016 10:46:29 GMT", "version": "v1" } ]
2016-01-25
[ [ "Sarda", "Siddharth", "" ], [ "Eickhoff", "Carsten", "" ], [ "Hofmann", "Thomas", "" ] ]
Urban environments develop complex, non-obvious structures that are often hard to represent in the form of maps or guides. Finding the right place to go often requires intimate familiarity with the location in question and cannot easily be deduced by visitors. In this work, we exploit large-scale samples of usage information, in the form of mobile phone traces and geo-tagged Twitter messages in order to automatically explore and annotate city maps via kernel density estimation. Our experiments are based on one year's worth of mobile phone activity collected by Nokia's Mobile Data Challenge (MDC). We show that usage information can be a strong predictor of semantic place categories, allowing us to automatically annotate maps based on the behavior of the local user base.
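Kernel density estimation over geo-tagged activity is the core mechanism the abstract names. A minimal hand-rolled 2D Gaussian KDE, with an invented cluster of check-ins and an arbitrary bandwidth, shows the idea; a production version would use a library estimator and real traces.

```python
import math

# Toy 2D Gaussian kernel density estimate. Points, bandwidth, and the
# "cafe" label are assumptions for illustration only.

def kde(points, query, bandwidth=0.01):
    s = 0.0
    for (x, y) in points:
        d2 = (x - query[0]) ** 2 + (y - query[1]) ** 2
        s += math.exp(-d2 / (2 * bandwidth ** 2))
    # normalize by point count and the 2D Gaussian constant
    return s / (len(points) * 2 * math.pi * bandwidth ** 2)

cafe_checkins = [(0.0, 0.0), (0.001, 0.0), (0.0, 0.002)]
```

Density peaks where check-ins cluster, so comparing per-category densities at a map cell gives the semantic place label for that cell.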
2407.01216
Xibo Li
Xibo Li, Shruti Patel and Christof B\"uskens
Let Hybrid A* Path Planner Obey Traffic Rules: A Deep Reinforcement Learning-Based Planning Framework
null
null
null
null
cs.RO cs.AI
http://creativecommons.org/licenses/by/4.0/
Deep reinforcement learning (DRL) allows a system to interact with its environment and take actions by training an efficient policy that maximizes self-defined rewards. In autonomous driving, it can be used as a strategy for high-level decision making, whereas low-level algorithms such as the hybrid A* path planning have proven their ability to solve the local trajectory planning problem. In this work, we combine these two methods where the DRL makes high-level decisions such as lane change commands. After obtaining the lane change command, the hybrid A* planner is able to generate a collision-free trajectory to be executed by a model predictive controller (MPC). In addition, the DRL algorithm is able to keep the lane change command consistent within a chosen time-period. Traffic rules are implemented using linear temporal logic (LTL), which is then utilized as a reward function in DRL. Furthermore, we validate the proposed method on a real system to demonstrate its feasibility from simulation to implementation on real hardware.
[ { "created": "Mon, 1 Jul 2024 12:00:10 GMT", "version": "v1" } ]
2024-07-02
[ [ "Li", "Xibo", "" ], [ "Patel", "Shruti", "" ], [ "Büskens", "Christof", "" ] ]
Deep reinforcement learning (DRL) allows a system to interact with its environment and take actions by training an efficient policy that maximizes self-defined rewards. In autonomous driving, it can be used as a strategy for high-level decision making, whereas low-level algorithms such as the hybrid A* path planning have proven their ability to solve the local trajectory planning problem. In this work, we combine these two methods where the DRL makes high-level decisions such as lane change commands. After obtaining the lane change command, the hybrid A* planner is able to generate a collision-free trajectory to be executed by a model predictive controller (MPC). In addition, the DRL algorithm is able to keep the lane change command consistent within a chosen time-period. Traffic rules are implemented using linear temporal logic (LTL), which is then utilized as a reward function in DRL. Furthermore, we validate the proposed method on a real system to demonstrate its feasibility from simulation to implementation on real hardware.
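Turning a traffic rule into a DRL reward via temporal logic can be sketched with a single LTL "globally" (G) operator checked over a finite trace. The rule, the state fields, and the reward values below are invented for illustration; a real LTL evaluator handles the full operator set.

```python
# Sketch of an LTL-style rule reward. The "keep right unless
# overtaking" rule and state fields are assumptions for the example.

def globally(pred, trace):
    # LTL G(pred): the predicate must hold in every state of the trace
    return all(pred(s) for s in trace)

def rule_reward(trace):
    keeps_right_unless_overtaking = globally(
        lambda s: s["lane"] == "right" or s["overtaking"], trace)
    return 1.0 if keeps_right_unless_overtaking else -1.0

trace_ok = [{"lane": "right", "overtaking": False},
            {"lane": "left", "overtaking": True}]
trace_bad = [{"lane": "left", "overtaking": False}]
```

The DRL agent's lane-change decisions are then scored by such rule rewards, while the hybrid A* planner and MPC handle the low-level trajectory.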
2310.05306
Ruiqi Wang
Ruiqi Wang, Hanyang Liu, Jiaming Qiu, Moran Xu, Roch Guerin, Chenyang Lu
Progressive Neural Compression for Adaptive Image Offloading under Timing Constraints
IEEE the 44th Real-Time System Symposium (RTSS), 2023
null
10.1109/RTSS59052.2023.00020
null
cs.DC cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
IoT devices are increasingly the source of data for machine learning (ML) applications running on edge servers. Data transmissions from devices to servers are often over local wireless networks whose bandwidth is not just limited but, more importantly, variable. Furthermore, in cyber-physical systems interacting with the physical environment, image offloading is also commonly subject to timing constraints. It is, therefore, important to develop an adaptive approach that maximizes the inference performance of ML applications under timing constraints and the resource constraints of IoT devices. In this paper, we use image classification as our target application and propose progressive neural compression (PNC) as an efficient solution to this problem. Although neural compression has been used to compress images for different ML applications, existing solutions often produce fixed-size outputs that are unsuitable for timing-constrained offloading over variable bandwidth. To address this limitation, we train a multi-objective rateless autoencoder that optimizes for multiple compression rates via stochastic taildrop to create a compression solution that produces features ordered according to their importance to inference performance. Features are then transmitted in that order based on available bandwidth, with classification ultimately performed using the (sub)set of features received by the deadline. We demonstrate the benefits of PNC over state-of-the-art neural compression approaches and traditional compression methods on a testbed comprising an IoT device and an edge server connected over a wireless network with varying bandwidth.
[ { "created": "Sun, 8 Oct 2023 22:58:31 GMT", "version": "v1" } ]
2024-02-26
[ [ "Wang", "Ruiqi", "" ], [ "Liu", "Hanyang", "" ], [ "Qiu", "Jiaming", "" ], [ "Xu", "Moran", "" ], [ "Guerin", "Roch", "" ], [ "Lu", "Chenyang", "" ] ]
IoT devices are increasingly the source of data for machine learning (ML) applications running on edge servers. Data transmissions from devices to servers are often over local wireless networks whose bandwidth is not just limited but, more importantly, variable. Furthermore, in cyber-physical systems interacting with the physical environment, image offloading is also commonly subject to timing constraints. It is, therefore, important to develop an adaptive approach that maximizes the inference performance of ML applications under timing constraints and the resource constraints of IoT devices. In this paper, we use image classification as our target application and propose progressive neural compression (PNC) as an efficient solution to this problem. Although neural compression has been used to compress images for different ML applications, existing solutions often produce fixed-size outputs that are unsuitable for timing-constrained offloading over variable bandwidth. To address this limitation, we train a multi-objective rateless autoencoder that optimizes for multiple compression rates via stochastic taildrop to create a compression solution that produces features ordered according to their importance to inference performance. Features are then transmitted in that order based on available bandwidth, with classification ultimately performed using the (sub)set of features received by the deadline. We demonstrate the benefits of PNC over state-of-the-art neural compression approaches and traditional compression methods on a testbed comprising an IoT device and an edge server connected over a wireless network with varying bandwidth.
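The key mechanism in the abstract, features ordered by importance and transmitted as a prefix until the deadline, can be sketched without any learned model. The feature values, importances, and the bandwidth/deadline arithmetic below are invented; the paper learns the ordering with a rateless autoencoder rather than sorting given scores.

```python
# Prefix-transmission sketch of progressive offloading. All numbers
# are illustrative assumptions, not from the paper.

def order_by_importance(features, importances):
    # most important features first
    return [f for _, f in sorted(zip(importances, features),
                                 key=lambda p: -p[0])]

def transmit(ordered, bytes_per_feature, bandwidth_bps, deadline_s):
    # how many whole features fit in the bandwidth-deadline budget
    budget = int(bandwidth_bps * deadline_s // (8 * bytes_per_feature))
    return ordered[:budget]  # the classifier uses whatever arrived

ordered = order_by_importance([0.2, 0.9, 0.5], [1.0, 3.0, 2.0])
```

Under a tight deadline only a prefix is sent, and because the prefix contains the most inference-relevant features, accuracy degrades gracefully rather than failing outright.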
0801.0597
Min Chen
Min Chen, Semih Serbetli and Aylin Yener
Distributed Power Allocation Strategies for Parallel Relay Networks
IEEE Transactions on Wireless Communications, accepted for publication. 10 pages, 7 figures
null
null
null
cs.IT math.IT
null
We consider a source-destination pair assisted by parallel regenerative decode-and-forward relays operating in orthogonal channels. We investigate distributed power allocation strategies for this system with limited channel state information at the source and the relay nodes. We first propose a distributed decision mechanism for each relay to individually make its decision on whether to forward the source data. The decision mechanism calls for each relay that is able to decode the information from the source to compare its relay-to-destination channel gain with a given threshold. We identify the optimum distributed power allocation strategy that minimizes the total transmit power while providing a target signal-to-noise ratio at the destination with a target outage probability. The strategy dictates the optimum choices for the source power as well as the threshold value at the relays. Next, we consider two simpler distributed power allocation strategies, namely the passive source model where the source power and the relay threshold are fixed, and the single relay model where only one relay is allowed to forward the source data. These models are motivated by limitations on the available channel state information as well as ease of implementation as compared to the optimum distributed strategy. Simulation results are presented to demonstrate the performance of the proposed distributed power allocation schemes. Specifically, we observe significant power savings with proposed methods as compared to random relay selection.
[ { "created": "Thu, 3 Jan 2008 21:01:48 GMT", "version": "v1" } ]
2008-01-07
[ [ "Chen", "Min", "" ], [ "Serbetli", "Semih", "" ], [ "Yener", "Aylin", "" ] ]
We consider a source-destination pair assisted by parallel regenerative decode-and-forward relays operating in orthogonal channels. We investigate distributed power allocation strategies for this system with limited channel state information at the source and the relay nodes. We first propose a distributed decision mechanism for each relay to individually make its decision on whether to forward the source data. The decision mechanism calls for each relay that is able to decode the information from the source to compare its relay-to-destination channel gain with a given threshold. We identify the optimum distributed power allocation strategy that minimizes the total transmit power while providing a target signal-to-noise ratio at the destination with a target outage probability. The strategy dictates the optimum choices for the source power as well as the threshold value at the relays. Next, we consider two simpler distributed power allocation strategies, namely the passive source model where the source power and the relay threshold are fixed, and the single relay model where only one relay is allowed to forward the source data. These models are motivated by limitations on the available channel state information as well as ease of implementation as compared to the optimum distributed strategy. Simulation results are presented to demonstrate the performance of the proposed distributed power allocation schemes. Specifically, we observe significant power savings with proposed methods as compared to random relay selection.
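The distributed decision mechanism in the abstract, each relay forwards only if it decoded the source data and its relay-to-destination gain exceeds a threshold, is simple enough to state directly in code. The gains and decode pattern below are made-up example values.

```python
# Each relay decides locally, with no coordination: forward iff it
# decoded the source data AND its channel gain beats the threshold.
# The example gains/decodes are illustrative assumptions.

def forwarding_relays(gains, decoded, threshold):
    return [i for i, (g, ok) in enumerate(zip(gains, decoded))
            if ok and g > threshold]

gains = [0.5, 2.0, 3.0, 0.1, 1.5]
decoded = [True, True, False, True, True]
active = forwarding_relays(gains, decoded, threshold=1.0)
```

The paper's optimization then picks the source power and this threshold jointly to meet the target SNR and outage probability at minimum total power.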
2207.07189
Nikola Simidjievski
Ivica Dimitrovski, Ivan Kitanovski, Dragi Kocev, Nikola Simidjievski
Current Trends in Deep Learning for Earth Observation: An Open-source Benchmark Arena for Image Classification
null
null
10.1016/j.isprsjprs.2023.01.014
null
cs.CV cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
We present AiTLAS: Benchmark Arena -- an open-source benchmark suite for evaluating state-of-the-art deep learning approaches for image classification in Earth Observation (EO). To this end, we present a comprehensive comparative analysis of more than 500 models derived from ten different state-of-the-art architectures and compare them to a variety of multi-class and multi-label classification tasks from 22 datasets with different sizes and properties. In addition to models trained entirely on these datasets, we benchmark models trained in the context of transfer learning, leveraging pre-trained model variants, as it is typically performed in practice. All presented approaches are general and can be easily extended to many other remote sensing image classification tasks not considered in this study. To ensure reproducibility and facilitate better usability and further developments, all of the experimental resources including the trained models, model configurations, and processing details of the datasets (with their corresponding splits used for training and evaluating the models) are publicly available on the repository: https://github.com/biasvariancelabs/aitlas-arena
[ { "created": "Thu, 14 Jul 2022 20:18:58 GMT", "version": "v1" }, { "created": "Sat, 14 Jan 2023 16:10:58 GMT", "version": "v2" } ]
2023-02-02
[ [ "Dimitrovski", "Ivica", "" ], [ "Kitanovski", "Ivan", "" ], [ "Kocev", "Dragi", "" ], [ "Simidjievski", "Nikola", "" ] ]
We present AiTLAS: Benchmark Arena -- an open-source benchmark suite for evaluating state-of-the-art deep learning approaches for image classification in Earth Observation (EO). To this end, we present a comprehensive comparative analysis of more than 500 models derived from ten different state-of-the-art architectures and compare them to a variety of multi-class and multi-label classification tasks from 22 datasets with different sizes and properties. In addition to models trained entirely on these datasets, we benchmark models trained in the context of transfer learning, leveraging pre-trained model variants, as it is typically performed in practice. All presented approaches are general and can be easily extended to many other remote sensing image classification tasks not considered in this study. To ensure reproducibility and facilitate better usability and further developments, all of the experimental resources including the trained models, model configurations, and processing details of the datasets (with their corresponding splits used for training and evaluating the models) are publicly available on the repository: https://github.com/biasvariancelabs/aitlas-arena
1002.3065
Ayfer Ozgur
Ayfer Ozgur, Olivier Leveque, David Tse
Linear Capacity Scaling in Wireless Networks: Beyond Physical Limits?
10 pages, 5 figures, in Proc. of IEEE Information Theory and Applications Workshop, Feb. 2010
null
10.1109/ITA.2010.5454112
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We investigate the role of cooperation in wireless networks subject to a spatial degrees of freedom limitation. To address the worst case scenario, we consider a free-space line-of-sight type environment with no scattering and no fading. We identify three qualitatively different operating regimes that are determined by how the area of the network A, normalized with respect to the wavelength lambda, compares to the number of users n. In networks with sqrt{A}/lambda < sqrt{n}, the limitation in spatial degrees of freedom does not allow a capacity scaling better than sqrt{n}, and this performance can be readily achieved by multi-hopping. This result has recently been shown by Franceschetti et al. However, for networks with sqrt{A}/lambda > sqrt{n}, the number of available degrees of freedom is min(n, sqrt{A}/lambda), larger than what can be achieved by multi-hopping. We show that the optimal capacity scaling in this regime is achieved by hierarchical cooperation. In particular, in networks with sqrt{A}/lambda > n, hierarchical cooperation can achieve linear scaling.
[ { "created": "Tue, 16 Feb 2010 14:35:08 GMT", "version": "v1" } ]
2016-11-17
[ [ "Ozgur", "Ayfer", "" ], [ "Leveque", "Olivier", "" ], [ "Tse", "David", "" ] ]
We investigate the role of cooperation in wireless networks subject to a spatial degrees of freedom limitation. To address the worst case scenario, we consider a free-space line-of-sight type environment with no scattering and no fading. We identify three qualitatively different operating regimes that are determined by how the area of the network A, normalized with respect to the wavelength lambda, compares to the number of users n. In networks with sqrt{A}/lambda < sqrt{n}, the limitation in spatial degrees of freedom does not allow a capacity scaling better than sqrt{n}, and this performance can be readily achieved by multi-hopping. This result has recently been shown by Franceschetti et al. However, for networks with sqrt{A}/lambda > sqrt{n}, the number of available degrees of freedom is min(n, sqrt{A}/lambda), larger than what can be achieved by multi-hopping. We show that the optimal capacity scaling in this regime is achieved by hierarchical cooperation. In particular, in networks with sqrt{A}/lambda > n, hierarchical cooperation can achieve linear scaling.
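The three regimes in the abstract follow from comparing sqrt(A)/lambda (spatial degrees of freedom) with sqrt(n) and n. A tiny classifier makes the case analysis concrete; the regime names are shorthand labels, not the paper's terminology.

```python
import math

# Regime classification per the abstract's thresholds. Labels are
# informal shorthand for this sketch.

def regime(area, wavelength, n):
    dof = math.sqrt(area) / wavelength  # spatial degrees of freedom
    if dof < math.sqrt(n):
        return "multihop"       # sqrt(n) scaling; multi-hop suffices
    if dof > n:
        return "linear"         # hierarchical cooperation, ~n scaling
    return "intermediate"       # min(n, dof) degrees of freedom
```

For example, with n = 100 users, a network with dof below 10 is degrees-of-freedom limited, while one with dof above 100 admits linear scaling via hierarchical cooperation.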
2211.11040
Aadesh Desai
Aadesh Desai, Saagar Parikh, Seema Kumari, Shanmuganathan Raman
PointResNet: Residual Network for 3D Point Cloud Segmentation and Classification
Paper Under Review at IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) 2023
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Point cloud segmentation and classification are some of the primary tasks in 3D computer vision, with applications ranging from augmented reality to robotics. However, processing point clouds using deep learning-based algorithms is quite challenging due to the irregular point formats. Voxelization and 3D grid-based representations are different ways of applying deep neural networks to this problem. In this paper, we propose PointResNet, a residual block-based approach. Our model directly processes the 3D points, using a deep neural network for the segmentation and classification tasks. The main components of the architecture are: 1) residual blocks and 2) a multi-layer perceptron (MLP). We show that it preserves profound features and structural information, which are useful for segmentation and classification tasks. The experimental evaluations demonstrate that the proposed model produces the best results for segmentation and comparable results for classification in comparison to the conventional baselines.
[ { "created": "Sun, 20 Nov 2022 17:39:48 GMT", "version": "v1" } ]
2022-11-22
[ [ "Desai", "Aadesh", "" ], [ "Parikh", "Saagar", "" ], [ "Kumari", "Seema", "" ], [ "Raman", "Shanmuganathan", "" ] ]
Point cloud segmentation and classification are some of the primary tasks in 3D computer vision, with applications ranging from augmented reality to robotics. However, processing point clouds using deep learning-based algorithms is quite challenging due to the irregular point formats. Voxelization and 3D grid-based representations are different ways of applying deep neural networks to this problem. In this paper, we propose PointResNet, a residual block-based approach. Our model directly processes the 3D points, using a deep neural network for the segmentation and classification tasks. The main components of the architecture are: 1) residual blocks and 2) a multi-layer perceptron (MLP). We show that it preserves profound features and structural information, which are useful for segmentation and classification tasks. The experimental evaluations demonstrate that the proposed model produces the best results for segmentation and comparable results for classification in comparison to the conventional baselines.
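A residual block, the architectural unit the abstract builds on, computes out = x + F(x) so the skip connection preserves the input features. The dependency-free sketch below uses toy sizes and weights and illustrates only the block structure, not the PointResNet architecture itself.

```python
# Minimal residual block: two linear layers with a ReLU, plus a skip
# connection. Weights/sizes are toy values for illustration.

def relu(v):
    return [max(0.0, x) for x in v]

def linear(v, w, b):
    # w is a list of weight columns, one per output unit
    return [sum(vi * wij for vi, wij in zip(v, col)) + bj
            for col, bj in zip(w, b)]

def residual_block(x, w1, b1, w2, b2):
    h = relu(linear(x, w1, b1))
    out = linear(h, w2, b2)
    return [xi + oi for xi, oi in zip(x, out)]  # skip connection
```

Because the block adds its output to the input, stacking many such blocks keeps gradients and low-level point structure flowing through the network, which is the intuition behind "preserves profound features and structural information."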
2004.10278
Qi Cheng
Yanbin Pan, Jun Xu, Nick Wadleigh, and Qi Cheng
On the ideal shortest vector problem over random rational primes
null
null
null
null
cs.CR math.NT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Any ideal in a number field can be factored into a product of prime ideals. In this paper we study the prime ideal shortest vector problem (SVP) in the ring $\mathbb{Z}[x]/(x^{2^n} + 1)$, a popular choice in the design of ideal lattice based cryptosystems. We show that a majority of rational primes lie under prime ideals admitting a polynomial time algorithm for SVP. Although the shortest vector problem of ideal lattices underpins the security of the Ring-LWE cryptosystem, this work does not break Ring-LWE, since the security reduction is from the worst case ideal SVP to the average case Ring-LWE, and it is one-way.
[ { "created": "Tue, 21 Apr 2020 20:21:33 GMT", "version": "v1" }, { "created": "Tue, 2 Mar 2021 16:16:57 GMT", "version": "v2" } ]
2021-03-03
[ [ "Pan", "Yanbin", "" ], [ "Xu", "Jun", "" ], [ "Wadleigh", "Nick", "" ], [ "Cheng", "Qi", "" ] ]
Any ideal in a number field can be factored into a product of prime ideals. In this paper we study the prime ideal shortest vector problem (SVP) in the ring $\mathbb{Z}[x]/(x^{2^n} + 1)$, a popular choice in the design of ideal lattice based cryptosystems. We show that a majority of rational primes lie under prime ideals admitting a polynomial time algorithm for SVP. Although the shortest vector problem of ideal lattices underpins the security of the Ring-LWE cryptosystem, this work does not break Ring-LWE, since the security reduction is from the worst case ideal SVP to the average case Ring-LWE, and it is one-way.
2202.10862
Noa Schiller
Hagit Attiya and Noa Schiller
Asynchronous Fully-Decentralized SGD in the Cluster-Based Model
null
CIAC 13 (2023) 52-66
10.1007/978-3-031-30448-4_5
null
cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents fault-tolerant asynchronous Stochastic Gradient Descent (SGD) algorithms. SGD is widely used for approximating the minimum of a cost function $Q$, as a core part of optimization and learning algorithms. Our algorithms are designed for the cluster-based model, which combines message-passing and shared-memory communication layers. Processes may fail by crashing, and the algorithm inside each cluster is wait-free, using only reads and writes. For a strongly convex function $Q$, our algorithm tolerates any number of failures, and provides convergence rate that yields the maximal distributed acceleration over the optimal convergence rate of sequential SGD. For arbitrary functions, the convergence rate has an additional term that depends on the maximal difference between the parameters at the same iteration. (This holds under standard assumptions on $Q$.) In this case, the algorithm obtains the same convergence rate as sequential SGD, up to a logarithmic factor. This is achieved by using, at each iteration, a multidimensional approximate agreement algorithm, tailored for the cluster-based model. The algorithm for arbitrary functions requires that at least a majority of the clusters contain at least one nonfaulty process. We prove that this condition is necessary when optimizing some non-convex functions.
[ { "created": "Tue, 22 Feb 2022 12:50:00 GMT", "version": "v1" }, { "created": "Sun, 27 Mar 2022 11:53:37 GMT", "version": "v2" }, { "created": "Tue, 13 Jun 2023 17:19:24 GMT", "version": "v3" } ]
2023-06-14
[ [ "Attiya", "Hagit", "" ], [ "Schiller", "Noa", "" ] ]
This paper presents fault-tolerant asynchronous Stochastic Gradient Descent (SGD) algorithms. SGD is widely used for approximating the minimum of a cost function $Q$, as a core part of optimization and learning algorithms. Our algorithms are designed for the cluster-based model, which combines message-passing and shared-memory communication layers. Processes may fail by crashing, and the algorithm inside each cluster is wait-free, using only reads and writes. For a strongly convex function $Q$, our algorithm tolerates any number of failures, and provides convergence rate that yields the maximal distributed acceleration over the optimal convergence rate of sequential SGD. For arbitrary functions, the convergence rate has an additional term that depends on the maximal difference between the parameters at the same iteration. (This holds under standard assumptions on $Q$.) In this case, the algorithm obtains the same convergence rate as sequential SGD, up to a logarithmic factor. This is achieved by using, at each iteration, a multidimensional approximate agreement algorithm, tailored for the cluster-based model. The algorithm for arbitrary functions requires that at least a majority of the clusters contain at least one nonfaulty process. We prove that this condition is necessary when optimizing some non-convex functions.
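The scheme in the abstract alternates local SGD steps with an agreement step on the parameters. In the sketch below, plain averaging stands in for the paper's multidimensional approximate agreement (and faults are ignored); the strongly convex Q, the learning rate, and the starting points are toy choices.

```python
# Toy distributed SGD with an agreement step per iteration. Averaging
# is a stand-in for approximate agreement; no failures are modeled.

def grad(x):
    return 2 * (x - 3.0)  # Q(x) = (x - 3)^2, minimized at x = 3

def distributed_sgd(n_procs=4, iters=200, lr=0.1):
    params = [float(i) for i in range(n_procs)]  # different starts
    for _ in range(iters):
        params = [p - lr * grad(p) for p in params]  # local SGD step
        mean = sum(params) / len(params)             # agreement step
        params = [mean] * len(params)
    return params[0]
```

With strong convexity all processes contract toward the minimizer, and the agreement step bounds the cross-process parameter gap that appears in the paper's convergence rate for arbitrary functions.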
1903.02953
Daniel Hershcovich
Daniel Hershcovich, Zohar Aizenbud, Leshem Choshen, Elior Sulem, Ari Rappoport, Omri Abend
SemEval-2019 Task 1: Cross-lingual Semantic Parsing with UCCA
SemEval 2019 Shared task. arXiv admin note: substantial text overlap with arXiv:1805.12386
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present the SemEval 2019 shared task on UCCA parsing in English, German and French, and discuss the participating systems and results. UCCA is a cross-linguistically applicable framework for semantic representation, which builds on extensive typological work and supports rapid annotation. UCCA poses a challenge for existing parsing techniques, as it exhibits reentrancy (resulting in DAG structures), discontinuous structures and non-terminal nodes corresponding to complex semantic units. The shared task has yielded improvements over the state-of-the-art baseline in all languages and settings. Full results can be found on the task's website \url{https://competitions.codalab.org/competitions/19160}.
[ { "created": "Wed, 6 Mar 2019 16:55:58 GMT", "version": "v1" }, { "created": "Thu, 25 Apr 2019 05:18:27 GMT", "version": "v2" }, { "created": "Thu, 11 Jun 2020 09:20:56 GMT", "version": "v3" } ]
2020-06-12
[ [ "Hershcovich", "Daniel", "" ], [ "Aizenbud", "Zohar", "" ], [ "Choshen", "Leshem", "" ], [ "Sulem", "Elior", "" ], [ "Rappoport", "Ari", "" ], [ "Abend", "Omri", "" ] ]
We present the SemEval 2019 shared task on UCCA parsing in English, German and French, and discuss the participating systems and results. UCCA is a cross-linguistically applicable framework for semantic representation, which builds on extensive typological work and supports rapid annotation. UCCA poses a challenge for existing parsing techniques, as it exhibits reentrancy (resulting in DAG structures), discontinuous structures and non-terminal nodes corresponding to complex semantic units. The shared task has yielded improvements over the state-of-the-art baseline in all languages and settings. Full results can be found on the task's website \url{https://competitions.codalab.org/competitions/19160}.
1911.01163
Youngmin Jeong
Dung Phuong Trinh, Youngmin Jeong, Hyundong Shin, Moe Z. Win
Molecular Communication in H-Diffusion
This work has been submitted to the IEEE for possible publication
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The random propagation of molecules in a fluid medium is characterized by the spontaneous diffusion law as well as the interaction between the environment and molecules. In this paper, we incorporate anomalous diffusion theory into modeling and analysis in molecular communication. We employ H-diffusion to model a non-Fickian behavior of molecules in diffusive channels. H-diffusion enables us to model anomalous diffusion as the subordinate relationship between self-similar parent and directing processes and their corresponding probability density functions with two H-variates in a unified fashion. In addition, we introduce standard H-diffusion to bridge normal diffusion and well-known anomalous diffusions such as space-time fractional diffusion, Erdelyi-Kober fractional diffusion, grey Brownian motion, fractional Brownian motion, and Brownian motion. We then characterize the statistical properties of uncertainty of the random propagation time of a molecule governed by H-diffusion laws by introducing a general class of molecular noise---called H-noise. Since H-noise can be an algebraic tailed process, we provide a concept of H-noise power using finite logarithm moments based on zero-order statistics. Finally, we develop a unifying framework for error probability analysis in a timing-based molecular communication system with a concept of signal-to-noise power ratio.
[ { "created": "Mon, 4 Nov 2019 12:41:06 GMT", "version": "v1" } ]
2019-11-05
[ [ "Trinh", "Dung Phuong", "" ], [ "Jeong", "Youngmin", "" ], [ "Shin", "Hyundong", "" ], [ "Win", "Moe Z.", "" ] ]
The random propagation of molecules in a fluid medium is characterized by the spontaneous diffusion law as well as the interaction between the environment and molecules. In this paper, we incorporate anomalous diffusion theory into modeling and analysis in molecular communication. We employ H-diffusion to model a non-Fickian behavior of molecules in diffusive channels. H-diffusion enables us to model anomalous diffusion as the subordinate relationship between self-similar parent and directing processes and their corresponding probability density functions with two H-variates in a unified fashion. In addition, we introduce standard H-diffusion to bridge normal diffusion and well-known anomalous diffusions such as space-time fractional diffusion, Erdelyi-Kober fractional diffusion, grey Brownian motion, fractional Brownian motion, and Brownian motion. We then characterize the statistical properties of uncertainty of the random propagation time of a molecule governed by H-diffusion laws by introducing a general class of molecular noise---called H-noise. Since H-noise can be an algebraic tailed process, we provide a concept of H-noise power using finite logarithm moments based on zero-order statistics. Finally, we develop a unifying framework for error probability analysis in a timing-based molecular communication system with a concept of signal-to-noise power ratio.
1906.00246
Lei Chen
Le Wu, Lei Chen, Yonghui Yang, Richang Hong, Yong Ge, Xing Xie, Meng Wang
Personalized Multimedia Item and Key Frame Recommendation
The updated version is publised in IJCAI 2019, https://doi.org/10.24963/ijcai.2019/198
null
null
null
cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
When recommending or advertising items to users, an emerging trend is to present each multimedia item with a key frame image (e.g., the poster of a movie). As each multimedia item can be represented as multiple fine-grained visual images (e.g., related images of the movie), personalized key frame recommendation is necessary in these applications to attract users' unique visual preferences. However, previous personalized key frame recommendation models relied on users' fine-grained image behavior of multimedia items (e.g., user-image interaction behavior), which is often not available in real scenarios. In this paper, we study the general problem of joint multimedia item and key frame recommendation in the absence of the fine-grained user-image behavior. We argue that the key challenge of this problem lies in discovering users' visual profiles for key frame recommendation, as most recommendation models would fail without any users' fine-grained image behavior. To tackle this challenge, we leverage users' item behavior by projecting users (items) in two latent spaces: a collaborative latent space and a visual latent space. We further design a model to discern both the collaborative and visual dimensions of users, and model how users make decisive item preferences from these two spaces. As a result, the learned user visual profiles could be directly applied for key frame recommendation. Finally, experimental results on a real-world dataset clearly show the effectiveness of our proposed model on the two recommendation tasks.
[ { "created": "Sat, 1 Jun 2019 15:34:59 GMT", "version": "v1" }, { "created": "Sat, 4 Jan 2020 06:38:25 GMT", "version": "v2" } ]
2020-01-07
[ [ "Wu", "Le", "" ], [ "Chen", "Lei", "" ], [ "Yang", "Yonghui", "" ], [ "Hong", "Richang", "" ], [ "Ge", "Yong", "" ], [ "Xie", "Xing", "" ], [ "Wang", "Meng", "" ] ]
When recommending or advertising items to users, an emerging trend is to present each multimedia item with a key frame image (e.g., the poster of a movie). As each multimedia item can be represented as multiple fine-grained visual images (e.g., related images of the movie), personalized key frame recommendation is necessary in these applications to attract users' unique visual preferences. However, previous personalized key frame recommendation models relied on users' fine-grained image behavior of multimedia items (e.g., user-image interaction behavior), which is often not available in real scenarios. In this paper, we study the general problem of joint multimedia item and key frame recommendation in the absence of the fine-grained user-image behavior. We argue that the key challenge of this problem lies in discovering users' visual profiles for key frame recommendation, as most recommendation models would fail without any users' fine-grained image behavior. To tackle this challenge, we leverage users' item behavior by projecting users (items) in two latent spaces: a collaborative latent space and a visual latent space. We further design a model to discern both the collaborative and visual dimensions of users, and model how users make decisive item preferences from these two spaces. As a result, the learned user visual profiles could be directly applied for key frame recommendation. Finally, experimental results on a real-world dataset clearly show the effectiveness of our proposed model on the two recommendation tasks.
2207.00933
Wei Tang
Wei Tang, Margaret Martonosi
ScaleQC: A Scalable Framework for Hybrid Computation on Quantum and Classical Processors
12 pages, 13 figures
null
null
null
cs.ET quant-ph
http://creativecommons.org/licenses/by-nc-nd/4.0/
A quantum processing unit (QPU) has to satisfy highly demanding quantity and quality requirements on its qubits to produce accurate results for problems at useful scales. Furthermore, classical simulations of quantum circuits generally do not scale. Instead, quantum circuit cutting techniques cut and distribute a large quantum circuit into multiple smaller subcircuits feasible for less powerful QPUs. However, the classical post-processing incurred from the cutting introduces runtime and memory bottlenecks. Our tool, called ScaleQC, addresses the bottlenecks by developing novel algorithmic techniques including (1) a quantum states merging framework that quickly locates the solution states of large quantum circuits; (2) an automatic solver that cuts complex quantum circuits to fit on less powerful QPUs; and (3) a tensor network based post-processing that minimizes the classical overhead. Our experiments demonstrate both QPU requirement advantages over the purely quantum platforms, and runtime advantages over the purely classical platforms for benchmarks up to 1000 qubits.
[ { "created": "Sun, 3 Jul 2022 01:44:31 GMT", "version": "v1" } ]
2022-07-05
[ [ "Tang", "Wei", "" ], [ "Martonosi", "Margaret", "" ] ]
A quantum processing unit (QPU) has to satisfy highly demanding quantity and quality requirements on its qubits to produce accurate results for problems at useful scales. Furthermore, classical simulations of quantum circuits generally do not scale. Instead, quantum circuit cutting techniques cut and distribute a large quantum circuit into multiple smaller subcircuits feasible for less powerful QPUs. However, the classical post-processing incurred from the cutting introduces runtime and memory bottlenecks. Our tool, called ScaleQC, addresses the bottlenecks by developing novel algorithmic techniques including (1) a quantum states merging framework that quickly locates the solution states of large quantum circuits; (2) an automatic solver that cuts complex quantum circuits to fit on less powerful QPUs; and (3) a tensor network based post-processing that minimizes the classical overhead. Our experiments demonstrate both QPU requirement advantages over the purely quantum platforms, and runtime advantages over the purely classical platforms for benchmarks up to 1000 qubits.
2206.00279
Depeng Liu
Depeng Liu, Lutan Zhao, Pengfei Yang, Bow-Yaw Wang, Rui Hou, Lijun Zhang, Naijun Zhan
Defensive Design of Saturating Counters Based on Differential Privacy
null
null
null
null
cs.CR cs.FL
http://creativecommons.org/licenses/by-nc-nd/4.0/
The saturating counter is the basic module of the dynamic branch predictor, which involves the core technique to improve instruction level parallelism performance in modern processors. However, most studies focus on the performance improvement and hardware consumption of saturating counters, while ignoring the security problems they may cause. In this paper, we creatively propose to study and design saturating counters from the defense perspective of differential privacy, so that attackers cannot distinguish the states that saturating counters are in and further infer sensitive information. To obtain theoretical guarantees, we use a Markov chain to formalize the attack algorithm applied to the saturating counter, investigate the optimal attack strategy and calculate the probability of a successful attack. Furthermore, we find that the attacker is able to accurately guess the branch execution of the victim's process in the existing saturating counters. To avoid this, we design a new probabilistic saturating counter, which generalizes the existing conventional and probabilistic saturating counters. The guarantee of differential privacy is applied to deduce parameters of the new saturating counters so that the security requirement can be satisfied. We also theoretically calculate the misprediction rate when the saturating counter reaches the steady state. The experimental results on testing programs show that the calculated theoretical results agree with the experimental performance. Compared with the existing conventional and probabilistic saturating counters, when the parameters of our designed models are selected appropriately, the new saturating counters can not only ensure similar operational performance, but also establish a strict security guarantee.
[ { "created": "Wed, 1 Jun 2022 07:19:31 GMT", "version": "v1" } ]
2022-06-02
[ [ "Liu", "Depeng", "" ], [ "Zhao", "Lutan", "" ], [ "Yang", "Pengfei", "" ], [ "Wang", "Bow-Yaw", "" ], [ "Hou", "Rui", "" ], [ "Zhang", "Lijun", "" ], [ "Zhan", "Naijun", "" ] ]
The saturating counter is the basic module of the dynamic branch predictor, which involves the core technique to improve instruction level parallelism performance in modern processors. However, most studies focus on the performance improvement and hardware consumption of saturating counters, while ignoring the security problems they may cause. In this paper, we creatively propose to study and design saturating counters from the defense perspective of differential privacy, so that attackers cannot distinguish the states that saturating counters are in and further infer sensitive information. To obtain theoretical guarantees, we use a Markov chain to formalize the attack algorithm applied to the saturating counter, investigate the optimal attack strategy and calculate the probability of a successful attack. Furthermore, we find that the attacker is able to accurately guess the branch execution of the victim's process in the existing saturating counters. To avoid this, we design a new probabilistic saturating counter, which generalizes the existing conventional and probabilistic saturating counters. The guarantee of differential privacy is applied to deduce parameters of the new saturating counters so that the security requirement can be satisfied. We also theoretically calculate the misprediction rate when the saturating counter reaches the steady state. The experimental results on testing programs show that the calculated theoretical results agree with the experimental performance. Compared with the existing conventional and probabilistic saturating counters, when the parameters of our designed models are selected appropriately, the new saturating counters can not only ensure similar operational performance, but also establish a strict security guarantee.
2209.13304
Manuel Vidigueira
Martina Camaioni, Rachid Guerraoui, Matteo Monti, Manuel Vidigueira
Oracular Byzantine Reliable Broadcast [Extended Version]
null
null
null
null
cs.DC
http://creativecommons.org/licenses/by/4.0/
Byzantine Reliable Broadcast (BRB) is a fundamental distributed computing primitive, with applications ranging from notifications to asynchronous payment systems. Motivated by practical considerations, we study Client-Server Byzantine Reliable Broadcast (CSB), a multi-shot variant of BRB whose interface is split between broadcasting clients and delivering servers. We present Draft, an optimally resilient implementation of CSB. Like most implementations of BRB, Draft guarantees both liveness and safety in an asynchronous environment. Under good conditions, however, Draft achieves unparalleled efficiency. In a moment of synchrony, free from Byzantine misbehaviour, and at the limit of infinitely many broadcasting clients, a Draft server delivers a $b$-bit payload at an asymptotic amortized cost of $0$ signature verifications, and $\log_2(c) + b$ bits exchanged, where $c$ is the number of clients in the system. This is the information-theoretical minimum number of bits required to convey the payload ($b$ bits, assuming it is compressed), along with an identifier for its sender ($\log_2(c)$ bits, necessary to enumerate any set of $c$ elements, and optimal if broadcasting frequencies are uniform or unknown). These two achievements have profound practical implications. Real-world BRB implementations are often bottlenecked either by expensive signature verifications, or by communication overhead. For Draft, instead, the network is the limit: a server can deliver payloads as quickly as it would receive them from an infallible oracle.
[ { "created": "Tue, 27 Sep 2022 11:09:54 GMT", "version": "v1" } ]
2022-09-28
[ [ "Camaioni", "Martina", "" ], [ "Guerraoui", "Rachid", "" ], [ "Monti", "Matteo", "" ], [ "Vidigueira", "Manuel", "" ] ]
Byzantine Reliable Broadcast (BRB) is a fundamental distributed computing primitive, with applications ranging from notifications to asynchronous payment systems. Motivated by practical considerations, we study Client-Server Byzantine Reliable Broadcast (CSB), a multi-shot variant of BRB whose interface is split between broadcasting clients and delivering servers. We present Draft, an optimally resilient implementation of CSB. Like most implementations of BRB, Draft guarantees both liveness and safety in an asynchronous environment. Under good conditions, however, Draft achieves unparalleled efficiency. In a moment of synchrony, free from Byzantine misbehaviour, and at the limit of infinitely many broadcasting clients, a Draft server delivers a $b$-bit payload at an asymptotic amortized cost of $0$ signature verifications, and $\log_2(c) + b$ bits exchanged, where $c$ is the number of clients in the system. This is the information-theoretical minimum number of bits required to convey the payload ($b$ bits, assuming it is compressed), along with an identifier for its sender ($\log_2(c)$ bits, necessary to enumerate any set of $c$ elements, and optimal if broadcasting frequencies are uniform or unknown). These two achievements have profound practical implications. Real-world BRB implementations are often bottlenecked either by expensive signature verifications, or by communication overhead. For Draft, instead, the network is the limit: a server can deliver payloads as quickly as it would receive them from an infallible oracle.
2007.10876
Bakheet Aljedaani
Bakheet Aljedaani, M. Ali Babar
Challenges in Developing Secure Mobile Health Applications, A Systematic Review
This paper has 5 figures and 1 table
null
null
null
cs.SE
http://creativecommons.org/licenses/by-nc-sa/4.0/
Mobile health (mHealth) applications (apps) have gained significant popularity over the last few years due to their tremendous benefits, such as lowering healthcare costs and increasing patient awareness. However, the sensitivity of healthcare data makes the security of mHealth apps a serious concern. In this review, we aim to identify and analyse the reported challenges that the developers of mHealth apps face concerning security. Additionally, our study aimed to develop a conceptual framework with the challenges faced by mHealth apps development organizations for developing secure apps. The knowledge of such challenges can help to reduce the risk of developing insecure mHealth apps. We followed the Systematic Literature Review method for this review. We selected studies that have been published between January 2008 and October 2020. We selected 32 primary studies using predefined criteria and used the thematic analysis method for analysing the extracted data. We identified nine challenges that can affect the development of secure mHealth apps, such as: 1) lack of security guidelines and regulations for developing secure mHealth apps, 2) developers' lack of knowledge and expertise for secure mHealth app development, and 3) lack of stakeholder involvement during mHealth app development. Based on our analysis, we have presented a conceptual framework which highlights the correlation between the identified challenges. We conclude that our findings can help mHealth app development organizations identify their weaknesses and improve their security practices. Similarly, mHealth apps developers can identify the challenges they face to develop mHealth apps that do not pose security risks for users. Our review is a step towards providing insights into the development of secure mHealth apps. Our proposed conceptual framework can act as a practice guideline for practitioners to enhance secure mHealth apps development.
[ { "created": "Tue, 21 Jul 2020 15:06:46 GMT", "version": "v1" }, { "created": "Fri, 15 Jan 2021 21:59:09 GMT", "version": "v2" } ]
2021-01-19
[ [ "Aljedaani", "Bakheet", "" ], [ "Babar", "M. Ali", "" ] ]
Mobile health (mHealth) applications (apps) have gained significant popularity over the last few years due to their tremendous benefits, such as lowering healthcare costs and increasing patient awareness. However, the sensitivity of healthcare data makes the security of mHealth apps a serious concern. In this review, we aim to identify and analyse the reported challenges that the developers of mHealth apps face concerning security. Additionally, our study aimed to develop a conceptual framework with the challenges faced by mHealth apps development organizations for developing secure apps. The knowledge of such challenges can help to reduce the risk of developing insecure mHealth apps. We followed the Systematic Literature Review method for this review. We selected studies that have been published between January 2008 and October 2020. We selected 32 primary studies using predefined criteria and used the thematic analysis method for analysing the extracted data. We identified nine challenges that can affect the development of secure mHealth apps, such as: 1) lack of security guidelines and regulations for developing secure mHealth apps, 2) developers' lack of knowledge and expertise for secure mHealth app development, and 3) lack of stakeholder involvement during mHealth app development. Based on our analysis, we have presented a conceptual framework which highlights the correlation between the identified challenges. We conclude that our findings can help mHealth app development organizations identify their weaknesses and improve their security practices. Similarly, mHealth apps developers can identify the challenges they face to develop mHealth apps that do not pose security risks for users. Our review is a step towards providing insights into the development of secure mHealth apps. Our proposed conceptual framework can act as a practice guideline for practitioners to enhance secure mHealth apps development.
cs/0206001
Carlos Gershenson
A. Das, M. Marko, A. Probst, M. A. Porter, C. Gershenson
Neural Net Model for Featured Word Extraction
null
null
null
null
cs.NE cs.NI
null
Search engines perform the task of retrieving information related to the user-supplied query words. This task has two parts; one is finding "featured words" which describe an article best and the other is finding a match among these words to user-defined search terms. There are two main independent approaches to achieve this task. The first one, using the concepts of semantics, has been implemented partially. For more details see another paper of Marko et al., 2002. The second approach is reported in this paper. It is a theoretical model based on using Neural Network (NN). Instead of using keywords or reading from the first few lines from papers/articles, the present model emphasizes extracting "featured words" from an article. Obviously we propose to exclude prepositions, articles and so on, that is, English words like "of, the, are, so, therefore, " etc. from such a list. A neural model is taken with its nodes pre-assigned energies. Whenever a match is found with featured words and user-defined search words, the node is fired and jumps to a higher energy. This firing continues until the model attains a steady energy level and total energy is now calculated. Clearly, higher match will generate higher energy; so on the basis of total energy, a ranking is assigned to the article, indicating its degree of relevance to the user's interest. Another important feature of the proposed model is incorporating a semantic module to refine the search words; like finding association among search words, etc. In this manner, information retrieval can be improved markedly.
[ { "created": "Sat, 1 Jun 2002 15:10:56 GMT", "version": "v1" } ]
2007-05-23
[ [ "Das", "A.", "" ], [ "Marko", "M.", "" ], [ "Probst", "A.", "" ], [ "Porter", "M. A.", "" ], [ "Gershenson", "C.", "" ] ]
Search engines perform the task of retrieving information related to the user-supplied query words. This task has two parts; one is finding "featured words" which describe an article best and the other is finding a match among these words to user-defined search terms. There are two main independent approaches to achieve this task. The first one, using the concepts of semantics, has been implemented partially. For more details see another paper of Marko et al., 2002. The second approach is reported in this paper. It is a theoretical model based on using Neural Network (NN). Instead of using keywords or reading from the first few lines from papers/articles, the present model emphasizes extracting "featured words" from an article. Obviously we propose to exclude prepositions, articles and so on, that is, English words like "of, the, are, so, therefore, " etc. from such a list. A neural model is taken with its nodes pre-assigned energies. Whenever a match is found with featured words and user-defined search words, the node is fired and jumps to a higher energy. This firing continues until the model attains a steady energy level and total energy is now calculated. Clearly, higher match will generate higher energy; so on the basis of total energy, a ranking is assigned to the article, indicating its degree of relevance to the user's interest. Another important feature of the proposed model is incorporating a semantic module to refine the search words; like finding association among search words, etc. In this manner, information retrieval can be improved markedly.
2302.04761
Timo Schick
Timo Schick, Jane Dwivedi-Yu, Roberto Dess\`i, Roberta Raileanu, Maria Lomeli, Luke Zettlemoyer, Nicola Cancedda, Thomas Scialom
Toolformer: Language Models Can Teach Themselves to Use Tools
null
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Language models (LMs) exhibit remarkable abilities to solve new tasks from just a few examples or textual instructions, especially at scale. They also, paradoxically, struggle with basic functionality, such as arithmetic or factual lookup, where much simpler and smaller models excel. In this paper, we show that LMs can teach themselves to use external tools via simple APIs and achieve the best of both worlds. We introduce Toolformer, a model trained to decide which APIs to call, when to call them, what arguments to pass, and how to best incorporate the results into future token prediction. This is done in a self-supervised way, requiring nothing more than a handful of demonstrations for each API. We incorporate a range of tools, including a calculator, a Q\&A system, two different search engines, a translation system, and a calendar. Toolformer achieves substantially improved zero-shot performance across a variety of downstream tasks, often competitive with much larger models, without sacrificing its core language modeling abilities.
[ { "created": "Thu, 9 Feb 2023 16:49:57 GMT", "version": "v1" } ]
2023-02-10
[ [ "Schick", "Timo", "" ], [ "Dwivedi-Yu", "Jane", "" ], [ "Dessì", "Roberto", "" ], [ "Raileanu", "Roberta", "" ], [ "Lomeli", "Maria", "" ], [ "Zettlemoyer", "Luke", "" ], [ "Cancedda", "Nicola", "" ], [ "Scialom", "Thomas", "" ] ]
Language models (LMs) exhibit remarkable abilities to solve new tasks from just a few examples or textual instructions, especially at scale. They also, paradoxically, struggle with basic functionality, such as arithmetic or factual lookup, where much simpler and smaller models excel. In this paper, we show that LMs can teach themselves to use external tools via simple APIs and achieve the best of both worlds. We introduce Toolformer, a model trained to decide which APIs to call, when to call them, what arguments to pass, and how to best incorporate the results into future token prediction. This is done in a self-supervised way, requiring nothing more than a handful of demonstrations for each API. We incorporate a range of tools, including a calculator, a Q\&A system, two different search engines, a translation system, and a calendar. Toolformer achieves substantially improved zero-shot performance across a variety of downstream tasks, often competitive with much larger models, without sacrificing its core language modeling abilities.
1002.0855
Bartek Blaszczyszyn
Fran\c{c}ois Baccelli (INRIA Rocquencourt), Bartek Blaszczyszyn (INRIA Rocquencourt)
A New Phase Transition for Local Delays in MANETs
accepted for IEEE Infocom 2010
null
10.1109/INFCOM.2010.5462132
null
cs.NI math.PR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider a Mobile Ad-hoc Network (MANET) with transmitters located according to a Poisson point process in the Euclidean plane, the slotted Aloha Medium Access (MAC) protocol and the so-called outage scenario, where a successful transmission requires a Signal-to-Interference-and-Noise (SINR) larger than some threshold. We analyze the local delays in such a network, namely the number of time slots required for nodes to transmit a packet to their prescribed next-hop receivers. The analysis depends very much on the receiver scenario and on the variability of the fading. In most cases, each node has finite-mean geometric random delay and thus a positive next hop throughput. However, the spatial (or large population) averaging of these individual finite mean-delays leads to infinite values in several practical cases, including the Rayleigh fading and positive thermal noise case. In some cases it exhibits an interesting phase transition phenomenon where the spatial average is finite when certain model parameters are below a threshold and infinite above. We call this phenomenon the contention phase transition. We argue that the spatial average of the mean local delays is infinite primarily because of the outage logic, where one transmits full packets at time slots when the receiver is covered at the required SINR and where one wastes all the other time slots. This results in the "RESTART" mechanism, which in turn explains why we have infinite spatial average. Adaptive coding offers a nice way of breaking the outage/RESTART logic. We show examples where the average delays are finite in the adaptive coding case, whereas they are infinite in the outage case.
[ { "created": "Wed, 3 Feb 2010 21:18:20 GMT", "version": "v1" } ]
2010-10-28
[ [ "Baccelli", "François", "", "INRIA Rocquencourt" ], [ "Blaszczyszyn", "Bartek", "", "INRIA\n Rocquencourt" ] ]
We consider a Mobile Ad-hoc Network (MANET) with transmitters located according to a Poisson point process in the Euclidean plane, the slotted Aloha Medium Access (MAC) protocol and the so-called outage scenario, where a successful transmission requires a Signal-to-Interference-and-Noise (SINR) larger than some threshold. We analyze the local delays in such a network, namely the number of time slots required for nodes to transmit a packet to their prescribed next-hop receivers. The analysis depends very much on the receiver scenario and on the variability of the fading. In most cases, each node has finite-mean geometric random delay and thus a positive next hop throughput. However, the spatial (or large population) averaging of these individual finite mean-delays leads to infinite values in several practical cases, including the Rayleigh fading and positive thermal noise case. In some cases it exhibits an interesting phase transition phenomenon where the spatial average is finite when certain model parameters are below a threshold and infinite above. We call this phenomenon the contention phase transition. We argue that the spatial average of the mean local delays is infinite primarily because of the outage logic, where one transmits full packets at time slots when the receiver is covered at the required SINR and where one wastes all the other time slots. This results in the "RESTART" mechanism, which in turn explains why we have infinite spatial average. Adaptive coding offers a nice way of breaking the outage/RESTART logic. We show examples where the average delays are finite in the adaptive coding case, whereas they are infinite in the outage case.
1803.09576
Lucas Isenmann
Daniel Gon\c{c}alves and Lucas Isenmann
Dushnik-Miller dimension of TD-Delaunay complexes
A short version appears in the proceedings of EuroCG 2017
null
null
null
cs.DM math.CO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
TD-Delaunay graphs, where TD stands for triangular distance, are a variation of the classical Delaunay triangulations obtained from a specific convex distance function. Bonichon et al. noticed that every triangulation is the TD-Delaunay graph of a set of points in $\mathbb{R}^2$, and conversely every TD-Delaunay graph is planar. It seems natural to study the generalization of this property in higher dimensions. Such a generalization is obtained by defining an analogue of the triangular distance for $\mathbb{R}^d$. It is easy to see that TD-Delaunay complexes of $\mathbb{R}^{d-1}$ are of Dushnik-Miller dimension $d$. The converse holds for $d=2$ or $3$ and was conjectured independently by Mary and by Evans et al. to hold for larger $d$. Here we disprove the conjecture already for $d = 4$.
[ { "created": "Mon, 26 Mar 2018 13:28:54 GMT", "version": "v1" } ]
2018-03-28
[ [ "Gonçalves", "Daniel", "" ], [ "Isenmann", "Lucas", "" ] ]
TD-Delaunay graphs, where TD stands for triangular distance, are a variation of the classical Delaunay triangulations obtained from a specific convex distance function. Bonichon et al. noticed that every triangulation is the TD-Delaunay graph of a set of points in $\mathbb{R}^2$, and conversely every TD-Delaunay graph is planar. It seems natural to study the generalization of this property in higher dimensions. Such a generalization is obtained by defining an analogue of the triangular distance for $\mathbb{R}^d$. It is easy to see that TD-Delaunay complexes of $\mathbb{R}^{d-1}$ are of Dushnik-Miller dimension $d$. The converse holds for $d=2$ or $3$ and was conjectured independently by Mary and by Evans et al. to hold for larger $d$. Here we disprove the conjecture already for $d = 4$.
2405.14176
Ambar Pal
Ambar Pal and Ren\'e Vidal and Jeremias Sulam
Certified Robustness against Sparse Adversarial Perturbations via Data Localization
null
null
null
null
cs.LG cs.AI
http://creativecommons.org/licenses/by/4.0/
Recent work in adversarial robustness suggests that natural data distributions are localized, i.e., they place high probability in small volume regions of the input space, and that this property can be utilized for designing classifiers with improved robustness guarantees for $\ell_2$-bounded perturbations. Yet, it is still unclear if this observation holds true for more general metrics. In this work, we extend this theory to $\ell_0$-bounded adversarial perturbations, where the attacker can modify a few pixels of the image but is unrestricted in the magnitude of perturbation, and we show necessary and sufficient conditions for the existence of $\ell_0$-robust classifiers. Theoretical certification approaches in this regime essentially employ voting over a large ensemble of classifiers. Such procedures are combinatorial and expensive or require complicated certification techniques. In contrast, a simple classifier emerges from our theory, dubbed Box-NN, which naturally incorporates the geometry of the problem and improves upon the current state-of-the-art in certified robustness against sparse attacks for the MNIST and Fashion-MNIST datasets.
[ { "created": "Thu, 23 May 2024 05:02:00 GMT", "version": "v1" } ]
2024-05-24
[ [ "Pal", "Ambar", "" ], [ "Vidal", "René", "" ], [ "Sulam", "Jeremias", "" ] ]
Recent work in adversarial robustness suggests that natural data distributions are localized, i.e., they place high probability in small volume regions of the input space, and that this property can be utilized for designing classifiers with improved robustness guarantees for $\ell_2$-bounded perturbations. Yet, it is still unclear if this observation holds true for more general metrics. In this work, we extend this theory to $\ell_0$-bounded adversarial perturbations, where the attacker can modify a few pixels of the image but is unrestricted in the magnitude of perturbation, and we show necessary and sufficient conditions for the existence of $\ell_0$-robust classifiers. Theoretical certification approaches in this regime essentially employ voting over a large ensemble of classifiers. Such procedures are combinatorial and expensive or require complicated certification techniques. In contrast, a simple classifier emerges from our theory, dubbed Box-NN, which naturally incorporates the geometry of the problem and improves upon the current state-of-the-art in certified robustness against sparse attacks for the MNIST and Fashion-MNIST datasets.
2009.05183
Ye Tao
Ye Tao, Can Wang, Lina Yao, Weimin Li, Yonghong Yu
TRec: Sequential Recommender Based On Latent Item Trend Information
8 pages, accepted by IJCNN2020
null
null
null
cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recommendation systems play an important role in online web applications. Sequential recommenders further model users' short-term preferences by exploiting information from their latest user-item interaction history. Most sequential recommendation methods neglect the importance of ever-changing item popularity. We propose our model from the intuition that items with the most user interactions may have been popular in the past but could go out of fashion in recent days. To this end, this paper proposes a novel sequential recommendation approach dubbed TRec. TRec learns item trend information from implicit user interaction history and incorporates this trend information into next-item recommendation. A self-attention mechanism is then used to learn better node representations. Our model is trained via pairwise rank-based optimization. We conduct extensive experiments with seven baseline methods on four benchmark datasets. The empirical results show that our approach outperforms other state-of-the-art methods while maintaining a remarkably low runtime cost. Our study demonstrates the importance of item trend information in recommendation system design, and our method is also highly efficient, which makes it practical in real-world scenarios.
[ { "created": "Fri, 11 Sep 2020 00:31:39 GMT", "version": "v1" } ]
2020-09-14
[ [ "Tao", "Ye", "" ], [ "Wang", "Can", "" ], [ "Yao", "Lina", "" ], [ "Li", "Weimin", "" ], [ "Yu", "Yonghong", "" ] ]
Recommendation systems play an important role in online web applications. Sequential recommenders further model users' short-term preferences by exploiting information from their latest user-item interaction history. Most sequential recommendation methods neglect the importance of ever-changing item popularity. We propose our model from the intuition that items with the most user interactions may have been popular in the past but could go out of fashion in recent days. To this end, this paper proposes a novel sequential recommendation approach dubbed TRec. TRec learns item trend information from implicit user interaction history and incorporates this trend information into next-item recommendation. A self-attention mechanism is then used to learn better node representations. Our model is trained via pairwise rank-based optimization. We conduct extensive experiments with seven baseline methods on four benchmark datasets. The empirical results show that our approach outperforms other state-of-the-art methods while maintaining a remarkably low runtime cost. Our study demonstrates the importance of item trend information in recommendation system design, and our method is also highly efficient, which makes it practical in real-world scenarios.
1901.04908
Iuliia Kotseruba
John K. Tsotsos, Iuliia Kotseruba, Calden Wloka
Rapid Visual Categorization is not Guided by Early Salience-Based Selection
22 pages, 9 figures
null
10.1371/journal.pone.0224306
null
cs.CV q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The current dominant visual processing paradigm in both human and machine research is the feedforward, layered hierarchy of neural-like processing elements. Within this paradigm, visual saliency is seen by many to have a specific role, namely that of early selection. Early selection is thought to enable very fast visual performance by limiting processing to only the most salient candidate portions of an image. This strategy has led to a plethora of saliency algorithms that have indeed improved processing time efficiency in machine algorithms, which in turn have strengthened the suggestion that human vision also employs a similar early selection strategy. However, at least one set of critical tests of this idea has never been performed with respect to the role of early selection in human vision. How would the best of the current saliency models perform on the stimuli used by experimentalists who first provided evidence for this visual processing paradigm? Would the algorithms really provide correct candidate sub-images to enable fast categorization on those same images? Do humans really need this early selection for their impressive performance? Here, we report on a new series of tests of these questions whose results suggest that it is quite unlikely that such an early selection process has any role in human rapid visual categorization.
[ { "created": "Tue, 15 Jan 2019 16:22:24 GMT", "version": "v1" }, { "created": "Fri, 20 Dec 2019 14:43:38 GMT", "version": "v2" }, { "created": "Thu, 30 Jan 2020 20:58:35 GMT", "version": "v3" } ]
2020-02-03
[ [ "Tsotsos", "John K.", "" ], [ "Kotseruba", "Iuliia", "" ], [ "Wloka", "Calden", "" ] ]
The current dominant visual processing paradigm in both human and machine research is the feedforward, layered hierarchy of neural-like processing elements. Within this paradigm, visual saliency is seen by many to have a specific role, namely that of early selection. Early selection is thought to enable very fast visual performance by limiting processing to only the most salient candidate portions of an image. This strategy has led to a plethora of saliency algorithms that have indeed improved processing time efficiency in machine algorithms, which in turn have strengthened the suggestion that human vision also employs a similar early selection strategy. However, at least one set of critical tests of this idea has never been performed with respect to the role of early selection in human vision. How would the best of the current saliency models perform on the stimuli used by experimentalists who first provided evidence for this visual processing paradigm? Would the algorithms really provide correct candidate sub-images to enable fast categorization on those same images? Do humans really need this early selection for their impressive performance? Here, we report on a new series of tests of these questions whose results suggest that it is quite unlikely that such an early selection process has any role in human rapid visual categorization.
1809.04170
S\'ebastien Philippe
S\'ebastien Philippe, Alexander Glaser, and Edward W. Felten
A Cryptographic Escrow for Treaty Declarations and Step-by-Step Verification
14 pages, 4 figures
Science & Global Security, pp.1-12 (2019)
10.1080/08929882.2019.1573483
null
cs.CR cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The verification of arms-control and disarmament agreements requires states to provide declarations, including information on sensitive military sites and assets. There are important cases, however, where negotiations of these agreements are impeded because states are reluctant to provide any such data out of concern about prematurely handing over militarily significant information. To address this challenge, we present a cryptographic escrow that allows a state to make a complete declaration of sites and assets at the outset and commit to its content, but only reveal the sensitive information therein sequentially. Combined with an inspection regime, our escrow allows for step-by-step verification of the correctness and completeness of the initial declaration so that the information release and inspections keep pace with parallel diplomatic and political processes. We apply this approach to the possible denuclearization of North Korea. Such an approach can, however, be applied to any agreement requiring the sharing of sensitive information.
[ { "created": "Tue, 11 Sep 2018 21:17:18 GMT", "version": "v1" } ]
2019-05-13
[ [ "Philippe", "Sébastien", "" ], [ "Glaser", "Alexander", "" ], [ "Felten", "Edward W.", "" ] ]
The verification of arms-control and disarmament agreements requires states to provide declarations, including information on sensitive military sites and assets. There are important cases, however, where negotiations of these agreements are impeded because states are reluctant to provide any such data out of concern about prematurely handing over militarily significant information. To address this challenge, we present a cryptographic escrow that allows a state to make a complete declaration of sites and assets at the outset and commit to its content, but only reveal the sensitive information therein sequentially. Combined with an inspection regime, our escrow allows for step-by-step verification of the correctness and completeness of the initial declaration so that the information release and inspections keep pace with parallel diplomatic and political processes. We apply this approach to the possible denuclearization of North Korea. Such an approach can, however, be applied to any agreement requiring the sharing of sensitive information.
2011.07222
Risul Islam
Risul Islam, Md Omar Faruk Rokon, Ahmad Darki, Michalis Faloutsos
HackerScope: The Dynamics of a Massive Hacker Online Ecosystem
8 pages, 7 figures, and 4 tables. In press of ASONAM'20
null
null
null
cs.CR cs.IR
http://creativecommons.org/publicdomain/zero/1.0/
Authors of malicious software are not hiding as much as one would assume: they have a visible online footprint. Apart from online forums, this footprint appears in software development platforms, where authors create publicly-accessible malware repositories to share and collaborate. With the exception of a few recent efforts, the existence and the dynamics of this community has received surprisingly limited attention. The goal of our work is to analyze this ecosystem of hackers in order to: (a) understand their collaborative patterns, and (b) identify and profile its most influential authors. We develop HackerScope, a systematic approach for analyzing the dynamics of this hacker ecosystem. Leveraging our targeted data collection, we conduct an extensive study of 7389 authors of malware repositories on GitHub, which we combine with their activity on four security forums. From a modeling point of view, we study the ecosystem using three network representations: (a) the author-author network, (b) the author-repository network, and (c) cross-platform egonets. Our analysis leads to the following key observations: (a) the ecosystem is growing at an accelerating rate as the number of new malware authors per year triples every 2 years, (b) it is highly collaborative, more so than the rest of GitHub authors, and (c) it includes influential and professional hackers. We find 30 authors maintain an online "brand" across GitHub and our security forums. Our study is a significant step towards using public online information for understanding the malicious hacker community.
[ { "created": "Sat, 14 Nov 2020 05:19:54 GMT", "version": "v1" } ]
2020-11-17
[ [ "Islam", "Risul", "" ], [ "Rokon", "Md Omar Faruk", "" ], [ "Darki", "Ahmad", "" ], [ "Faloutsos", "Michalis", "" ] ]
Authors of malicious software are not hiding as much as one would assume: they have a visible online footprint. Apart from online forums, this footprint appears in software development platforms, where authors create publicly-accessible malware repositories to share and collaborate. With the exception of a few recent efforts, the existence and the dynamics of this community has received surprisingly limited attention. The goal of our work is to analyze this ecosystem of hackers in order to: (a) understand their collaborative patterns, and (b) identify and profile its most influential authors. We develop HackerScope, a systematic approach for analyzing the dynamics of this hacker ecosystem. Leveraging our targeted data collection, we conduct an extensive study of 7389 authors of malware repositories on GitHub, which we combine with their activity on four security forums. From a modeling point of view, we study the ecosystem using three network representations: (a) the author-author network, (b) the author-repository network, and (c) cross-platform egonets. Our analysis leads to the following key observations: (a) the ecosystem is growing at an accelerating rate as the number of new malware authors per year triples every 2 years, (b) it is highly collaborative, more so than the rest of GitHub authors, and (c) it includes influential and professional hackers. We find 30 authors maintain an online "brand" across GitHub and our security forums. Our study is a significant step towards using public online information for understanding the malicious hacker community.
2111.11955
Priyanka Meel
Chahat Raj, Priyanka Meel
People Lie, Actions Don't! Modeling Infodemic Proliferation Predictors among Social Media Users
32 pages
null
null
null
cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Social media is interactive, and interaction brings misinformation. With the growing amount of user-generated data, fake news on online platforms has become much more frequent since the arrival of social networks. Now and then, an event occurs, becomes the topic of discussion, and generates and propagates false information. Existing literature studying fake news primarily elaborates on fake news classification models. Approaches exploring the characteristics of fake news and ways to distinguish it from real news are minimal. Few studies have focused on statistical testing and generating new factor discoveries. This study assumes fourteen hypotheses to identify factors exhibiting a relationship with fake news. We perform experiments on two real-world COVID-19 datasets using qualitative and quantitative testing methods. This study concludes that sentiment polarity and gender can significantly identify fake news. Dependence on the presence of visual media is, however, inconclusive. Additionally, Twitter-specific factors like followers count, friends count, and retweet count differ significantly between fake and real news, though the contribution of status count and favorites count is disputed. This study identifies practical factors to be jointly utilized in the development of fake news detection algorithms.
[ { "created": "Tue, 23 Nov 2021 15:48:54 GMT", "version": "v1" } ]
2021-11-24
[ [ "Raj", "Chahat", "" ], [ "Meel", "Priyanka", "" ] ]
Social media is interactive, and interaction brings misinformation. With the growing amount of user-generated data, fake news on online platforms has become much more frequent since the arrival of social networks. Now and then, an event occurs, becomes the topic of discussion, and generates and propagates false information. Existing literature studying fake news primarily elaborates on fake news classification models. Approaches exploring the characteristics of fake news and ways to distinguish it from real news are minimal. Few studies have focused on statistical testing and generating new factor discoveries. This study assumes fourteen hypotheses to identify factors exhibiting a relationship with fake news. We perform experiments on two real-world COVID-19 datasets using qualitative and quantitative testing methods. This study concludes that sentiment polarity and gender can significantly identify fake news. Dependence on the presence of visual media is, however, inconclusive. Additionally, Twitter-specific factors like followers count, friends count, and retweet count differ significantly between fake and real news, though the contribution of status count and favorites count is disputed. This study identifies practical factors to be jointly utilized in the development of fake news detection algorithms.
1511.07922
Yi Zhang
Yi Zhang
Contraction of Ore Ideals with Applications
null
null
null
null
cs.SC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Ore operators form a common algebraic abstraction of linear ordinary differential and recurrence equations. Given an Ore operator $L$ with polynomial coefficients in $x$, it generates a left ideal $I$ in the Ore algebra over the field $\mathbf{k}(x)$ of rational functions. We present an algorithm for computing a basis of the contraction ideal of $I$ in the Ore algebra over the ring $R[x]$ of polynomials, where $R$ may be either $\mathbf{k}$ or a domain with $\mathbf{k}$ as its fraction field. This algorithm is based on recent work on desingularization for Ore operators by Chen, Jaroschek, Kauers and Singer. Using a basis of the contraction ideal, we compute a completely desingularized operator for $L$ whose leading coefficient not only has minimal degree in $x$ but also has minimal content. Completely desingularized operators have interesting applications such as certifying integer sequences and checking special cases of a conjecture of Krattenthaler.
[ { "created": "Wed, 25 Nov 2015 00:15:50 GMT", "version": "v1" }, { "created": "Thu, 26 Nov 2015 21:08:36 GMT", "version": "v2" }, { "created": "Wed, 16 Dec 2015 06:54:48 GMT", "version": "v3" }, { "created": "Thu, 17 Dec 2015 08:54:06 GMT", "version": "v4" }, { "created": "Wed, 23 Dec 2015 12:07:20 GMT", "version": "v5" }, { "created": "Sun, 3 Jan 2016 02:38:59 GMT", "version": "v6" }, { "created": "Thu, 28 Jan 2016 02:47:23 GMT", "version": "v7" }, { "created": "Fri, 29 Jan 2016 03:02:20 GMT", "version": "v8" } ]
2016-02-01
[ [ "Zhang", "Yi", "" ] ]
Ore operators form a common algebraic abstraction of linear ordinary differential and recurrence equations. Given an Ore operator $L$ with polynomial coefficients in $x$, it generates a left ideal $I$ in the Ore algebra over the field $\mathbf{k}(x)$ of rational functions. We present an algorithm for computing a basis of the contraction ideal of $I$ in the Ore algebra over the ring $R[x]$ of polynomials, where $R$ may be either $\mathbf{k}$ or a domain with $\mathbf{k}$ as its fraction field. This algorithm is based on recent work on desingularization for Ore operators by Chen, Jaroschek, Kauers and Singer. Using a basis of the contraction ideal, we compute a completely desingularized operator for $L$ whose leading coefficient not only has minimal degree in $x$ but also has minimal content. Completely desingularized operators have interesting applications such as certifying integer sequences and checking special cases of a conjecture of Krattenthaler.
1704.03118
Neil Zhenqiang Gong
Neil Zhenqiang Gong, Altay Ozen, Yu Wu, Xiaoyu Cao, Richard Shin, Dawn Song, Hongxia Jin and Xuan Bao
PIANO: Proximity-based User Authentication on Voice-Powered Internet-of-Things Devices
To appear in ICDCS'17
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Voice is envisioned to be a popular way for humans to interact with Internet-of-Things (IoT) devices. We propose a proximity-based user authentication method (called PIANO) for access control on such voice-powered IoT devices. PIANO leverages the built-in speaker, microphone, and Bluetooth that voice-powered IoT devices often already have. Specifically, we assume that a user carries a personal voice-powered device (e.g., smartphone, smartwatch, or smartglass), which serves as the user's identity. When another voice-powered IoT device of the user requires authentication, PIANO estimates the distance between the two devices by playing and detecting certain acoustic signals; PIANO grants access if the estimated distance is no larger than a user-selected threshold. We implemented a proof-of-concept prototype of PIANO. Through theoretical and empirical evaluations, we find that PIANO is secure, reliable, personalizable, and efficient.
[ { "created": "Tue, 11 Apr 2017 02:27:31 GMT", "version": "v1" } ]
2017-04-12
[ [ "Gong", "Neil Zhenqiang", "" ], [ "Ozen", "Altay", "" ], [ "Wu", "Yu", "" ], [ "Cao", "Xiaoyu", "" ], [ "Shin", "Richard", "" ], [ "Song", "Dawn", "" ], [ "Jin", "Hongxia", "" ], [ "Bao", "Xuan", "" ] ]
Voice is envisioned to be a popular way for humans to interact with Internet-of-Things (IoT) devices. We propose a proximity-based user authentication method (called PIANO) for access control on such voice-powered IoT devices. PIANO leverages the built-in speaker, microphone, and Bluetooth that voice-powered IoT devices often already have. Specifically, we assume that a user carries a personal voice-powered device (e.g., smartphone, smartwatch, or smartglass), which serves as the user's identity. When another voice-powered IoT device of the user requires authentication, PIANO estimates the distance between the two devices by playing and detecting certain acoustic signals; PIANO grants access if the estimated distance is no larger than a user-selected threshold. We implemented a proof-of-concept prototype of PIANO. Through theoretical and empirical evaluations, we find that PIANO is secure, reliable, personalizable, and efficient.
2404.12555
Tiffany Nguyen
Tiffany T. Nguyen, Cinthya Jauregui, Sarah H. Sallee, Mohan R. Chandrasekar, Liam A'Hearn, Dominic J. Woetzel, Pinak Paliwal, Madison Nguyen, Isabella `Amne Gomez, Xinqi Zhang, Lee M. Panich, Danielle M. Heitmuller, Amy Lueck, Kai Lukoff
Sociotechnical Considerations for SLAM Anchors in Location-Based AR
Presented at CHI 2024 (arXiv:2404.05889)
null
null
ARSJ/2024/13
cs.HC
http://creativecommons.org/licenses/by/4.0/
In this position paper, we explore the power of storytelling and its connection to place through the use of Augmented Reality (AR) technology, particularly within the context of Th\'amien Ohlone history on the Santa Clara University campus. To do this, we utilized SLAM and 8th Wall to create virtual, location-based experiences that geolocate tribal stories at present-day sites, showcase the living culture of the Th\'amien Ohlone tribe, and advocate for physical markers that could exist to recognize their story. When doing so, we made sure to select locations that added to the story each stop tells to serve as our anchors. Our research then investigates both the social and technical considerations involved in selecting anchors for AR experiences, using the Th\'amien Ohlone AR Tour as a case study.
[ { "created": "Fri, 19 Apr 2024 00:19:43 GMT", "version": "v1" } ]
2024-04-22
[ [ "Nguyen", "Tiffany T.", "" ], [ "Jauregui", "Cinthya", "" ], [ "Sallee", "Sarah H.", "" ], [ "Chandrasekar", "Mohan R.", "" ], [ "A'Hearn", "Liam", "" ], [ "Woetzel", "Dominic J.", "" ], [ "Paliwal", "Pinak", "" ], [ "Nguyen", "Madison", "" ], [ "Gomez", "Isabella `Amne", "" ], [ "Zhang", "Xinqi", "" ], [ "Panich", "Lee M.", "" ], [ "Heitmuller", "Danielle M.", "" ], [ "Lueck", "Amy", "" ], [ "Lukoff", "Kai", "" ] ]
In this position paper, we explore the power of storytelling and its connection to place through the use of Augmented Reality (AR) technology, particularly within the context of Th\'amien Ohlone history on the Santa Clara University campus. To do this, we utilized SLAM and 8th Wall to create virtual, location-based experiences that geolocate tribal stories at present-day sites, showcase the living culture of the Th\'amien Ohlone tribe, and advocate for physical markers that could exist to recognize their story. When doing so, we made sure to select locations that added to the story each stop tells to serve as our anchors. Our research then investigates both the social and technical considerations involved in selecting anchors for AR experiences, using the Th\'amien Ohlone AR Tour as a case study.
2204.09610
Noah Daniels
Polina Shpilker, John Freeman, Hailey McKelvie, Jill Ashey, Jay-Miguel Fonticella, Hollie Putnam, Jane Greenberg, Lenore J. Cowen, Alva Couch, Noah M. Daniels
MEDFORD: A human and machine readable metadata markup language
10 pages, no figures
null
null
null
cs.DL cs.DB q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Reproducibility of research is essential for science. However, in the way modern computational biology research is done, it is easy to lose track of small, but extremely critical, details. Key details, such as the specific version of the software used or the iteration of a genome, can easily be lost in the shuffle, or perhaps not noted at all. Much work is being done on the database and storage side of things, ensuring that there exists a space to store experiment-specific details, but current mechanisms for recording details are cumbersome for scientists to use. We propose a new metadata description language, named MEDFORD, in which scientists can record all details relevant to their research. Human-readable, easily editable, and templatable, MEDFORD serves as a collection point for all notes that a researcher could find relevant to their research, be it for internal use or for future replication. MEDFORD has been applied to coral research, documenting research from RNA-seq analyses to photo collections.
[ { "created": "Wed, 20 Apr 2022 16:45:03 GMT", "version": "v1" }, { "created": "Thu, 16 Jun 2022 16:46:57 GMT", "version": "v2" } ]
2022-06-17
[ [ "Shpilker", "Polina", "" ], [ "Freeman", "John", "" ], [ "McKelvie", "Hailey", "" ], [ "Ashey", "Jill", "" ], [ "Fonticella", "Jay-Miguel", "" ], [ "Putnam", "Hollie", "" ], [ "Greenberg", "Jane", "" ], [ "Cowen", "Lenore J.", "" ], [ "Couch", "Alva", "" ], [ "Daniels", "Noah M.", "" ] ]
Reproducibility of research is essential for science. However, in the way modern computational biology research is done, it is easy to lose track of small, but extremely critical, details. Key details, such as the specific version of the software used or the iteration of a genome, can easily be lost in the shuffle, or perhaps not noted at all. Much work is being done on the database and storage side of things, ensuring that there exists a space to store experiment-specific details, but current mechanisms for recording details are cumbersome for scientists to use. We propose a new metadata description language, named MEDFORD, in which scientists can record all details relevant to their research. Human-readable, easily editable, and templatable, MEDFORD serves as a collection point for all notes that a researcher could find relevant to their research, be it for internal use or for future replication. MEDFORD has been applied to coral research, documenting research from RNA-seq analyses to photo collections.
1203.1212
Luciano Panek
Luciano Panek and Marcelo Firer
Codes Satisfying the Chain Condition with a Poset Weights
null
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we extend the concept of generalized Wei weights to poset-weight codes and show that every linear code C satisfies the chain condition if the support of C is a totally ordered subposet.
[ { "created": "Tue, 6 Mar 2012 14:43:55 GMT", "version": "v1" } ]
2012-03-07
[ [ "Panek", "Luciano", "" ], [ "Firer", "Marcelo", "" ] ]
In this paper we extend the concept of generalized Wei weights to poset-weight codes and show that every linear code C satisfies the chain condition if the support of C is a totally ordered subposet.
1904.07664
Mika\"el Rabie
Carole Delporte-Gallet, Hugues Fauconnier, Pierre Fraigniaud, Mika\"el Rabie
Distributed Computing in the Asynchronous LOCAL model
null
null
null
null
cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The LOCAL model is among the main models for studying locality in the framework of distributed network computing. This model is however subject to pertinent criticisms, including the facts that all nodes wake up simultaneously, perform in lock steps, and are failure-free. We show that relaxing these hypotheses to some extent does not hurt local computing. In particular, we show that, for any construction task $T$ associated to a locally checkable labeling (LCL), if $T$ is solvable in $t$ rounds in the LOCAL model, then $T$ remains solvable in $O(t)$ rounds in the asynchronous LOCAL model. This improves the result by Casta\~neda et al. [SSS 2016], which was restricted to 3-coloring the rings. More generally, the main contribution of this paper is to show that, perhaps surprisingly, asynchrony and failures in the computations do not restrict the power of the LOCAL model, as long as the communications remain synchronous and failure-free.
[ { "created": "Tue, 16 Apr 2019 13:40:25 GMT", "version": "v1" }, { "created": "Mon, 20 May 2019 14:05:49 GMT", "version": "v2" }, { "created": "Fri, 6 Dec 2019 17:27:16 GMT", "version": "v3" } ]
2019-12-09
[ [ "Delporte-Gallet", "Carole", "" ], [ "Fauconnier", "Hugues", "" ], [ "Fraigniaud", "Pierre", "" ], [ "Rabie", "Mikaël", "" ] ]
The LOCAL model is among the main models for studying locality in the framework of distributed network computing. This model is however subject to pertinent criticisms, including the facts that all nodes wake up simultaneously, perform in lock steps, and are failure-free. We show that relaxing these hypotheses to some extent does not hurt local computing. In particular, we show that, for any construction task $T$ associated to a locally checkable labeling (LCL), if $T$ is solvable in $t$ rounds in the LOCAL model, then $T$ remains solvable in $O(t)$ rounds in the asynchronous LOCAL model. This improves the result by Casta\~neda et al. [SSS 2016], which was restricted to 3-coloring the rings. More generally, the main contribution of this paper is to show that, perhaps surprisingly, asynchrony and failures in the computations do not restrict the power of the LOCAL model, as long as the communications remain synchronous and failure-free.
2108.04527
Zan Gao
Zan Gao, Hongwei Wei, Weili Guan, Weizhi Nie, Meng Liu, Meng Wang
Multigranular Visual-Semantic Embedding for Cloth-Changing Person Re-identification
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-nd/4.0/
Person reidentification (ReID) is a very hot research topic in machine learning and computer vision, and many person ReID approaches have been proposed; however, most of these methods assume that the same person has the same clothes within a short time interval, and thus their visual appearance must be similar. However, in an actual surveillance environment, a given person has a great probability of changing clothes after a long time span, and they also often take different personal belongings with them. When the existing person ReID methods are applied in this type of case, almost all of them fail. To date, only a few works have focused on the cloth-changing person ReID task, but since it is very difficult to extract generalized and robust features for representing people with different clothes, their performances need to be improved. Moreover, visual-semantic information is often ignored. To solve these issues, in this work, a novel multigranular visual-semantic embedding algorithm (MVSE) is proposed for cloth-changing person ReID, where visual semantic information and human attributes are embedded into the network, and the generalized features of human appearance can be well learned to effectively solve the problem of clothing changes. Specifically, to fully represent a person with clothing changes, a multigranular feature representation scheme (MGR) is employed to focus on the unchanged part of the human, and then a cloth desensitization network (CDN) is designed to improve the feature robustness of the approach for the person with different clothing, where different high-level human attributes are fully utilized. Moreover, to further solve the issue of pose changes and occlusion under different camera perspectives, a partially semantically aligned network (PSA) is proposed to obtain the visual-semantic information that is used to align the human attributes.
[ { "created": "Tue, 10 Aug 2021 09:14:44 GMT", "version": "v1" } ]
2021-08-11
[ [ "Gao", "Zan", "" ], [ "Wei", "Hongwei", "" ], [ "Guan", "Weili", "" ], [ "Nie", "Weizhi", "" ], [ "Liu", "Meng", "" ], [ "Wang", "Meng", "" ] ]
Person reidentification (ReID) is a very hot research topic in machine learning and computer vision, and many person ReID approaches have been proposed; however, most of these methods assume that the same person has the same clothes within a short time interval, and thus their visual appearance must be similar. However, in an actual surveillance environment, a given person has a great probability of changing clothes after a long time span, and they also often take different personal belongings with them. When the existing person ReID methods are applied in this type of case, almost all of them fail. To date, only a few works have focused on the cloth-changing person ReID task, but since it is very difficult to extract generalized and robust features for representing people with different clothes, their performances need to be improved. Moreover, visual-semantic information is often ignored. To solve these issues, in this work, a novel multigranular visual-semantic embedding algorithm (MVSE) is proposed for cloth-changing person ReID, where visual semantic information and human attributes are embedded into the network, and the generalized features of human appearance can be well learned to effectively solve the problem of clothing changes. Specifically, to fully represent a person with clothing changes, a multigranular feature representation scheme (MGR) is employed to focus on the unchanged part of the human, and then a cloth desensitization network (CDN) is designed to improve the feature robustness of the approach for the person with different clothing, where different high-level human attributes are fully utilized. Moreover, to further solve the issue of pose changes and occlusion under different camera perspectives, a partially semantically aligned network (PSA) is proposed to obtain the visual-semantic information that is used to align the human attributes.
1707.01736
Emiel van Miltenburg
Emiel van Miltenburg, Desmond Elliott, Piek Vossen
Cross-linguistic differences and similarities in image descriptions
Accepted for INLG 2017, Santiago de Compostela, Spain, 4-7 September, 2017. Camera-ready version. See the ACL anthology for full bibliographic information
null
null
null
cs.CL cs.AI cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Automatic image description systems are commonly trained and evaluated on large image description datasets. Recently, researchers have started to collect such datasets for languages other than English. An unexplored question is how different these datasets are from English and, if there are any differences, what causes them to differ. This paper provides a cross-linguistic comparison of Dutch, English, and German image descriptions. We find that these descriptions are similar in many respects, but the familiarity of crowd workers with the subjects of the images has a noticeable influence on description specificity.
[ { "created": "Thu, 6 Jul 2017 11:53:41 GMT", "version": "v1" }, { "created": "Sun, 13 Aug 2017 10:18:44 GMT", "version": "v2" } ]
2017-08-15
[ [ "van Miltenburg", "Emiel", "" ], [ "Elliott", "Desmond", "" ], [ "Vossen", "Piek", "" ] ]
Automatic image description systems are commonly trained and evaluated on large image description datasets. Recently, researchers have started to collect such datasets for languages other than English. An unexplored question is how different these datasets are from English and, if there are any differences, what causes them to differ. This paper provides a cross-linguistic comparison of Dutch, English, and German image descriptions. We find that these descriptions are similar in many respects, but the familiarity of crowd workers with the subjects of the images has a noticeable influence on description specificity.
1911.04820
Qiang Ren
Qiang Ren
Grouping Capsules Based Different Types
null
null
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The capsule network was introduced as a new neural network architecture that encodes features as capsules to overcome the lack of equivariance in convolutional neural networks. It uses a dynamic routing algorithm to train parameters across capsule layers, but this algorithm needs improvement. In this paper, we propose a novel capsule network architecture and discuss the effect of the initialization of the coupling coefficient $c_{ij}$ on the model. First, we analyze how the initial value of $c_{ij}$ changes as the dynamic routing algorithm iterates: the larger the initial value of $c_{ij}$, the better the model performs. Then, we propose an improvement that trains different types of capsules by grouping capsules according to their type, which adjusts the initial value of $c_{ij}$ to a more suitable level. We experimented with our improvements on several computer vision datasets and achieved better results than the original capsule network.
[ { "created": "Tue, 12 Nov 2019 12:39:20 GMT", "version": "v1" } ]
2019-11-13
[ [ "Ren", "Qiang", "" ] ]
The capsule network was introduced as a new neural network architecture that encodes features as capsules to overcome the lack of equivariance in convolutional neural networks. It uses a dynamic routing algorithm to train parameters across capsule layers, but this algorithm needs improvement. In this paper, we propose a novel capsule network architecture and discuss the effect of the initialization of the coupling coefficient $c_{ij}$ on the model. First, we analyze how the initial value of $c_{ij}$ changes as the dynamic routing algorithm iterates: the larger the initial value of $c_{ij}$, the better the model performs. Then, we propose an improvement that trains different types of capsules by grouping capsules according to their type, which adjusts the initial value of $c_{ij}$ to a more suitable level. We experimented with our improvements on several computer vision datasets and achieved better results than the original capsule network.
2402.01488
Max Peter Ronecker
Max Peter Ronecker, Markus Schratter, Lukas Kuschnig and Daniel Watzenig
Dynamic Occupancy Grids for Object Detection: A Radar-Centric Approach
Accepted at 2024 IEEE International Conference on Robotics and Automation (ICRA). Second version: corrected typos
null
null
null
cs.RO
http://creativecommons.org/licenses/by/4.0/
Dynamic Occupancy Grid Mapping is a technique used to generate a local map of the environment containing both static and dynamic information. Typically, these maps are primarily generated using lidar measurements. However, with improvements in radar sensing, resulting in better accuracy and higher resolution, radar is emerging as a viable alternative to lidar as the primary sensor for mapping. In this paper, we propose a radar-centric dynamic occupancy grid mapping algorithm with adaptations to the state computation, inverse sensor model, and field-of-view computation tailored to the specifics of radar measurements. We extensively evaluate our approach using real data to demonstrate its effectiveness and establish the first benchmark for radar-based dynamic occupancy grid mapping using the publicly available Radarscenes dataset.
[ { "created": "Fri, 2 Feb 2024 15:14:47 GMT", "version": "v1" }, { "created": "Wed, 22 May 2024 02:55:49 GMT", "version": "v2" } ]
2024-05-24
[ [ "Ronecker", "Max Peter", "" ], [ "Schratter", "Markus", "" ], [ "Kuschnig", "Lukas", "" ], [ "Watzenig", "Daniel", "" ] ]
Dynamic Occupancy Grid Mapping is a technique used to generate a local map of the environment containing both static and dynamic information. Typically, these maps are primarily generated using lidar measurements. However, with improvements in radar sensing, resulting in better accuracy and higher resolution, radar is emerging as a viable alternative to lidar as the primary sensor for mapping. In this paper, we propose a radar-centric dynamic occupancy grid mapping algorithm with adaptations to the state computation, inverse sensor model, and field-of-view computation tailored to the specifics of radar measurements. We extensively evaluate our approach using real data to demonstrate its effectiveness and establish the first benchmark for radar-based dynamic occupancy grid mapping using the publicly available Radarscenes dataset.
1205.1860
Mohandeep Sharma
Mohandeep Sharma, Dilip Kumar
Wishbone bus Architecture - A Survey and Comparison
18 pages
International journal of VLSI Design & Communication Systems (VLSICS) Vol.3, No.2 April 2012, 107-124
10.5121/vlsic.2012.3210
null
cs.AR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The performance of an on-chip interconnection architecture used for communication between IP cores depends on the efficiency of its bus architecture. Any bus architecture having advantages of faster bus clock speed, extra data transfer cycle, improved bus width and throughput is highly desirable for a low cost, reduced time-to-market and efficient System-on-Chip (SoC). This paper presents a survey of the WISHBONE bus architecture and its comparison with three other on-chip bus architectures, viz. Advanced Microcontroller Bus Architecture (AMBA) by ARM, CoreConnect by IBM and Avalon by Altera. The WISHBONE Bus Architecture by Silicore Corporation appears to be gaining an upper edge over the other three bus architecture types because of its special performance parameters, such as the use of a flexible arbitration scheme and an additional data transfer cycle (Read-Modify-Write cycle). Moreover, its IP Cores are available free for use, requiring neither any registration nor any agreement or license.
[ { "created": "Wed, 9 May 2012 03:18:40 GMT", "version": "v1" } ]
2012-05-10
[ [ "Sharma", "Mohandeep", "" ], [ "Kumar", "Dilip", "" ] ]
The performance of an on-chip interconnection architecture used for communication between IP cores depends on the efficiency of its bus architecture. Any bus architecture having advantages of faster bus clock speed, extra data transfer cycle, improved bus width and throughput is highly desirable for a low cost, reduced time-to-market and efficient System-on-Chip (SoC). This paper presents a survey of the WISHBONE bus architecture and its comparison with three other on-chip bus architectures, viz. Advanced Microcontroller Bus Architecture (AMBA) by ARM, CoreConnect by IBM and Avalon by Altera. The WISHBONE Bus Architecture by Silicore Corporation appears to be gaining an upper edge over the other three bus architecture types because of its special performance parameters, such as the use of a flexible arbitration scheme and an additional data transfer cycle (Read-Modify-Write cycle). Moreover, its IP Cores are available free for use, requiring neither any registration nor any agreement or license.
2306.13935
Abhishek Ghose
Emma Thuong Nguyen, Abhishek Ghose
Are Good Explainers Secretly Human-in-the-Loop Active Learners?
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Explainable AI (XAI) techniques have become popular for multiple use-cases in the past few years. Here we consider its use in studying model predictions to gather additional training data. We argue that this is equivalent to Active Learning, where the query strategy involves a human-in-the-loop. We provide a mathematical approximation for the role of the human, and present a general formalization of the end-to-end workflow. This enables us to rigorously compare this use with standard Active Learning algorithms, while allowing for extensions to the workflow. An added benefit is that their utility can be assessed via simulation instead of conducting expensive user-studies. We also present some initial promising results.
[ { "created": "Sat, 24 Jun 2023 10:50:42 GMT", "version": "v1" }, { "created": "Sat, 15 Jul 2023 14:03:55 GMT", "version": "v2" }, { "created": "Tue, 16 Apr 2024 16:33:07 GMT", "version": "v3" } ]
2024-04-17
[ [ "Nguyen", "Emma Thuong", "" ], [ "Ghose", "Abhishek", "" ] ]
Explainable AI (XAI) techniques have become popular for multiple use-cases in the past few years. Here we consider its use in studying model predictions to gather additional training data. We argue that this is equivalent to Active Learning, where the query strategy involves a human-in-the-loop. We provide a mathematical approximation for the role of the human, and present a general formalization of the end-to-end workflow. This enables us to rigorously compare this use with standard Active Learning algorithms, while allowing for extensions to the workflow. An added benefit is that their utility can be assessed via simulation instead of conducting expensive user-studies. We also present some initial promising results.
2211.04259
Istv\'an Andr\'as Seres
Domokos Mikl\'os Kelen and Istv\'an Andr\'as Seres
Towards Measuring the Traceability of Cryptocurrencies
Pre-print. Various major updates since the previous version. Added more applications, explanation, and clarification
null
null
null
cs.CR
http://creativecommons.org/licenses/by/4.0/
Cryptocurrencies aim to replicate physical cash in the digital realm while removing centralized and trusted intermediaries. Decentralization is achieved by the blockchain, a permanent public ledger that contains a record of every transaction. The public ledger ensures transparency, which enables public verifiability but harms untraceability, fungibility, and anonymity. In the last decade, cryptocurrencies attracted millions of users, with their total market cap reaching approximately three trillion USD at its peak. However, their anonymity guarantees are poorly understood and plagued by widespread misbeliefs. Indeed, previous notions of privacy, anonymity, and traceability for cryptocurrencies are either non-quantitative or inapplicable, e.g., computationally hard to measure. In this work, we put forward a formal framework to measure the (un)traceability and anonymity of cryptocurrencies, allowing us to quantitatively reason about the mixing characteristics of cryptocurrencies and the privacy-enhancing technologies built on top of them. Our methods apply absorbing Markov chains combined with Shannon entropy. To the best of our knowledge, our work provides the first practical, efficient, and probabilistic measure to assess the traceability of cryptocurrencies quantitatively, which also generalizes to entire cryptocurrency transaction graphs. We implement and extensively evaluate our proposed traceability measure on several cryptocurrency transaction graphs. Among other quantitative results, we find that in the studied one-week interval, the Bitcoin blockchain, on average, provided comparable but quantifiably more natural mixing than the Ethereum blockchain.
[ { "created": "Tue, 8 Nov 2022 14:08:39 GMT", "version": "v1" }, { "created": "Sat, 1 Jun 2024 14:05:33 GMT", "version": "v2" } ]
2024-06-04
[ [ "Kelen", "Domokos Miklós", "" ], [ "Seres", "István András", "" ] ]
Cryptocurrencies aim to replicate physical cash in the digital realm while removing centralized and trusted intermediaries. Decentralization is achieved by the blockchain, a permanent public ledger that contains a record of every transaction. The public ledger ensures transparency, which enables public verifiability but harms untraceability, fungibility, and anonymity. In the last decade, cryptocurrencies attracted millions of users, with their total market cap reaching approximately three trillion USD at its peak. However, their anonymity guarantees are poorly understood and plagued by widespread misbeliefs. Indeed, previous notions of privacy, anonymity, and traceability for cryptocurrencies are either non-quantitative or inapplicable, e.g., computationally hard to measure. In this work, we put forward a formal framework to measure the (un)traceability and anonymity of cryptocurrencies, allowing us to quantitatively reason about the mixing characteristics of cryptocurrencies and the privacy-enhancing technologies built on top of them. Our methods apply absorbing Markov chains combined with Shannon entropy. To the best of our knowledge, our work provides the first practical, efficient, and probabilistic measure to assess the traceability of cryptocurrencies quantitatively, which also generalizes to entire cryptocurrency transaction graphs. We implement and extensively evaluate our proposed traceability measure on several cryptocurrency transaction graphs. Among other quantitative results, we find that in the studied one-week interval, the Bitcoin blockchain, on average, provided comparable but quantifiably more natural mixing than the Ethereum blockchain.
1304.2744
Donald H. Mitchell
Donald H. Mitchell, Steven A. Harp, David K. Simkin
A Knowledge Engineer's Comparison of Three Evidence Aggregation Methods
Appears in Proceedings of the Third Conference on Uncertainty in Artificial Intelligence (UAI1987)
null
null
UAI-P-1987-PG-297-304
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The comparisons of uncertainty calculi from the last two Uncertainty Workshops have all used theoretical probabilistic accuracy as the sole metric. While mathematical correctness is important, there are other factors which should be considered when developing reasoning systems. These other factors include, among other things, the error in uncertainty measures obtainable for the problem and the effect of this error on the performance of the resulting system.
[ { "created": "Wed, 27 Mar 2013 19:48:59 GMT", "version": "v1" } ]
2013-04-11
[ [ "Mitchell", "Donald H.", "" ], [ "Harp", "Steven A.", "" ], [ "Simkin", "David K.", "" ] ]
The comparisons of uncertainty calculi from the last two Uncertainty Workshops have all used theoretical probabilistic accuracy as the sole metric. While mathematical correctness is important, there are other factors which should be considered when developing reasoning systems. These other factors include, among other things, the error in uncertainty measures obtainable for the problem and the effect of this error on the performance of the resulting system.
1412.2109
Janne H. Korhonen
Petteri Kaski, Janne H. Korhonen, Christoph Lenzen, Jukka Suomela
Algebrisation in Distributed Graph Algorithms: Fast Matrix Multiplication in the Congested Clique
This paper has been withdrawn by the authors. This paper has been superseded by arXiv:1503.04963 (merged from arXiv:1412.2109 and arXiv:1412.2667)
null
null
null
cs.DC cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
While algebrisation constitutes a powerful technique in the design and analysis of centralised algorithms, to date there have been hardly any applications of algebraic techniques in the context of distributed graph algorithms. This work is a case study that demonstrates the potential of algebrisation in the distributed context. We will focus on distributed graph algorithms in the congested clique model; the graph problems that we will consider include, e.g., the triangle detection problem and the all-pairs shortest path problem (APSP). There is plenty of prior work on combinatorial algorithms in the congested clique model: for example, Dolev et al. (DISC 2012) gave an algorithm for triangle detection with a running time of $\tilde O(n^{1/3})$, and Nanongkai (STOC 2014) gave an approximation algorithm for APSP with a running time of $\tilde O(n^{1/2})$. In this work, we will use algebraic techniques -- in particular, algorithms based on fast matrix multiplication -- to solve both triangle detection and the unweighted APSP in time $O(n^{0.15715})$; for weighted APSP, we give a $(1+o(1))$-approximation with this running time, as well as an exact $\tilde O(n^{1/3})$ solution.
[ { "created": "Fri, 5 Dec 2014 19:33:46 GMT", "version": "v1" }, { "created": "Wed, 18 Mar 2015 09:52:27 GMT", "version": "v2" } ]
2015-03-19
[ [ "Kaski", "Petteri", "" ], [ "Korhonen", "Janne H.", "" ], [ "Lenzen", "Christoph", "" ], [ "Suomela", "Jukka", "" ] ]
While algebrisation constitutes a powerful technique in the design and analysis of centralised algorithms, to date there have been hardly any applications of algebraic techniques in the context of distributed graph algorithms. This work is a case study that demonstrates the potential of algebrisation in the distributed context. We will focus on distributed graph algorithms in the congested clique model; the graph problems that we will consider include, e.g., the triangle detection problem and the all-pairs shortest path problem (APSP). There is plenty of prior work on combinatorial algorithms in the congested clique model: for example, Dolev et al. (DISC 2012) gave an algorithm for triangle detection with a running time of $\tilde O(n^{1/3})$, and Nanongkai (STOC 2014) gave an approximation algorithm for APSP with a running time of $\tilde O(n^{1/2})$. In this work, we will use algebraic techniques -- in particular, algorithms based on fast matrix multiplication -- to solve both triangle detection and the unweighted APSP in time $O(n^{0.15715})$; for weighted APSP, we give a $(1+o(1))$-approximation with this running time, as well as an exact $\tilde O(n^{1/3})$ solution.
2302.10730
Lorenzo Vaquero
Saqib Nazir, Lorenzo Vaquero, Manuel Mucientes, V\'ictor M. Brea, Daniela Coltuc
Depth Estimation and Image Restoration by Deep Learning from Defocused Images
null
IEEE Transactions on Computational Imaging, vol. 9, pp. 607-619, 2023
10.1109/TCI.2023.3288335
null
cs.CV eess.IV
http://creativecommons.org/licenses/by-nc-nd/4.0/
Monocular depth estimation and image deblurring are two fundamental tasks in computer vision, given their crucial role in understanding 3D scenes. Performing either of them by relying on a single image is an ill-posed problem. The recent advances in the field of Deep Convolutional Neural Networks (DNNs) have revolutionized many tasks in computer vision, including depth estimation and image deblurring. When it comes to using defocused images, the depth estimation and the recovery of the All-in-Focus (Aif) image become related problems due to defocus physics. Despite this, most of the existing models treat them separately. There are, however, recent models that solve these problems simultaneously by concatenating two networks in a sequence to first estimate the depth or defocus map and then reconstruct the focused image based on it. We propose a DNN that solves the depth estimation and image deblurring in parallel. Our Two-headed Depth Estimation and Deblurring Network (2HDED:NET) extends a conventional Depth from Defocus (DFD) network with a deblurring branch that shares the same encoder as the depth branch. The proposed method has been successfully tested on two benchmarks, one for indoor and the other for outdoor scenes: NYU-v2 and Make3D. Extensive experiments with 2HDED:NET on these benchmarks have demonstrated superior or close performances to those of the state-of-the-art models for depth estimation and image deblurring.
[ { "created": "Tue, 21 Feb 2023 15:28:42 GMT", "version": "v1" }, { "created": "Thu, 27 Jul 2023 19:29:15 GMT", "version": "v2" } ]
2023-07-31
[ [ "Nazir", "Saqib", "" ], [ "Vaquero", "Lorenzo", "" ], [ "Mucientes", "Manuel", "" ], [ "Brea", "Víctor M.", "" ], [ "Coltuc", "Daniela", "" ] ]
Monocular depth estimation and image deblurring are two fundamental tasks in computer vision, given their crucial role in understanding 3D scenes. Performing either of them by relying on a single image is an ill-posed problem. The recent advances in the field of Deep Convolutional Neural Networks (DNNs) have revolutionized many tasks in computer vision, including depth estimation and image deblurring. When it comes to using defocused images, the depth estimation and the recovery of the All-in-Focus (Aif) image become related problems due to defocus physics. Despite this, most of the existing models treat them separately. There are, however, recent models that solve these problems simultaneously by concatenating two networks in a sequence to first estimate the depth or defocus map and then reconstruct the focused image based on it. We propose a DNN that solves the depth estimation and image deblurring in parallel. Our Two-headed Depth Estimation and Deblurring Network (2HDED:NET) extends a conventional Depth from Defocus (DFD) network with a deblurring branch that shares the same encoder as the depth branch. The proposed method has been successfully tested on two benchmarks, one for indoor and the other for outdoor scenes: NYU-v2 and Make3D. Extensive experiments with 2HDED:NET on these benchmarks have demonstrated superior or close performances to those of the state-of-the-art models for depth estimation and image deblurring.
2104.09925
Benjamin Rosen
Benjamin Rosen, Adnan M. Abu-Mahfouz and Ling Cheng
A Note on Slepian-Wolf Bounds for Several Node Grouping Configurations
4 pages, 5 figures
null
null
null
cs.IT math.IT
http://creativecommons.org/licenses/by/4.0/
The Slepian-Wolf bound on the admissible coding rate forms the most fundamental aspect of distributed source coding. As such, it is necessary to provide a framework with which to model more practical scenarios with respect to the arrangement of nodes in order to make Slepian-Wolf coding more suitable for multi-node Wireless Sensor Networks. This paper provides two practical scenarios in order to achieve this aim. The first is by grouping the nodes based on correlation while the second involves simplifying the structure using Markov correlation. It is found that although the bounds of these scenarios are more restrictive than the original Slepian-Wolf bound, the overall model and bound are simplified.
[ { "created": "Tue, 20 Apr 2021 12:31:52 GMT", "version": "v1" } ]
2021-04-21
[ [ "Rosen", "Benjamin", "" ], [ "Abu-Mahfouz", "Adnan M.", "" ], [ "Cheng", "Ling", "" ] ]
The Slepian-Wolf bound on the admissible coding rate forms the most fundamental aspect of distributed source coding. As such, it is necessary to provide a framework with which to model more practical scenarios with respect to the arrangement of nodes in order to make Slepian-Wolf coding more suitable for multi-node Wireless Sensor Networks. This paper provides two practical scenarios in order to achieve this aim. The first is by grouping the nodes based on correlation while the second involves simplifying the structure using Markov correlation. It is found that although the bounds of these scenarios are more restrictive than the original Slepian-Wolf bound, the overall model and bound are simplified.
1505.07194
Peng Liu
Peng Liu and Saeed Gazor and Il-Min Kim and Dong In Kim
Noncoherent Relaying in Energy Harvesting Communication Systems
null
null
null
null
cs.IT math.IT
http://creativecommons.org/licenses/by-nc-sa/3.0/
In energy harvesting (EH) relay networks, the coherent communication requires accurate estimation/tracking of the instantaneous channel state information (CSI) which consumes extra power. As a remedy, we propose two noncoherent EH relaying protocols based on the amplify-and-forward (AF) relaying, namely, power splitting noncoherent AF (PS-NcAF) and time switching noncoherent AF (TS-NcAF), which do not require any instantaneous CSI. We develop a noncoherent framework of simultaneous wireless information and power transfer (SWIPT), embracing PS-NcAF and TS-NcAF in a unified form. For arbitrary M-ary noncoherent frequency-shift keying (FSK) and differential phase-shift keying (DPSK), we derive maximum-likelihood detectors (MLDs) for PS-NcAF and TS-NcAF in a unified form, which involves integral evaluations yet serves as the optimum performance benchmark. To avoid expensive integral computations, we propose a closed-form detector using the Gauss-Legendre approximation, which achieves almost identical performance as the MLD but at substantially lower complexity. These EH-based noncoherent detectors achieve full diversity in Rayleigh fading. Numerical results demonstrate that our proposed PS-NcAF and TS-NcAF may outperform the conventional grid-powered relay system under the same total power constraint. Various insights which are useful for the design of practical SWIPT relaying systems are obtained. Interestingly, PS-NcAF outperforms TS-NcAF in the single-relay case, whereas TS-NcAF outperforms PS-NcAF in the multi-relay case.
[ { "created": "Wed, 27 May 2015 05:56:15 GMT", "version": "v1" }, { "created": "Fri, 12 Jun 2015 14:18:26 GMT", "version": "v2" } ]
2015-06-15
[ [ "Liu", "Peng", "" ], [ "Gazor", "Saeed", "" ], [ "Kim", "Il-Min", "" ], [ "Kim", "Dong In", "" ] ]
In energy harvesting (EH) relay networks, coherent communication requires accurate estimation/tracking of the instantaneous channel state information (CSI), which consumes extra power. As a remedy, we propose two noncoherent EH relaying protocols based on amplify-and-forward (AF) relaying, namely, power splitting noncoherent AF (PS-NcAF) and time switching noncoherent AF (TS-NcAF), which do not require any instantaneous CSI. We develop a noncoherent framework of simultaneous wireless information and power transfer (SWIPT), embracing PS-NcAF and TS-NcAF in a unified form. For arbitrary M-ary noncoherent frequency-shift keying (FSK) and differential phase-shift keying (DPSK), we derive maximum-likelihood detectors (MLDs) for PS-NcAF and TS-NcAF in a unified form, which involves integral evaluations yet serves as the optimum performance benchmark. To avoid expensive integral computations, we propose a closed-form detector using the Gauss-Legendre approximation, which achieves almost identical performance to the MLD but at substantially lower complexity. These EH-based noncoherent detectors achieve full diversity in Rayleigh fading. Numerical results demonstrate that our proposed PS-NcAF and TS-NcAF may outperform the conventional grid-powered relay system under the same total power constraint. Various insights useful for the design of practical SWIPT relaying systems are obtained. Interestingly, PS-NcAF outperforms TS-NcAF in the single-relay case, whereas TS-NcAF outperforms PS-NcAF in the multi-relay case.
2202.12575
Guillaume Wisniewski Dr.
Aur\'elien Max and Guillaume Wisniewski
Mining Naturally-occurring Corrections and Paraphrases from Wikipedia's Revision History
Accepted at LREC'10
null
null
null
cs.CL
http://creativecommons.org/licenses/by-nc-sa/4.0/
Naturally-occurring instances of linguistic phenomena are important both for training and for evaluating automatic processes on text. When available in large quantities, they also prove interesting material for linguistic studies. In this article, we present a new resource built from Wikipedia's revision history, called WiCoPaCo (Wikipedia Correction and Paraphrase Corpus), which contains numerous edits by human contributors, including various corrections and rewritings. We discuss the main motivations for building such a resource, describe how it was built, and present initial applications to French.
[ { "created": "Fri, 25 Feb 2022 09:24:38 GMT", "version": "v1" } ]
2022-02-28
[ [ "Max", "Aurélien", "" ], [ "Wisniewski", "Guillaume", "" ] ]
Naturally-occurring instances of linguistic phenomena are important both for training and for evaluating automatic processes on text. When available in large quantities, they also prove interesting material for linguistic studies. In this article, we present a new resource built from Wikipedia's revision history, called WiCoPaCo (Wikipedia Correction and Paraphrase Corpus), which contains numerous edits by human contributors, including various corrections and rewritings. We discuss the main motivations for building such a resource, describe how it was built, and present initial applications to French.
1208.4809
Husnabad Venkateswara Reddy
H. Venkateswara Reddy, Dr.S.Viswanadha Raju, B.Ramasubba Reddy
Comparing N-Node Set Importance Representative results with Node Importance Representative results for Categorical Clustering: An exploratory study
16 pages, 4 figures, 3 equations
null
null
null
cs.DB
http://creativecommons.org/licenses/by/3.0/
The proportionate increase in the size of the data with the increase in space implies that clustering a very large data set becomes difficult and time consuming. Sampling is one important technique to scale down the size of a dataset and to improve the efficiency of clustering. After sampling, allocating unlabeled objects into proper clusters is impossible in the categorical domain. To address this problem, Chen employed a method called MAximal Representative Data Labeling to allocate each unlabeled data point to the appropriate cluster, based on the Node Importance Representative (NIR) and N-Node Importance Representative (NNIR) algorithms. This paper took off from Chen's investigation and analyzed and compared the results of NIR and NNIR, leading to the conclusion that the two processes contradict each other when it comes to finding the resemblance between an unlabeled data point and a cluster. A new and better way of solving the problem was arrived at, which finds the resemblance between an unlabeled data point and all clusters, while providing maximal resemblance for allocating the data point to the required cluster.
[ { "created": "Thu, 23 Aug 2012 17:32:32 GMT", "version": "v1" } ]
2015-03-13
[ [ "Reddy", "H. Venkateswara", "" ], [ "Raju", "Dr. S. Viswanadha", "" ], [ "Reddy", "B. Ramasubba", "" ] ]
The proportionate increase in the size of the data with the increase in space implies that clustering a very large data set becomes difficult and time consuming. Sampling is one important technique to scale down the size of a dataset and to improve the efficiency of clustering. After sampling, allocating unlabeled objects into proper clusters is impossible in the categorical domain. To address this problem, Chen employed a method called MAximal Representative Data Labeling to allocate each unlabeled data point to the appropriate cluster, based on the Node Importance Representative (NIR) and N-Node Importance Representative (NNIR) algorithms. This paper took off from Chen's investigation and analyzed and compared the results of NIR and NNIR, leading to the conclusion that the two processes contradict each other when it comes to finding the resemblance between an unlabeled data point and a cluster. A new and better way of solving the problem was arrived at, which finds the resemblance between an unlabeled data point and all clusters, while providing maximal resemblance for allocating the data point to the required cluster.
2304.02554
Mingqi Gao
Mingqi Gao, Jie Ruan, Renliang Sun, Xunjian Yin, Shiping Yang, Xiaojun Wan
Human-like Summarization Evaluation with ChatGPT
9 pages, 5 figures, in process
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Evaluating text summarization is a challenging problem, and existing evaluation metrics are far from satisfactory. In this study, we explored ChatGPT's ability to perform human-like summarization evaluation using four human evaluation methods on five datasets. We found that ChatGPT was able to complete annotations relatively smoothly using Likert scale scoring, pairwise comparison, Pyramid, and binary factuality evaluation. Additionally, it outperformed commonly used automatic evaluation metrics on some datasets. Furthermore, we discussed the impact of different prompts, compared its performance with that of human evaluation, and analyzed the generated explanations and invalid responses.
[ { "created": "Wed, 5 Apr 2023 16:17:32 GMT", "version": "v1" } ]
2023-04-06
[ [ "Gao", "Mingqi", "" ], [ "Ruan", "Jie", "" ], [ "Sun", "Renliang", "" ], [ "Yin", "Xunjian", "" ], [ "Yang", "Shiping", "" ], [ "Wan", "Xiaojun", "" ] ]
Evaluating text summarization is a challenging problem, and existing evaluation metrics are far from satisfactory. In this study, we explored ChatGPT's ability to perform human-like summarization evaluation using four human evaluation methods on five datasets. We found that ChatGPT was able to complete annotations relatively smoothly using Likert scale scoring, pairwise comparison, Pyramid, and binary factuality evaluation. Additionally, it outperformed commonly used automatic evaluation metrics on some datasets. Furthermore, we discussed the impact of different prompts, compared its performance with that of human evaluation, and analyzed the generated explanations and invalid responses.
2104.11386
Nam Wook Kim
Nam Wook Kim
Recording Reusable and Guided Analytics From Interaction Histories
2 pages, 2 figures, conference
null
null
null
cs.HC
http://creativecommons.org/licenses/by/4.0/
The use of visual analytics tools has gained popularity in various domains, helping users discover meaningful information from complex and large data sets. Users often face difficulty in disseminating the knowledge discovered without clear recall of their exploration paths and analysis processes. We introduce a visual analysis tool that allows analysts to record reusable and guided analytics from their interaction logs. To capture the analysis process, we use a decision tree whose nodes embed visualizations and guidance to define a visual analysis task. The tool enables analysts to formalize analysis strategies, build best practices, and guide novices through systematic workflows.
[ { "created": "Fri, 23 Apr 2021 02:46:00 GMT", "version": "v1" } ]
2021-04-26
[ [ "Kim", "Nam Wook", "" ] ]
The use of visual analytics tools has gained popularity in various domains, helping users discover meaningful information from complex and large data sets. Users often face difficulty in disseminating the knowledge discovered without clear recall of their exploration paths and analysis processes. We introduce a visual analysis tool that allows analysts to record reusable and guided analytics from their interaction logs. To capture the analysis process, we use a decision tree whose nodes embed visualizations and guidance to define a visual analysis task. The tool enables analysts to formalize analysis strategies, build best practices, and guide novices through systematic workflows.
1811.08996
Shipeng Wang
Shipeng Wang, Jian Sun and Zongben Xu
HyperAdam: A Learnable Task-Adaptive Adam for Network Training
null
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep neural networks are traditionally trained using human-designed stochastic optimization algorithms, such as SGD and Adam. Recently, the approach of learning to optimize network parameters has emerged as a promising research topic. However, these learned black-box optimizers sometimes do not fully utilize the experience in human-designed optimizers and therefore have limited generalization ability. In this paper, a new optimizer, dubbed \textit{HyperAdam}, is proposed that combines the idea of "learning to optimize" with the traditional Adam optimizer. Given a network for training, its parameter update in each iteration generated by HyperAdam is an adaptive combination of multiple updates generated by Adam with varying decay rates. The combination weights and decay rates in HyperAdam are adaptively learned depending on the task. HyperAdam is modeled as a recurrent neural network with AdamCell, WeightCell and StateCell. It is shown to achieve state-of-the-art performance for training various networks, such as multilayer perceptrons, CNNs and LSTMs.
[ { "created": "Thu, 22 Nov 2018 02:37:53 GMT", "version": "v1" } ]
2018-11-26
[ [ "Wang", "Shipeng", "" ], [ "Sun", "Jian", "" ], [ "Xu", "Zongben", "" ] ]
Deep neural networks are traditionally trained using human-designed stochastic optimization algorithms, such as SGD and Adam. Recently, the approach of learning to optimize network parameters has emerged as a promising research topic. However, these learned black-box optimizers sometimes do not fully utilize the experience in human-designed optimizers and therefore have limited generalization ability. In this paper, a new optimizer, dubbed \textit{HyperAdam}, is proposed that combines the idea of "learning to optimize" with the traditional Adam optimizer. Given a network for training, its parameter update in each iteration generated by HyperAdam is an adaptive combination of multiple updates generated by Adam with varying decay rates. The combination weights and decay rates in HyperAdam are adaptively learned depending on the task. HyperAdam is modeled as a recurrent neural network with AdamCell, WeightCell and StateCell. It is shown to achieve state-of-the-art performance for training various networks, such as multilayer perceptrons, CNNs and LSTMs.
1604.02677
Hao Chen
Hao Chen, Xiaojuan Qi, Lequan Yu, Pheng-Ann Heng
DCAN: Deep Contour-Aware Networks for Accurate Gland Segmentation
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The morphology of glands has been used routinely by pathologists to assess the malignancy degree of adenocarcinomas. Accurate segmentation of glands from histology images is a crucial step to obtain reliable morphological statistics for quantitative diagnosis. In this paper, we propose an efficient deep contour-aware network (DCAN) to solve this challenging problem under a unified multi-task learning framework. In the proposed network, multi-level contextual features from the hierarchical architecture are explored with auxiliary supervision for accurate gland segmentation. When incorporated with multi-task regularization during training, the discriminative capability of intermediate features can be further improved. Moreover, our network can not only output accurate probability maps of glands, but also depict clear contours simultaneously for separating clustered objects, which further boosts the gland segmentation performance. This unified framework can be efficient when applied to large-scale histopathological data without resorting to additional steps to generate contours based on low-level cues for post-separating. Our method won the 2015 MICCAI Gland Segmentation Challenge out of 13 competitive teams, surpassing all the other methods by a significant margin.
[ { "created": "Sun, 10 Apr 2016 12:12:24 GMT", "version": "v1" } ]
2016-04-12
[ [ "Chen", "Hao", "" ], [ "Qi", "Xiaojuan", "" ], [ "Yu", "Lequan", "" ], [ "Heng", "Pheng-Ann", "" ] ]
The morphology of glands has been used routinely by pathologists to assess the malignancy degree of adenocarcinomas. Accurate segmentation of glands from histology images is a crucial step to obtain reliable morphological statistics for quantitative diagnosis. In this paper, we propose an efficient deep contour-aware network (DCAN) to solve this challenging problem under a unified multi-task learning framework. In the proposed network, multi-level contextual features from the hierarchical architecture are explored with auxiliary supervision for accurate gland segmentation. When incorporated with multi-task regularization during training, the discriminative capability of intermediate features can be further improved. Moreover, our network can not only output accurate probability maps of glands, but also depict clear contours simultaneously for separating clustered objects, which further boosts the gland segmentation performance. This unified framework can be efficient when applied to large-scale histopathological data without resorting to additional steps to generate contours based on low-level cues for post-separating. Our method won the 2015 MICCAI Gland Segmentation Challenge out of 13 competitive teams, surpassing all the other methods by a significant margin.
2407.05280
Adri Bhattacharya
Pritam Goswami, Adri Bhattacharya, Raja Das and Partha Sarathi Mandal
Perpetual Exploration of a Ring in Presence of Byzantine Black Hole
null
null
null
null
cs.DC
http://creativecommons.org/licenses/by/4.0/
Perpetual exploration is a fundamental problem in the domain of mobile agents, where an agent needs to visit each node infinitely often. This issue has received a lot of attention, mainly for ring topologies; the presence of black holes adds more complexity. A black hole can destroy any incoming agent without any observable trace. In \cite{BampasImprovedPeriodicDataRetrieval,KralovivcPeriodicDataRetrievalFirst}, the authors considered this problem in the context of \textit{periodic data retrieval}. They introduced a variant of the black hole called a gray hole (where the adversary chooses whether to destroy an agent or let it pass), among others, and showed that 4 asynchronous and co-located agents are essential to solve this problem (hence perpetual exploration) in the presence of such a gray hole if each node of the ring has a whiteboard. This paper investigates the exploration of a ring in the presence of a ``byzantine black hole''. In addition to the capabilities of a gray hole, in this variant the adversary chooses whether to erase any previously stored information on that node. Previously, only one particular initial scenario (i.e., agents are co-located) and one particular communication model (i.e., whiteboard) were investigated. There can be other initial scenarios where the agents are not all co-located, and there are weaker models of communication (e.g., Face-to-Face, Pebble) for which this problem is yet to be investigated. The agents are synchronous. The main results focus on minimizing the number of agents while ensuring that perpetual exploration is achieved even in the presence of such a node, under various communication models and starting positions. Further, we achieve a better upper and lower bound (i.e., 3 agents) for this problem (where the malicious node is a generalized version of a gray hole) by trading off scheduler capability, for co-located agents in the presence of a whiteboard.
[ { "created": "Sun, 7 Jul 2024 06:41:17 GMT", "version": "v1" } ]
2024-07-09
[ [ "Goswami", "Pritam", "" ], [ "Bhattacharya", "Adri", "" ], [ "Das", "Raja", "" ], [ "Mandal", "Partha Sarathi", "" ] ]
Perpetual exploration is a fundamental problem in the domain of mobile agents, where an agent needs to visit each node infinitely often. This issue has received a lot of attention, mainly for ring topologies; the presence of black holes adds more complexity. A black hole can destroy any incoming agent without any observable trace. In \cite{BampasImprovedPeriodicDataRetrieval,KralovivcPeriodicDataRetrievalFirst}, the authors considered this problem in the context of \textit{periodic data retrieval}. They introduced a variant of the black hole called a gray hole (where the adversary chooses whether to destroy an agent or let it pass), among others, and showed that 4 asynchronous and co-located agents are essential to solve this problem (hence perpetual exploration) in the presence of such a gray hole if each node of the ring has a whiteboard. This paper investigates the exploration of a ring in the presence of a ``byzantine black hole''. In addition to the capabilities of a gray hole, in this variant the adversary chooses whether to erase any previously stored information on that node. Previously, only one particular initial scenario (i.e., agents are co-located) and one particular communication model (i.e., whiteboard) were investigated. There can be other initial scenarios where the agents are not all co-located, and there are weaker models of communication (e.g., Face-to-Face, Pebble) for which this problem is yet to be investigated. The agents are synchronous. The main results focus on minimizing the number of agents while ensuring that perpetual exploration is achieved even in the presence of such a node, under various communication models and starting positions. Further, we achieve a better upper and lower bound (i.e., 3 agents) for this problem (where the malicious node is a generalized version of a gray hole) by trading off scheduler capability, for co-located agents in the presence of a whiteboard.
1309.3858
Matias Korman
Oswin Aichholzer, Thomas Hackl, Matias Korman, Alexander Pilz, Birgit Vogtenhuber
Geodesic-Preserving Polygon Simplification
null
null
null
null
cs.CG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Polygons are a paramount data structure in computational geometry. While the complexity of many algorithms on simple polygons or polygons with holes depends on the size of the input polygon, the intrinsic complexity of the problems these algorithms solve is often related to the reflex vertices of the polygon. In this paper, we give an easy-to-describe linear-time method to replace an input polygon $\mathcal{P}$ by a polygon $\mathcal{P}'$ such that (1) $\mathcal{P}'$ contains $\mathcal{P}$, (2) $\mathcal{P}'$ has its reflex vertices at the same positions as $\mathcal{P}$, and (3) the number of vertices of $\mathcal{P}'$ is linear in the number of reflex vertices. Since the solutions of numerous problems on polygons (including shortest paths, geodesic hulls, separating point sets, and Voronoi diagrams) are equivalent for both $\mathcal{P}$ and $\mathcal{P}'$, our algorithm can be used as a preprocessing step for several algorithms and makes their running time dependent on the number of reflex vertices rather than on the size of $\mathcal{P}$.
[ { "created": "Mon, 16 Sep 2013 08:50:17 GMT", "version": "v1" } ]
2013-09-17
[ [ "Aichholzer", "Oswin", "" ], [ "Hackl", "Thomas", "" ], [ "Korman", "Matias", "" ], [ "Pilz", "Alexander", "" ], [ "Vogtenhuber", "Birgit", "" ] ]
Polygons are a paramount data structure in computational geometry. While the complexity of many algorithms on simple polygons or polygons with holes depends on the size of the input polygon, the intrinsic complexity of the problems these algorithms solve is often related to the reflex vertices of the polygon. In this paper, we give an easy-to-describe linear-time method to replace an input polygon $\mathcal{P}$ by a polygon $\mathcal{P}'$ such that (1) $\mathcal{P}'$ contains $\mathcal{P}$, (2) $\mathcal{P}'$ has its reflex vertices at the same positions as $\mathcal{P}$, and (3) the number of vertices of $\mathcal{P}'$ is linear in the number of reflex vertices. Since the solutions of numerous problems on polygons (including shortest paths, geodesic hulls, separating point sets, and Voronoi diagrams) are equivalent for both $\mathcal{P}$ and $\mathcal{P}'$, our algorithm can be used as a preprocessing step for several algorithms and makes their running time dependent on the number of reflex vertices rather than on the size of $\mathcal{P}$.
2301.06690
Jing Li
Jing Li, Di Kang, Wenjie Pei, Xuefei Zhe, Ying Zhang, Linchao Bao, Zhenyu He
Audio2Gestures: Generating Diverse Gestures from Audio
arXiv admin note: substantial text overlap with arXiv:2108.06720
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
People may perform diverse gestures affected by various mental and physical factors when speaking the same sentences. This inherent one-to-many relationship makes co-speech gesture generation from audio particularly challenging. Conventional CNNs/RNNs assume one-to-one mapping, and thus tend to predict the average of all possible target motions, easily resulting in plain/boring motions during inference. So we propose to explicitly model the one-to-many audio-to-motion mapping by splitting the cross-modal latent code into shared code and motion-specific code. The shared code is expected to be responsible for the motion component that is more correlated to the audio while the motion-specific code is expected to capture diverse motion information that is more independent of the audio. However, splitting the latent code into two parts poses extra training difficulties. Several crucial training losses/strategies, including relaxed motion loss, bicycle constraint, and diversity loss, are designed to better train the VAE. Experiments on both 3D and 2D motion datasets verify that our method generates more realistic and diverse motions than previous state-of-the-art methods, quantitatively and qualitatively. Besides, our formulation is compatible with discrete cosine transformation (DCT) modeling and other popular backbones (\textit{i.e.} RNN, Transformer). As for motion losses and quantitative motion evaluation, we find structured losses/metrics (\textit{e.g.} STFT) that consider temporal and/or spatial context complement the most commonly used point-wise losses (\textit{e.g.} PCK), resulting in better motion dynamics and more nuanced motion details. Finally, we demonstrate that our method can be readily used to generate motion sequences with user-specified motion clips on the timeline.
[ { "created": "Tue, 17 Jan 2023 04:09:58 GMT", "version": "v1" } ]
2023-01-18
[ [ "Li", "Jing", "" ], [ "Kang", "Di", "" ], [ "Pei", "Wenjie", "" ], [ "Zhe", "Xuefei", "" ], [ "Zhang", "Ying", "" ], [ "Bao", "Linchao", "" ], [ "He", "Zhenyu", "" ] ]
People may perform diverse gestures affected by various mental and physical factors when speaking the same sentences. This inherent one-to-many relationship makes co-speech gesture generation from audio particularly challenging. Conventional CNNs/RNNs assume one-to-one mapping, and thus tend to predict the average of all possible target motions, easily resulting in plain/boring motions during inference. So we propose to explicitly model the one-to-many audio-to-motion mapping by splitting the cross-modal latent code into shared code and motion-specific code. The shared code is expected to be responsible for the motion component that is more correlated to the audio while the motion-specific code is expected to capture diverse motion information that is more independent of the audio. However, splitting the latent code into two parts poses extra training difficulties. Several crucial training losses/strategies, including relaxed motion loss, bicycle constraint, and diversity loss, are designed to better train the VAE. Experiments on both 3D and 2D motion datasets verify that our method generates more realistic and diverse motions than previous state-of-the-art methods, quantitatively and qualitatively. Besides, our formulation is compatible with discrete cosine transformation (DCT) modeling and other popular backbones (\textit{i.e.} RNN, Transformer). As for motion losses and quantitative motion evaluation, we find structured losses/metrics (\textit{e.g.} STFT) that consider temporal and/or spatial context complement the most commonly used point-wise losses (\textit{e.g.} PCK), resulting in better motion dynamics and more nuanced motion details. Finally, we demonstrate that our method can be readily used to generate motion sequences with user-specified motion clips on the timeline.