Column schema (column name, type, and minimum/maximum value length; for list columns the bounds are list lengths, and for class columns the count of distinct values):

| Column | Type | Min | Max |
|---|---|---|---|
| id | stringlengths | 9 | 10 |
| submitter | stringlengths | 1 | 64 |
| authors | stringlengths | 4 | 20.7k |
| title | stringlengths | 4 | 246 |
| comments | stringlengths | 1 | 523 |
| journal-ref | stringlengths | 4 | 404 |
| doi | stringlengths | 11 | 153 |
| report-no | stringlengths | 2 | 254 |
| categories | stringlengths | 5 | 98 |
| license | stringclasses | 9 values | n/a |
| orig_abstract | stringlengths | 14 | 3.35k |
| versions | listlengths | 1 | 60 |
| update_date | stringlengths | 10 | 10 |
| authors_parsed | listlengths | 1 | 1.35k |
| abstract | stringlengths | 11 | 3.34k |

In every record shown below, the `orig_abstract` and `abstract` values are identical, so each record lists the text once.
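As a minimal sketch of how such records could be inspected programmatically, the snippet below assumes the Hugging Face `datasets` library; the dataset identifier `user/arxiv-abstracts` is a placeholder, since the source does not name the dataset.

```python
# Minimal sketch for loading and inspecting records with the schema above.
# Assumption: "user/arxiv-abstracts" is a hypothetical dataset identifier;
# substitute the actual one.
from datasets import load_dataset

ds = load_dataset("user/arxiv-abstracts", split="train")

record = ds[0]
print(record["id"], record["title"])

# `versions` is a list of {"created", "version"} dicts, as in the rows below.
for v in record["versions"]:
    print(v["version"], "created", v["created"])

# `authors_parsed` is a list of [last, first, suffix] triples.
for last, first, suffix in record["authors_parsed"]:
    print(f"{first} {last} {suffix}".strip())

# Fields like `journal-ref` and `doi` may be null (None) for unpublished preprints.
if record["doi"]:
    print("DOI:", record["doi"])
```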
**2308.04468**
- submitter: Mohammad Naanaa
- authors: Mohammad Naanaa, Katharina Schmid, Yinyu Nie
- title: 3D Scene Diffusion Guidance using Scene Graphs
- comments: 5 figures
- journal-ref: null
- doi: null
- report-no: null
- categories: cs.CV
- license: http://creativecommons.org/licenses/by/4.0/
- versions: [ { "created": "Tue, 8 Aug 2023 06:16:37 GMT", "version": "v1" } ]
- update_date: 2023-08-10
- authors_parsed: [ [ "Naanaa", "Mohammad", "" ], [ "Schmid", "Katharina", "" ], [ "Nie", "Yinyu", "" ] ]
- abstract (orig_abstract is identical): Guided synthesis of high-quality 3D scenes is a challenging task. Diffusion models have shown promise in generating diverse data, including 3D scenes. However, current methods rely directly on text embeddings for controlling the generation, limiting the incorporation of complex spatial relationships between objects. We propose a novel approach for 3D scene diffusion guidance using scene graphs. To leverage the relative spatial information the scene graphs provide, we make use of relational graph convolutional blocks within our denoising network. We show that our approach significantly improves the alignment between scene description and generated scene.
**1106.5305**
- submitter: Michael Lesnick
- authors: Michael Lesnick
- title: The Theory of the Interleaving Distance on Multidimensional Persistence Modules
- comments: Major revision; exposition improved throughout. To appear in Foundations of Computational Mathematics. 36 pages
- journal-ref: Foundations of Computational Mathematics: Volume 15, Issue 3 (2015), Page 613-650
- doi: 10.1007/s10208-015-9255-y
- report-no: null
- categories: cs.CG math.AT
- license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- versions: [ { "created": "Mon, 27 Jun 2011 06:05:20 GMT", "version": "v1" }, { "created": "Thu, 14 Jul 2011 06:11:54 GMT", "version": "v2" }, { "created": "Tue, 5 Feb 2013 16:10:40 GMT", "version": "v3" }, { "created": "Mon, 2 Feb 2015 21:08:13 GMT", "version": "v4" } ]
- update_date: 2015-05-22
- authors_parsed: [ [ "Lesnick", "Michael", "" ] ]
- abstract (orig_abstract is identical): In 2009, Chazal et al. introduced $\epsilon$-interleavings of persistence modules. $\epsilon$-interleavings induce a pseudometric $d_I$ on (isomorphism classes of) persistence modules, the interleaving distance. The definitions of $\epsilon$-interleavings and $d_I$ generalize readily to multidimensional persistence modules. In this paper, we develop the theory of multidimensional interleavings, with a view towards applications to topological data analysis. We present four main results. First, we show that on 1-D persistence modules, $d_I$ is equal to the bottleneck distance $d_B$. This result, which first appeared in an earlier preprint of this paper, has since appeared in several other places, and is now known as the isometry theorem. Second, we present a characterization of the $\epsilon$-interleaving relation on multidimensional persistence modules. This expresses transparently the sense in which two $\epsilon$-interleaved modules are algebraically similar. Third, using this characterization, we show that when we define our persistence modules over a prime field, $d_I$ satisfies a universality property. This universality result is the central result of the paper. It says that $d_I$ satisfies a stability property generalizing one which $d_B$ is known to satisfy, and that in addition, if $d$ is any other pseudometric on multidimensional persistence modules satisfying the same stability property, then $d\leq d_I$. We also show that a variant of this universality result holds for $d_B$, over arbitrary fields. Finally, we show that $d_I$ restricts to a metric on isomorphism classes of finitely presented multidimensional persistence modules.
**2102.06326**
- submitter: Steve Dai
- authors: Steve Dai, Alicia Klinefelter, Haoxing Ren, Rangharajan Venkatesan, Ben Keller, Nathaniel Pinckney, Brucek Khailany
- title: Verifying High-Level Latency-Insensitive Designs with Formal Model Checking
- comments: null
- journal-ref: null
- doi: null
- report-no: null
- categories: cs.LO cs.AR cs.FL
- license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- versions: [ { "created": "Fri, 12 Feb 2021 01:56:23 GMT", "version": "v1" } ]
- update_date: 2021-02-19
- authors_parsed: [ [ "Dai", "Steve", "" ], [ "Klinefelter", "Alicia", "" ], [ "Ren", "Haoxing", "" ], [ "Venkatesan", "Rangharajan", "" ], [ "Keller", "Ben", "" ], [ "Pinckney", "Nathaniel", "" ], [ "Khailany", "Brucek", "" ] ]
- abstract (orig_abstract is identical): Latency-insensitive design mitigates increasing interconnect delay and enables productive component reuse in complex digital systems. This design style has been adopted in high-level design flows because untimed functional blocks connected through latency-insensitive interfaces provide a natural communication abstraction. However, latency-insensitive design with high-level languages also introduces a unique set of verification challenges that jeopardize functional correctness. In particular, bugs due to invalid consumption of inputs and deadlocks can be difficult to detect and debug with dynamic simulation methods. To tackle these two classes of bugs, we propose formal model checking methods to guarantee that a high-level latency-insensitive design is unaffected by invalid input data and is free of deadlock. We develop a well-structured verification wrapper for each property to automatically construct the corresponding formal model for checking. Our experiments demonstrate that the formal checks are effective in realistic bug scenarios from high-level designs.
**1907.09100**
- submitter: EPTCS
- authors: Ramit Das, R. Ramanujam, Sunil Simon
- title: Reasoning about Social Choice and Games in Monadic Fixed-Point Logic
- comments: In Proceedings TARK 2019, arXiv:1907.08335
- journal-ref: EPTCS 297, 2019, pp. 106-120
- doi: 10.4204/EPTCS.297.8
- report-no: null
- categories: cs.GT
- license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- versions: [ { "created": "Mon, 22 Jul 2019 03:14:29 GMT", "version": "v1" } ]
- update_date: 2019-07-23
- authors_parsed: [ [ "Das", "Ramit", "" ], [ "Ramanujam", "R.", "" ], [ "Simon", "Sunil", "" ] ]
- abstract (orig_abstract is identical): Whether it be in normal form games, or in fair allocations, or in voter preferences in voting systems, a certain pattern of reasoning is common. From a particular profile, an agent or a group of agents may have an incentive to shift to a new one. This induces a natural graph structure that we call the improvement graph on the strategy space of these systems. We suggest that the monadic fixed-point logic with counting, an extension of monadic first-order logic on graphs with fixed-point and counting quantifiers, is a natural specification language on improvement graphs, and thus for a class of properties that can be interpreted across these domains. The logic has an efficient model checking algorithm (in the size of the improvement graph).
**2306.09764**
- submitter: Vincent Berenz
- authors: Vincent Berenz, Felix Widmaier, Simon Guist, Bernhard Schölkopf and Dieter Büchler
- title: Synchronizing Machine Learning Algorithms, Realtime Robotic Control and Simulated Environment with o80
- comments: work presented at the Robot Software Architectures Workshop - RSA 2023, ICRA
- journal-ref: null
- doi: null
- report-no: null
- categories: cs.RO
- license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- versions: [ { "created": "Fri, 16 Jun 2023 10:50:21 GMT", "version": "v1" } ]
- update_date: 2023-06-19
- authors_parsed: [ [ "Berenz", "Vincent", "" ], [ "Widmaier", "Felix", "" ], [ "Guist", "Simon", "" ], [ "Schölkopf", "Bernhard", "" ], [ "Büchler", "Dieter", "" ] ]
- abstract (orig_abstract is identical): Robotic applications require the integration of various modalities, encompassing perception, control of real robots and possibly the control of simulated environments. While the state-of-the-art robotic software solutions such as ROS 2 provide most of the required features, flexible synchronization between algorithms, data streams and control loops can be tedious. o80 is a versatile C++ framework for robotics which provides a shared memory model and a command framework for real-time critical systems. It enables expert users to set up complex robotic systems and generate Python bindings for scientists. o80's unique feature is its flexible synchronization between processes, including the traditional blocking commands and the novel ``bursting mode'', which allows user code to control the execution of the lower process control loop. This makes it particularly useful for setups that mix real and simulated environments.
**2307.07407**
- submitter: Brunello Tirozzi
- authors: Brunello Tirozzi, Orchidea Maria Lecian
- title: Retrieval of phonemes and Kohonen algorithm
- comments: 10 pages
- journal-ref: null
- doi: null
- report-no: null
- categories: cs.CL cs.SD eess.AS
- license: http://creativecommons.org/licenses/by/4.0/
- versions: [ { "created": "Mon, 10 Jul 2023 17:25:07 GMT", "version": "v1" } ]
- update_date: 2023-07-27
- authors_parsed: [ [ "Tirozzi", "Brunello", "" ], [ "Lecian", "Orchidea Maria", "" ] ]
- abstract (orig_abstract is identical): A phoneme-retrieval technique is proposed, which derives from the particular way the network is constructed. An initial set of neurons is given, whose number is approximately equal to the number of typical structures in the data. For example, if the network is built for voice retrieval, the number of neurons must equal the number of characteristic phonemes of the alphabet of the language spoken by the social group to which the particular person belongs. Usually this task is very complicated, and the network can depend critically on the samples used for learning. If the network is built for image retrieval, it works only if the data to be retrieved belong to a particular set of images. If the network is built for voice recognition, it works only for some particular set of words; a typical example is the vocabulary used for airplane flight. For instance, a command like "airplane should make a turn of 120 degrees towards the east" can be easily recognized by the network if a suitable learning procedure is used.
**1806.11128**
- submitter: Justin Deters
- authors: Justin Deters, Jiaye Wu, Yifan Xu, I-Ting Angelina Lee
- title: A NUMA-Aware Provably-Efficient Task-Parallel Platform Based on the Work-First Principle
- comments: 14 pages, 9 figures
- journal-ref: 2018 IEEE International Symposium on Workload Characterization (IISWC), Raleigh, NC, USA, 2018, pp. 59-70
- doi: 10.1109/IISWC.2018.8573486
- report-no: null
- categories: cs.DC cs.PF
- license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- versions: [ { "created": "Thu, 28 Jun 2018 18:00:42 GMT", "version": "v1" }, { "created": "Tue, 4 Sep 2018 17:02:41 GMT", "version": "v2" }, { "created": "Wed, 5 Sep 2018 18:29:45 GMT", "version": "v3" }, { "created": "Mon, 7 Jan 2019 15:46:26 GMT", "version": "v4" } ]
- update_date: 2019-01-08
- authors_parsed: [ [ "Deters", "Justin", "" ], [ "Wu", "Jiaye", "" ], [ "Xu", "Yifan", "" ], [ "Lee", "I-Ting Angelina", "" ] ]
- abstract (orig_abstract is identical): Task parallelism is designed to simplify the task of parallel programming. When executing a task parallel program on modern NUMA architectures, it can fail to scale due to the phenomenon called work inflation, where the overall processing time that multiple cores spend on doing useful work is higher compared to the time required to do the same amount of work on one core, due to effects experienced only during parallel executions such as additional cache misses, remote memory accesses, and memory bandwidth issues. It's possible to mitigate work inflation by co-locating the computation with the data, but this is nontrivial to do with task parallel programs. First, by design, the scheduling for task parallel programs is automated, giving the user little control over where the computation is performed. Second, the platforms tend to employ work stealing, which provides strong theoretical guarantees, but its randomized protocol for load balancing does not discern between work items that are far away versus ones that are closer. In this work, we propose NUMA-WS, a NUMA-aware task parallel platform engineered based on the work-first principle. By abiding by the work-first principle, we are able to obtain a platform that is work efficient, provides the same theoretical guarantees as the classic work stealing scheduler, and mitigates work inflation. Furthermore, we implemented a prototype platform by modifying Intel's Cilk Plus runtime system and empirically demonstrate that the resulting system is work efficient and scalable.
**1902.06097**
- submitter: Andreas Abel
- authors: Andreas Abel and Christian Sattler
- title: Normalization by Evaluation for Call-by-Push-Value and Polarized Lambda-Calculus
- comments: null
- journal-ref: null
- doi: null
- report-no: null
- categories: cs.PL cs.LO math.LO
- license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- versions: [ { "created": "Sat, 16 Feb 2019 12:26:22 GMT", "version": "v1" } ]
- update_date: 2019-02-20
- authors_parsed: [ [ "Abel", "Andreas", "" ], [ "Sattler", "Christian", "" ] ]
- abstract (orig_abstract is identical): We observe that normalization by evaluation for simply-typed lambda-calculus with weak coproducts can be carried out in a weak bi-cartesian closed category of presheaves equipped with a monad that allows us to perform case distinction on neutral terms of sum type. The placement of the monad influences the normal forms we obtain: for instance, placing the monad on coproducts gives us eta-long beta-pi normal forms where pi refers to permutation of case distinctions out of elimination positions. We further observe that placing the monad on every coproduct is rather wasteful, and an optimal placement of the monad can be determined by considering polarized simple types inspired by focalization. Polarization classifies types into positive and negative, and it is sufficient to place the monad at the embedding of positive types into negative ones. We consider two calculi based on polarized types: pure call-by-push-value (CBPV) and polarized lambda-calculus, the natural deduction calculus corresponding to focalized sequent calculus. For these two calculi, we present algorithms for normalization by evaluation. We further discuss different implementations of the monad and their relation to existing normalization proofs for lambda-calculus with sums. Our developments have been partially formalized in the Agda proof assistant.
**1806.07110**
- submitter: Nuno C. Garcia
- authors: Nuno Garcia, Pietro Morerio, Vittorio Murino
- title: Modality Distillation with Multiple Stream Networks for Action Recognition
- comments: Accepted at ECCV 2018; Supp. material at p.16; code available
- journal-ref: null
- doi: null
- report-no: null
- categories: cs.CV
- license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- versions: [ { "created": "Tue, 19 Jun 2018 08:56:13 GMT", "version": "v1" }, { "created": "Mon, 29 Oct 2018 15:19:56 GMT", "version": "v2" } ]
- update_date: 2018-10-30
- authors_parsed: [ [ "Garcia", "Nuno", "" ], [ "Morerio", "Pietro", "" ], [ "Murino", "Vittorio", "" ] ]
- abstract (orig_abstract is identical): Diverse input data modalities can provide complementary cues for several tasks, usually leading to more robust algorithms and better performance. However, while a (training) dataset could be accurately designed to include a variety of sensory inputs, it is often the case that not all modalities could be available in real life (testing) scenarios, where a model has to be deployed. This raises the challenge of how to learn robust representations leveraging multimodal data in the training stage, while considering limitations at test time, such as noisy or missing modalities. This paper presents a new approach for multimodal video action recognition, developed within the unified frameworks of distillation and privileged information, named generalized distillation. Particularly, we consider the case of learning representations from depth and RGB videos, while relying on RGB data only at test time. We propose a new approach to train a hallucination network that learns to distill depth features through multiplicative connections of spatiotemporal representations, leveraging soft labels and hard labels, as well as distance between feature maps. We report state-of-the-art results on video action classification on the largest multimodal dataset available for this task, the NTU RGB+D. Code available at https://github.com/ncgarcia/modality-distillation .
**2402.17336**
- submitter: Theofanis Raptis
- authors: Hrant Khachatrian, Rafayel Mkrtchyan, Theofanis P. Raptis
- title: Outdoor Environment Reconstruction with Deep Learning on Radio Propagation Paths
- comments: This work has been submitted to the IEEE for possible publication. Work partly supported by the RA Science Committee grant No. 22rl-052 (DISTAL) and the EU under Italian National Recovery and Resilience Plan of NextGenerationEU on "Telecommunications of the Future" (PE00000001 - program "RESTART")
- journal-ref: null
- doi: null
- report-no: null
- categories: cs.NI cs.LG eess.SP
- license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- versions: [ { "created": "Tue, 27 Feb 2024 09:11:10 GMT", "version": "v1" } ]
- update_date: 2024-04-13
- authors_parsed: [ [ "Khachatrian", "Hrant", "" ], [ "Mkrtchyan", "Rafayel", "" ], [ "Raptis", "Theofanis P.", "" ] ]
- abstract (orig_abstract is identical): Conventional methods for outdoor environment reconstruction rely predominantly on vision-based techniques like photogrammetry and LiDAR, facing limitations such as constrained coverage, susceptibility to environmental conditions, and high computational and energy demands. These challenges are particularly pronounced in applications like augmented reality navigation, especially when integrated with wearable devices featuring constrained computational resources and energy budgets. In response, this paper proposes a novel approach harnessing ambient wireless signals for outdoor environment reconstruction. By analyzing radio frequency (RF) data, the paper aims to deduce the environmental characteristics and digitally reconstruct the outdoor surroundings. Investigating the efficacy of selected deep learning (DL) techniques on the synthetic RF dataset WAIR-D, the study endeavors to address the research gap in this domain. Two DL-driven approaches are evaluated (convolutional U-Net and CLIP+ based on vision transformers), with performance assessed using metrics like intersection-over-union (IoU), Hausdorff distance, and Chamfer distance. The results demonstrate promising performance of the RF-based reconstruction method, paving the way towards lightweight and scalable reconstruction solutions.
**2305.06543**
- submitter: Lei Yan
- authors: Lei Yan, Zhijin Qin, Chunfeng Li, Rui Zhang, Yongzhao Li, and Xiaoming Tao
- title: QoE-based Semantic-Aware Resource Allocation for Multi-Task Networks
- comments: This work has been accepted by IEEE Transactions on Wireless Communications. arXiv admin note: text overlap with arXiv:2205.14530
- journal-ref: null
- doi: null
- report-no: null
- categories: cs.IT eess.SP math.IT
- license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- versions: [ { "created": "Thu, 11 May 2023 03:18:06 GMT", "version": "v1" }, { "created": "Mon, 8 Apr 2024 14:40:07 GMT", "version": "v2" } ]
- update_date: 2024-04-09
- authors_parsed: [ [ "Yan", "Lei", "" ], [ "Qin", "Zhijin", "" ], [ "Li", "Chunfeng", "" ], [ "Zhang", "Rui", "" ], [ "Li", "Yongzhao", "" ], [ "Tao", "Xiaoming", "" ] ]
- abstract (orig_abstract is identical): By transmitting task-related information only, semantic communications yield significant performance gains over conventional communications. However, the lack of mature semantic theory about semantic information quantification and performance evaluation makes it challenging to perform resource allocation for semantic communications, especially when multiple tasks coexist in the network. To cope with this challenge, we propose a quality-of-experience (QoE) based semantic-aware resource allocation method for multi-task networks in this paper. First, semantic entropy is defined to quantify the semantic information for different tasks, and the relationship between semantic entropy and Shannon entropy is analyzed. Then, we develop a novel QoE model to formulate the semantic-aware resource allocation in terms of semantic compression, channel assignment, and transmit power. The compatibility of the formulated problem with conventional communications is further demonstrated. To solve this problem, we decouple it into two subproblems and solve them by a developed deep Q-network (DQN) based method and a proposed low-complexity matching algorithm, respectively. Finally, simulation results validate the effectiveness and superiority of the proposed method, as well as its compatibility with conventional communications.
**1801.09063**
- submitter: Parastoo Sadeghi
- authors: Yucheng Liu, Parastoo Sadeghi, Fatemeh Arbabjolfaei, and Young-Han Kim
- title: Capacity Theorems for Distributed Index Coding
- comments: null
- journal-ref: null
- doi: null
- report-no: null
- categories: cs.IT math.IT
- license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- versions: [ { "created": "Sat, 27 Jan 2018 10:05:49 GMT", "version": "v1" }, { "created": "Sun, 5 Apr 2020 03:37:21 GMT", "version": "v2" } ]
- update_date: 2020-04-07
- authors_parsed: [ [ "Liu", "Yucheng", "" ], [ "Sadeghi", "Parastoo", "" ], [ "Arbabjolfaei", "Fatemeh", "" ], [ "Kim", "Young-Han", "" ] ]
- abstract (orig_abstract is identical): In index coding, a server broadcasts multiple messages to their respective receivers, each with some side information that can be utilized to reduce the amount of communication from the server. Distributed index coding is an extension of index coding in which the messages are broadcast from multiple servers, each storing different subsets of the messages. In this paper, the optimal tradeoff among the message rates and the server broadcast rates, which is defined formally as the capacity region, is studied for a general distributed index coding problem. Inner and outer bounds on the capacity region are established that have matching sum-rates for all 218 non-isomorphic four-message problems with equal link capacities for all the links from servers to receivers. The proposed inner bound is built on a distributed composite coding scheme that outperforms the existing schemes by incorporating more flexible decoding configurations and enhanced fractional rate allocations into two-stage composite coding, a scheme that was originally introduced for centralized index coding. The proposed outer bound is built on the polymatroidal axioms of entropy, as well as functional dependences such as the $\rm{fd}$-separation introduced by the multi-server nature of the problem. This outer bound utilizes general groupings of servers with different levels of granularity, which allows a natural tradeoff between computational complexity and tightness of the bound, and includes and improves upon all existing outer bounds for distributed index coding. Specific features of the proposed inner and outer bounds are demonstrated through concrete examples with four or five messages.
**1801.06274**
- submitter: Yuhao Zhu
- authors: Yuhao Zhu, Matthew Mattina, Paul Whatmough
- title: Mobile Machine Learning Hardware at ARM: A Systems-on-Chip (SoC) Perspective
- comments: null
- journal-ref: null
- doi: null
- report-no: null
- categories: cs.LG cs.AR cs.NE
- license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- versions: [ { "created": "Fri, 19 Jan 2018 02:42:10 GMT", "version": "v1" }, { "created": "Thu, 1 Feb 2018 23:54:27 GMT", "version": "v2" } ]
- update_date: 2018-02-05
- authors_parsed: [ [ "Zhu", "Yuhao", "" ], [ "Mattina", "Matthew", "" ], [ "Whatmough", "Paul", "" ] ]
- abstract (orig_abstract is identical): Machine learning is playing an increasingly significant role in emerging mobile application domains such as AR/VR, ADAS, etc. Accordingly, hardware architects have designed customized hardware for machine learning algorithms, especially neural networks, to improve compute efficiency. However, machine learning is typically just one processing stage in complex end-to-end applications, involving multiple components in a mobile Systems-on-a-chip (SoC). Focusing only on ML accelerators loses bigger optimization opportunity at the system (SoC) level. This paper argues that hardware architects should expand the optimization scope to the entire SoC. We demonstrate one particular case-study in the domain of continuous computer vision where camera sensor, image signal processor (ISP), memory, and NN accelerator are synergistically co-designed to achieve optimal system-level efficiency.
**2209.15439**
- submitter: Yifan Lu
- authors: Yifan Lu, Gurkirt Singh, Suman Saha, Luc Van Gool
- title: Exploiting Instance-based Mixed Sampling via Auxiliary Source Domain Supervision for Domain-adaptive Action Detection
- comments: null
- journal-ref: null
- doi: null
- report-no: null
- categories: cs.CV
- license: http://creativecommons.org/licenses/by-nc-sa/4.0/
- versions: [ { "created": "Wed, 28 Sep 2022 22:03:25 GMT", "version": "v1" }, { "created": "Thu, 6 Oct 2022 09:48:43 GMT", "version": "v2" } ]
- update_date: 2022-10-07
- authors_parsed: [ [ "Lu", "Yifan", "" ], [ "Singh", "Gurkirt", "" ], [ "Saha", "Suman", "" ], [ "Van Gool", "Luc", "" ] ]
- abstract (orig_abstract is identical): We propose a novel domain adaptive action detection approach and a new adaptation protocol that leverages the recent advancements in image-level unsupervised domain adaptation (UDA) techniques and handles vagaries of instance-level video data. Self-training combined with cross-domain mixed sampling has shown remarkable performance gain in semantic segmentation in the UDA (unsupervised domain adaptation) context. Motivated by this fact, we propose an approach for human action detection in videos that transfers knowledge from the source domain (annotated dataset) to the target domain (unannotated dataset) using mixed sampling and pseudo-label-based self-training. The existing UDA techniques follow a ClassMix algorithm for semantic segmentation. However, simply adopting ClassMix for action detection does not work, mainly because these are two entirely different problems, i.e., pixel-label classification vs. instance-label detection. To tackle this, we propose a novel action instance mixed sampling technique that combines information across domains based on action instances instead of action classes. Moreover, we propose a new UDA training protocol that addresses the long-tail sample distribution and domain shift problem by using supervision from an auxiliary source domain (ASD). For the ASD, we propose a new action detection dataset with dense frame-level annotations. We name our proposed framework as domain-adaptive action instance mixing (DA-AIM). We demonstrate that DA-AIM consistently outperforms prior works on challenging domain adaptation benchmarks. The source code is available at https://github.com/wwwfan628/DA-AIM.
**1801.02690**
- submitter: Benjamin Elizalde
- authors: Abelino Jimenez, Benjamin Elizalde, Bhiksha Raj
- title: DCASE 2017 Task 1: Acoustic Scene Classification Using Shift-Invariant Kernels and Random Features
- comments: null
- journal-ref: null
- doi: null
- report-no: null
- categories: cs.SD eess.AS
- license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- versions: [ { "created": "Mon, 8 Jan 2018 21:12:49 GMT", "version": "v1" } ]
- update_date: 2018-01-10
- authors_parsed: [ [ "Jimenez", "Abelino", "" ], [ "Elizalde", "Benjamin", "" ], [ "Raj", "Bhiksha", "" ] ]
- abstract (orig_abstract is identical): Acoustic scene recordings are represented by different types of handcrafted or Neural Network-derived features. These features, typically of thousands of dimensions, are classified in state of the art approaches using kernel machines, such as the Support Vector Machines (SVM). However, the complexity of training these methods increases with the dimensionality of these input features and the size of the dataset. A solution is to map the input features to a randomized lower-dimensional feature space. The resulting random features can approximate non-linear kernels with faster linear kernel computation. In this work, we computed a set of 6,553 input features and used them to compute random features to approximate three types of kernels, Gaussian, Laplacian and Cauchy. We compared their performance using an SVM in the context of the DCASE Task 1 - Acoustic Scene Classification. Experiments show that both input and random features outperformed the DCASE baseline by an absolute 4%. Moreover, the random features reduced the dimensionality of the input by more than three times with minimal loss of performance, and by more than six times while still outperforming the baseline. Hence, random features could be employed by state of the art approaches to compute low-storage features and perform faster kernel computations.
**2211.12223**
- submitter: Hassan Hussein
- authors: Hassan Hussein, Allard Oelen, Oliver Karras, Sören Auer
- title: KGMM -- A Maturity Model for Scholarly Knowledge Graphs based on Intertwined Human-Machine Collaboration
- comments: Accepted as a full paper at the ICADL 2022: International Conference on Asian Digital Libraries 2022
- journal-ref: null
- doi: null
- report-no: null
- categories: cs.DL cs.HC
- license: http://creativecommons.org/licenses/by-nc-sa/4.0/
- versions: [ { "created": "Tue, 22 Nov 2022 12:29:08 GMT", "version": "v1" } ]
- update_date: 2022-11-23
- authors_parsed: [ [ "Hussein", "Hassan", "" ], [ "Oelen", "Allard", "" ], [ "Karras", "Oliver", "" ], [ "Auer", "Sören", "" ] ]
- abstract (orig_abstract is identical): Knowledge Graphs (KG) have gained increasing importance in science, business and society in the last years. However, most knowledge graphs were either extracted or compiled from existing sources. There are only relatively few examples where knowledge graphs were genuinely created by an intertwined human-machine collaboration. Also, since the quality of data and knowledge graphs is of paramount importance, a number of data quality assessment models have been proposed. However, they do not take the specific aspects of intertwined human-machine curated knowledge graphs into account. In this work, we propose a graded maturity model for scholarly knowledge graphs (KGMM), which specifically focuses on aspects related to the joint, evolutionary curation of knowledge graphs for digital libraries. Our model comprises 5 maturity stages with 20 quality measures. We demonstrate the implementation of our model in a large scale scholarly knowledge graph curation effort.
**1808.04468**
- submitter: Yinlam Chow
- authors: Jonathan Lacotte and Mohammad Ghavamzadeh and Yinlam Chow and Marco Pavone
- title: Risk-Sensitive Generative Adversarial Imitation Learning
- comments: null
- journal-ref: null
- doi: null
- report-no: null
- categories: cs.LG cs.AI stat.ML
- license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- versions: [ { "created": "Mon, 13 Aug 2018 21:08:46 GMT", "version": "v1" }, { "created": "Mon, 24 Dec 2018 02:41:29 GMT", "version": "v2" } ]
- update_date: 2018-12-27
- authors_parsed: [ [ "Lacotte", "Jonathan", "" ], [ "Ghavamzadeh", "Mohammad", "" ], [ "Chow", "Yinlam", "" ], [ "Pavone", "Marco", "" ] ]
- abstract (orig_abstract is identical): We study risk-sensitive imitation learning where the agent's goal is to perform at least as well as the expert in terms of a risk profile. We first formulate our risk-sensitive imitation learning setting. We consider the generative adversarial approach to imitation learning (GAIL) and derive an optimization problem for our formulation, which we call risk-sensitive GAIL (RS-GAIL). We then derive two different versions of our RS-GAIL optimization problem that aim at matching the risk profiles of the agent and the expert w.r.t. Jensen-Shannon (JS) divergence and Wasserstein distance, and develop risk-sensitive generative adversarial imitation learning algorithms based on these optimization problems. We evaluate the performance of our algorithms and compare them with GAIL and the risk-averse imitation learning (RAIL) algorithms in two MuJoCo and two OpenAI classical control tasks.
**1508.06853**
- submitter: Daniele Liciotti
- authors: Daniele Liciotti, Marco Contigiani, Emanuele Frontoni, Adriano Mancini, Primo Zingaretti, Valerio Placidi
- title: Shopper Analytics: a customer activity recognition system using a distributed RGB-D camera network
- comments: null
- journal-ref: null
- doi: null
- report-no: null
- categories: cs.CV
- license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- versions: [ { "created": "Thu, 27 Aug 2015 13:31:09 GMT", "version": "v1" } ]
- update_date: 2015-08-28
- authors_parsed: [ [ "Liciotti", "Daniele", "" ], [ "Contigiani", "Marco", "" ], [ "Frontoni", "Emanuele", "" ], [ "Mancini", "Adriano", "" ], [ "Zingaretti", "Primo", "" ], [ "Placidi", "Valerio", "" ] ]
- abstract (orig_abstract is identical): The aim of this paper is to present an integrated system consisting of an RGB-D camera and software able to monitor shoppers in intelligent retail environments. We propose an innovative low cost smart system that can understand the shoppers' behavior and, in particular, their interactions with the products on the shelves, with the aim to develop an automatic RGB-D technique for video analysis. The system of cameras detects the presence of people and univocally identifies them. Through the depth frames, the system detects the interactions of the shoppers with the products on the shelf and determines if a product is picked up, if the product is taken and then put back, and finally, if there is no contact with the products. The system is low cost and easy to install, and experimental results demonstrated that its performances are satisfactory also in real environments.
**2310.05034**
- submitter: Zhifeng Hu
- authors: Zhifeng Hu, Chong Han, Xudong Wang
- title: Deep Reinforcement Learning Based Cross-Layer Design in Terahertz Mesh Backhaul Networks
- comments: null
- journal-ref: null
- doi: null
- report-no: null
- categories: cs.LG
- license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- versions: [ { "created": "Sun, 8 Oct 2023 06:36:00 GMT", "version": "v1" } ]
- update_date: 2023-10-10
- authors_parsed: [ [ "Hu", "Zhifeng", "" ], [ "Han", "Chong", "" ], [ "Wang", "Xudong", "" ] ]
- abstract (orig_abstract is identical): Supporting ultra-high data rates and flexible reconfigurability, Terahertz (THz) mesh networks are attractive for next-generation wireless backhaul systems that empower the integrated access and backhaul (IAB). In THz mesh backhaul networks, the efficient cross-layer routing and long-term resource allocation is yet an open problem due to dynamic traffic demands as well as possible link failures caused by the high directivity and high non-line-of-sight (NLoS) path loss of THz spectrum. In addition, unpredictable data traffic and the mixed integer programming property with the NP-hard nature further challenge the effective routing and long-term resource allocation design. In this paper, a deep reinforcement learning (DRL) based cross-layer design in THz mesh backhaul networks (DEFLECT) is proposed, by considering dynamic traffic demands and possible sudden link failures. In DEFLECT, a heuristic routing metric is first devised to facilitate resource efficiency (RE) enhancement regarding energy and sub-array usages. Furthermore, a DRL based resource allocation algorithm is developed to realize long-term RE maximization and fast recovery from broken links. Specifically in the DRL method, the exploited multi-task structure cooperatively benefits joint power and sub-array allocation. Additionally, the leveraged hierarchical architecture realizes tailored resource allocation for each base station and learned knowledge transfer for fast recovery. Simulation results show that DEFLECT routing consumes less resource, compared to the minimal hop-count metric. Moreover, unlike conventional DRL methods causing packet loss and second-level latency, DEFLECT DRL realizes the long-term RE maximization with no packet loss and millisecond-level latency, and recovers resource-efficient backhaul from broken links within 1s.
1810.01997
Guna Prasaad
Guna Prasaad, Alvin Cheung, Dan Suciu
Improving High Contention OLTP Performance via Transaction Scheduling
null
null
null
null
cs.DB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Research in transaction processing has made significant progress in improving the performance of multi-core in-memory transactional systems. However, the focus has mainly been on low-contention workloads. Modern transactional systems perform poorly on workloads with transactions accessing a few highly contended data items. We observe that most transactional workloads, including those with high contention, can be divided into clusters of conflict-free transactions and a small set of residuals. In this paper, we introduce a new concurrency control protocol called Strife that leverages this observation. Strife executes transactions in batches, where each batch is partitioned into clusters of conflict-free transactions and a small set of residual transactions. The conflict-free clusters are executed in parallel without any concurrency control, followed by the residual cluster, which is executed either serially or with concurrency control. We present a low-overhead algorithm that partitions a batch of transactions into clusters with no cross-cluster conflicts and a small residual cluster. We evaluate Strife against the optimistic concurrency control protocol and several variants of two-phase locking, where the latter is known to perform better than other concurrency protocols under high contention, and show that Strife can improve transactional throughput by up to 2x. We also perform an in-depth micro-benchmark analysis to empirically characterize the performance and quality of our clustering algorithm.
[ { "created": "Wed, 3 Oct 2018 22:53:29 GMT", "version": "v1" } ]
2018-10-05
[ [ "Prasaad", "Guna", "" ], [ "Cheung", "Alvin", "" ], [ "Suciu", "Dan", "" ] ]
Research in transaction processing has made significant progress in improving the performance of multi-core in-memory transactional systems. However, the focus has mainly been on low-contention workloads. Modern transactional systems perform poorly on workloads with transactions accessing a few highly contended data items. We observe that most transactional workloads, including those with high contention, can be divided into clusters of conflict-free transactions and a small set of residuals. In this paper, we introduce a new concurrency control protocol called Strife that leverages this observation. Strife executes transactions in batches, where each batch is partitioned into clusters of conflict-free transactions and a small set of residual transactions. The conflict-free clusters are executed in parallel without any concurrency control, followed by the residual cluster, which is executed either serially or with concurrency control. We present a low-overhead algorithm that partitions a batch of transactions into clusters with no cross-cluster conflicts and a small residual cluster. We evaluate Strife against the optimistic concurrency control protocol and several variants of two-phase locking, where the latter is known to perform better than other concurrency protocols under high contention, and show that Strife can improve transactional throughput by up to 2x. We also perform an in-depth micro-benchmark analysis to empirically characterize the performance and quality of our clustering algorithm.
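The batch-partitioning idea can be made concrete with a union-find pass over the data items each transaction touches. This is a simplified sketch, not Strife's actual low-overhead algorithm: it produces no residual cluster, so transactions sharing a hot item simply merge into one cluster.

```python
class UnionFind:
    def __init__(self):
        self.parent = {}

    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[ra] = rb

def partition_batch(batch):
    """Group transactions that touch a common data item into one cluster.

    batch: list of (txn_id, non-empty set of accessed keys). Returns
    clusters of txn ids with no cross-cluster conflicts. Hot keys simply
    merge clusters here; Strife instead diverts them to a residual set.
    """
    uf = UnionFind()
    for txn_id, keys in batch:
        keys = list(keys)
        for k in keys[1:]:          # connect all keys one txn touches
            uf.union(keys[0], k)
    clusters = {}
    for txn_id, keys in batch:
        root = uf.find(next(iter(keys)))
        clusters.setdefault(root, []).append(txn_id)
    return list(clusters.values())

# Toy usage: t1 and t2 conflict on key 'a'; t3 is independent.
print(partition_batch([(1, {"a", "b"}), (2, {"a"}), (3, {"c"})]))
```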
2312.12244
Mohammadali Mohammadi
Mohamed Elfiatoure and Mohammadali Mohammadi and Hien Quoc Ngo and Peter J. Smith and Michail Matthaiou
Protecting Massive MIMO-Radar Coexistence: Precoding Design and Power Control
10 figures, IEEE Open Journal of the Communications Society
null
null
null
cs.IT eess.SP math.IT
http://creativecommons.org/licenses/by/4.0/
This paper studies the coexistence between a downlink multiuser massive multiple-input multiple-output (MIMO) communication system and MIMO radar. The performance of the massive MIMO system with maximum ratio (MR), zero-forcing (ZF), and protective ZF (PZF) precoding designs is characterized in terms of spectral efficiency (SE), taking channel estimation errors and power control into account. The idea of PZF precoding relies on projecting the information-bearing signal onto the null space of the radar channel to protect the radar against communication signals. We further derive closed-form expressions for the detection probability of the radar system for the considered precoding designs. By leveraging the closed-form expressions for the SE and detection probability, we formulate a power control problem at the radar and base station (BS) to maximize the detection probability while satisfying the per-user SE requirements. This optimization problem can be efficiently tackled via the bisection method by solving a linear feasibility problem at each step. Our analysis and simulations show that the PZF design has the highest detection probability among all designs, with intermediate SE performance compared to the other two designs. Moreover, by optimally selecting the power control coefficients at the BS and radar, the detection probability improves significantly.
[ { "created": "Tue, 19 Dec 2023 15:28:00 GMT", "version": "v1" } ]
2023-12-20
[ [ "Elfiatoure", "Mohamed", "" ], [ "Mohammadi", "Mohammadali", "" ], [ "Ngo", "Hien Quoc", "" ], [ "Smith", "Peter J.", "" ], [ "Matthaiou", "Michail", "" ] ]
This paper studies the coexistence between a downlink multiuser massive multiple-input multiple-output (MIMO) communication system and MIMO radar. The performance of the massive MIMO system with maximum ratio (MR), zero-forcing (ZF), and protective ZF (PZF) precoding designs is characterized in terms of spectral efficiency (SE), taking channel estimation errors and power control into account. The idea of PZF precoding relies on projecting the information-bearing signal onto the null space of the radar channel to protect the radar against communication signals. We further derive closed-form expressions for the detection probability of the radar system for the considered precoding designs. By leveraging the closed-form expressions for the SE and detection probability, we formulate a power control problem at the radar and base station (BS) to maximize the detection probability while satisfying the per-user SE requirements. This optimization problem can be efficiently tackled via the bisection method by solving a linear feasibility problem at each step. Our analysis and simulations show that the PZF design has the highest detection probability among all designs, with intermediate SE performance compared to the other two designs. Moreover, by optimally selecting the power control coefficients at the BS and radar, the detection probability improves significantly.
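The abstract states that the power control problem is solved by bisection over a linear feasibility problem. A generic sketch of that pattern, with the feasibility oracle left as an assumed callable:

```python
def bisect_max(feasible, lo=0.0, hi=1.0, tol=1e-4):
    """Maximize a scalar target t for which feasible(t) is monotone.

    'feasible' stands in for the linear feasibility check from the
    abstract (can the per-user SE constraints be met while the radar
    reaches detection-probability level t?). The oracle itself is an
    assumption; only the bisection skeleton is shown.
    """
    assert feasible(lo), "problem must be feasible at the lower bound"
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if feasible(mid):
            lo = mid        # target achievable: push it higher
        else:
            hi = mid        # target too ambitious: back off
    return lo

# Toy usage: the largest t satisfying a trivial oracle t <= 0.73.
print(bisect_max(lambda t: t <= 0.73))  # ~0.73
```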
2309.15817
Yangjun Ruan
Yangjun Ruan, Honghua Dong, Andrew Wang, Silviu Pitis, Yongchao Zhou, Jimmy Ba, Yann Dubois, Chris J. Maddison, Tatsunori Hashimoto
Identifying the Risks of LM Agents with an LM-Emulated Sandbox
null
null
null
null
cs.AI cs.CL cs.LG
http://creativecommons.org/licenses/by/4.0/
Recent advances in Language Model (LM) agents and tool use, exemplified by applications like ChatGPT Plugins, enable a rich set of capabilities but also amplify potential risks, such as leaking private data or causing financial losses. Identifying these risks is labor-intensive, necessitating implementing the tools, manually setting up the environment for each test scenario, and finding risky cases. As tools and agents become more complex, the high cost of testing these agents will make it increasingly difficult to find high-stakes, long-tailed risks. To address these challenges, we introduce ToolEmu: a framework that uses an LM to emulate tool execution and enables the testing of LM agents against a diverse range of tools and scenarios, without manual instantiation. Alongside the emulator, we develop an LM-based automatic safety evaluator that examines agent failures and quantifies associated risks. We test both the tool emulator and evaluator through human evaluation and find that 68.8% of failures identified with ToolEmu would be valid real-world agent failures. Using our curated initial benchmark consisting of 36 high-stakes tools and 144 test cases, we provide a quantitative risk analysis of current LM agents and identify numerous failures with potentially severe outcomes. Notably, even the safest LM agent exhibits such failures 23.9% of the time according to our evaluator, underscoring the need to develop safer LM agents for real-world deployment.
[ { "created": "Mon, 25 Sep 2023 17:08:02 GMT", "version": "v1" }, { "created": "Fri, 17 May 2024 17:17:45 GMT", "version": "v2" } ]
2024-05-20
[ [ "Ruan", "Yangjun", "" ], [ "Dong", "Honghua", "" ], [ "Wang", "Andrew", "" ], [ "Pitis", "Silviu", "" ], [ "Zhou", "Yongchao", "" ], [ "Ba", "Jimmy", "" ], [ "Dubois", "Yann", "" ], [ "Maddison", "Chris J.", "" ], [ "Hashimoto", "Tatsunori", "" ] ]
Recent advances in Language Model (LM) agents and tool use, exemplified by applications like ChatGPT Plugins, enable a rich set of capabilities but also amplify potential risks, such as leaking private data or causing financial losses. Identifying these risks is labor-intensive, necessitating implementing the tools, manually setting up the environment for each test scenario, and finding risky cases. As tools and agents become more complex, the high cost of testing these agents will make it increasingly difficult to find high-stakes, long-tailed risks. To address these challenges, we introduce ToolEmu: a framework that uses an LM to emulate tool execution and enables the testing of LM agents against a diverse range of tools and scenarios, without manual instantiation. Alongside the emulator, we develop an LM-based automatic safety evaluator that examines agent failures and quantifies associated risks. We test both the tool emulator and evaluator through human evaluation and find that 68.8% of failures identified with ToolEmu would be valid real-world agent failures. Using our curated initial benchmark consisting of 36 high-stakes tools and 144 test cases, we provide a quantitative risk analysis of current LM agents and identify numerous failures with potentially severe outcomes. Notably, even the safest LM agent exhibits such failures 23.9% of the time according to our evaluator, underscoring the need to develop safer LM agents for real-world deployment.
1703.00632
Divya Padmanabhan
Divya Padmanabhan, Satyanath Bhat, Prabuchandran K.J., Shirish Shevade and Y. Narahari
A Dominant Strategy Truthful, Deterministic Multi-Armed Bandit Mechanism with Logarithmic Regret
null
null
null
null
cs.GT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Stochastic multi-armed bandit (MAB) mechanisms are widely used in sponsored search auctions, crowdsourcing, online procurement, etc. Existing stochastic MAB mechanisms with a deterministic payment rule, proposed in the literature, necessarily suffer a regret of $\Omega(T^{2/3})$, where $T$ is the number of time steps. This happens because the existing mechanisms consider the worst-case scenario, where the means of the agents' stochastic rewards are separated by a very small amount that depends on $T$. We make, and exploit, the crucial observation that in most scenarios the separation between the agents' rewards is rarely a function of $T$. Moreover, in the case that the rewards of the arms are arbitrarily close, the regret contributed by such sub-optimal arms is minimal. Our idea is to allow the center to indicate the resolution, $\Delta$, with which the agents must be distinguished. This immediately leads us to introduce the notion of $\Delta$-Regret. Using sponsored search auctions as a concrete example (the same idea applies to other applications as well), we propose a dominant strategy incentive compatible (DSIC) and individually rational (IR) deterministic MAB mechanism, based on ideas from the Upper Confidence Bound (UCB) family of MAB algorithms. Remarkably, the proposed mechanism $\Delta$-UCB achieves a $\Delta$-regret of $O(\log T)$ for the case of sponsored search auctions. We first establish the results for single-slot sponsored search auctions and then non-trivially extend the results to the case where multiple slots are to be allocated.
[ { "created": "Thu, 2 Mar 2017 05:36:16 GMT", "version": "v1" }, { "created": "Fri, 29 May 2020 08:43:39 GMT", "version": "v2" } ]
2020-06-01
[ [ "Padmanabhan", "Divya", "" ], [ "Bhat", "Satyanath", "" ], [ "J.", "Prabuchandran K.", "" ], [ "Shevade", "Shirish", "" ], [ "Narahari", "Y.", "" ] ]
Stochastic multi-armed bandit (MAB) mechanisms are widely used in sponsored search auctions, crowdsourcing, online procurement, etc. Existing stochastic MAB mechanisms with a deterministic payment rule, proposed in the literature, necessarily suffer a regret of $\Omega(T^{2/3})$, where $T$ is the number of time steps. This happens because the existing mechanisms consider the worst-case scenario, where the means of the agents' stochastic rewards are separated by a very small amount that depends on $T$. We make, and exploit, the crucial observation that in most scenarios the separation between the agents' rewards is rarely a function of $T$. Moreover, in the case that the rewards of the arms are arbitrarily close, the regret contributed by such sub-optimal arms is minimal. Our idea is to allow the center to indicate the resolution, $\Delta$, with which the agents must be distinguished. This immediately leads us to introduce the notion of $\Delta$-Regret. Using sponsored search auctions as a concrete example (the same idea applies to other applications as well), we propose a dominant strategy incentive compatible (DSIC) and individually rational (IR) deterministic MAB mechanism, based on ideas from the Upper Confidence Bound (UCB) family of MAB algorithms. Remarkably, the proposed mechanism $\Delta$-UCB achieves a $\Delta$-regret of $O(\log T)$ for the case of sponsored search auctions. We first establish the results for single-slot sponsored search auctions and then non-trivially extend the results to the case where multiple slots are to be allocated.
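A minimal sketch of the resolution idea, assuming a UCB1-style index and treating arms whose indices fall within delta of the best as indistinguishable; the DSIC payment rule, which is central to the mechanism, is omitted entirely.

```python
import math
import random

def delta_ucb(pull, n_arms, delta, horizon):
    """UCB1-style allocation that treats arms within resolution delta as tied.

    pull(a) returns a stochastic reward in [0, 1]. This captures only the
    delta-resolution idea from the abstract; payments are not modeled.
    Requires horizon >= n_arms.
    """
    counts = [0] * n_arms
    sums = [0.0] * n_arms
    for a in range(n_arms):            # initialize: pull each arm once
        sums[a] += pull(a)
        counts[a] += 1
    for t in range(n_arms, horizon):
        ucb = [sums[a] / counts[a]
               + math.sqrt(2 * math.log(t + 1) / counts[a])
               for a in range(n_arms)]
        best = max(ucb)
        # Arms within delta of the best index are not distinguished.
        tied = [a for a in range(n_arms) if best - ucb[a] <= delta]
        a = random.choice(tied)
        sums[a] += pull(a)
        counts[a] += 1
    return [sums[a] / counts[a] for a in range(n_arms)]

# Toy usage: two Bernoulli arms whose means differ by less than delta.
means = [0.50, 0.52]
est = delta_ucb(lambda a: float(random.random() < means[a]), 2, 0.05, 2000)
print(est)
```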
2212.11211
Fahmida Tasnim Lisa
Fahmida Tasnim Lisa, Md. Zarif Hossain, Sharmin Naj Mou, Shahriar Ivan, and Md. Hasanul Kabir (Islamic University of Technology, Gazipur, Bangladesh)
Land Cover and Land Use Detection using Semi-Supervised Learning
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Semi-supervised learning (SSL) has made significant strides in the field of remote sensing. Finding large labeled datasets for SSL methods is uncommon, and manually labeling datasets is expensive and time-consuming. Furthermore, accurately identifying remote sensing satellite images is more complicated than it is for conventional images. Class-imbalanced datasets are another prevalent phenomenon, and models trained on them become biased towards the majority classes. This is a critical issue, leading to subpar SSL model performance. We aim to address the issue of labeling unlabeled data and to solve the model bias problem caused by imbalanced datasets while achieving better accuracy. To accomplish this, we create "artificial" labels and train a model to reach reasonable accuracy. We iteratively redistribute the classes through resampling using a distribution alignment technique. We use a variety of class-imbalanced satellite image datasets: EuroSAT, UCM, and WHU-RS19. On the balanced UCM dataset, our method outperforms the previous methods MSMatch and FixMatch by 1.21% and 0.6%, respectively. On the imbalanced EuroSAT dataset, our method outperforms MSMatch and FixMatch by 1.08% and 1%, respectively. Our approach significantly lessens the requirement for labeled data, consistently outperforms alternative approaches, and resolves the issue of model bias caused by class imbalance in datasets.
[ { "created": "Wed, 21 Dec 2022 17:36:28 GMT", "version": "v1" } ]
2022-12-22
[ [ "Lisa", "Fahmida Tasnim", "", "Islamic University of Technology, Gazipur,\n Bangladesh" ], [ "Hossain", "Md. Zarif", "", "Islamic University of Technology, Gazipur,\n Bangladesh" ], [ "Mou", "Sharmin Naj", "", "Islamic University of Technology, Gazipur,\n Bangladesh" ], [ "Ivan", "Shahriar", "", "Islamic University of Technology, Gazipur,\n Bangladesh" ], [ "Kabir", "Md. Hasanul", "", "Islamic University of Technology, Gazipur,\n Bangladesh" ] ]
Semi-supervised learning (SSL) has made significant strides in the field of remote sensing. Finding large labeled datasets for SSL methods is uncommon, and manually labeling datasets is expensive and time-consuming. Furthermore, accurately identifying remote sensing satellite images is more complicated than it is for conventional images. Class-imbalanced datasets are another prevalent phenomenon, and models trained on them become biased towards the majority classes. This is a critical issue, leading to subpar SSL model performance. We aim to address the issue of labeling unlabeled data and to solve the model bias problem caused by imbalanced datasets while achieving better accuracy. To accomplish this, we create "artificial" labels and train a model to reach reasonable accuracy. We iteratively redistribute the classes through resampling using a distribution alignment technique. We use a variety of class-imbalanced satellite image datasets: EuroSAT, UCM, and WHU-RS19. On the balanced UCM dataset, our method outperforms the previous methods MSMatch and FixMatch by 1.21% and 0.6%, respectively. On the imbalanced EuroSAT dataset, our method outperforms MSMatch and FixMatch by 1.08% and 1%, respectively. Our approach significantly lessens the requirement for labeled data, consistently outperforms alternative approaches, and resolves the issue of model bias caused by class imbalance in datasets.
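The distribution alignment step can be sketched in a few lines: rescale each prediction by the ratio of a target class distribution to a running average of past predictions, then renormalize, which keeps pseudo-labels from collapsing onto majority classes. Constants and the subsequent resampling step are assumptions, not the paper's exact recipe.

```python
import numpy as np

def align_distribution(probs, running_avg, target):
    """Distribution alignment for pseudo-labels (FixMatch-style sketch).

    probs:       (N, C) model predictions on unlabeled images
    running_avg: (C,)   running mean of past predictions
    target:      (C,)   desired class distribution (e.g., uniform)
    The epsilon and the resampling that follows in the paper are assumed.
    """
    scaled = probs * (target / (running_avg + 1e-6))
    return scaled / scaled.sum(axis=1, keepdims=True)

# Toy usage: a biased prediction pulled back toward a uniform target.
p = np.array([[0.7, 0.2, 0.1]])
avg = np.array([0.6, 0.25, 0.15])
print(align_distribution(p, avg, np.ones(3) / 3))
```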
2103.11144
Carmel Rabinovitz
Carmel Rabinovitz, Niko Grupen and Aviv Tamar
Unsupervised Feature Learning for Manipulation with Contrastive Domain Randomization
Accepted to ICRA 2021, code can be found at https://github.com/carmelrabinov/cdr
null
null
null
cs.LG cs.AI cs.RO
http://creativecommons.org/licenses/by/4.0/
Robotic tasks such as manipulation with visual inputs require image features that capture the physical properties of the scene, e.g., the position and configuration of objects. Recently, it has been suggested to learn such features in an unsupervised manner from simulated, self-supervised, robot interaction; the idea being that high-level physical properties are well captured by modern physical simulators, and their representation from visual inputs may transfer well to the real world. In particular, learning methods based on noise contrastive estimation have shown promising results. To robustify the simulation-to-real transfer, domain randomization (DR) was suggested for learning features that are invariant to irrelevant visual properties such as textures or lighting. In this work, however, we show that a naive application of DR to unsupervised learning based on contrastive estimation does not promote invariance, as the loss function maximizes mutual information between the features and both the relevant and irrelevant visual properties. We propose a simple modification of the contrastive loss to fix this, exploiting the fact that we can control the simulated randomization of visual properties. Our approach learns physical features that are significantly more robust to visual domain variation, as we demonstrate using both rigid and non-rigid objects.
[ { "created": "Sat, 20 Mar 2021 09:54:45 GMT", "version": "v1" }, { "created": "Tue, 8 Jun 2021 12:52:56 GMT", "version": "v2" } ]
2021-06-09
[ [ "Rabinovitz", "Carmel", "" ], [ "Grupen", "Niko", "" ], [ "Tamar", "Aviv", "" ] ]
Robotic tasks such as manipulation with visual inputs require image features that capture the physical properties of the scene, e.g., the position and configuration of objects. Recently, it has been suggested to learn such features in an unsupervised manner from simulated, self-supervised, robot interaction; the idea being that high-level physical properties are well captured by modern physical simulators, and their representation from visual inputs may transfer well to the real world. In particular, learning methods based on noise contrastive estimation have shown promising results. To robustify the simulation-to-real transfer, domain randomization (DR) was suggested for learning features that are invariant to irrelevant visual properties such as textures or lighting. In this work, however, we show that a naive application of DR to unsupervised learning based on contrastive estimation does not promote invariance, as the loss function maximizes mutual information between the features and both the relevant and irrelevant visual properties. We propose a simple modification of the contrastive loss to fix this, exploiting the fact that we can control the simulated randomization of visual properties. Our approach learns physical features that are significantly more robust to visual domain variation, as we demonstrate using both rigid and non-rigid objects.
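The proposed fix can be illustrated with an InfoNCE loss in which the positive pair is the same simulated state rendered under two different visual randomizations, so the randomized nuisances carry no information about which pairs match. A PyTorch sketch of that spirit, not the paper's exact objective:

```python
import torch
import torch.nn.functional as F

def dr_contrastive_loss(z_a, z_b, temperature=0.1):
    """InfoNCE where row i of z_a and z_b encode the SAME simulator state
    rendered under two different visual randomizations (texture, lighting).

    Treating cross-randomization pairs as positives pushes the features to
    drop the randomized nuisances. Sketch only; the paper's modification
    may differ in its exact form.
    """
    z_a = F.normalize(z_a, dim=1)
    z_b = F.normalize(z_b, dim=1)
    logits = z_a @ z_b.t() / temperature          # (N, N) similarity matrix
    labels = torch.arange(z_a.size(0))            # positives on the diagonal
    return F.cross_entropy(logits, labels)

# Toy usage with random 8-sample, 32-dim embeddings.
loss = dr_contrastive_loss(torch.randn(8, 32), torch.randn(8, 32))
print(loss.item())
```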
2110.05727
Yilun Zhu
Yilun Zhu, Sameer Pradhan, Amir Zeldes
Anatomy of OntoGUM--Adapting GUM to the OntoNotes Scheme to Evaluate Robustness of SOTA Coreference Algorithms
CRAC 2021. arXiv admin note: substantial text overlap with arXiv:2106.00933
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
SOTA coreference resolution produces increasingly impressive scores on the OntoNotes benchmark. However, the lack of comparable data following the same scheme for more genres makes it difficult to evaluate generalizability to open-domain data. Zhu et al. (2021) introduced the OntoGUM corpus for evaluating the generalizability of the latest neural LM-based end-to-end systems. This paper covers details of the mapping process, which is a set of deterministic rules applied to the rich syntactic and discourse annotations manually created in the GUM corpus. Out-of-domain evaluation across 12 genres shows nearly 15-20% degradation for both deterministic and deep learning systems, indicating a lack of generalizability or covert overfitting in existing coreference resolution models.
[ { "created": "Tue, 12 Oct 2021 03:52:49 GMT", "version": "v1" } ]
2021-10-13
[ [ "Zhu", "Yilun", "" ], [ "Pradhan", "Sameer", "" ], [ "Zeldes", "Amir", "" ] ]
SOTA coreference resolution produces increasingly impressive scores on the OntoNotes benchmark. However, the lack of comparable data following the same scheme for more genres makes it difficult to evaluate generalizability to open-domain data. Zhu et al. (2021) introduced the OntoGUM corpus for evaluating the generalizability of the latest neural LM-based end-to-end systems. This paper covers details of the mapping process, which is a set of deterministic rules applied to the rich syntactic and discourse annotations manually created in the GUM corpus. Out-of-domain evaluation across 12 genres shows nearly 15-20% degradation for both deterministic and deep learning systems, indicating a lack of generalizability or covert overfitting in existing coreference resolution models.
2002.08103
Pierre Monnin
Pierre Monnin, Miguel Couceiro, Amedeo Napoli, Adrien Coulet
Knowledge-Based Matching of $n$-ary Tuples
null
null
10.1007/978-3-030-57855-8_4
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
An increasing number of data and knowledge sources are accessible by human and software agents in the expanding Semantic Web. Sources may differ in granularity or completeness, and thus be complementary. Consequently, they should be reconciled in order to unlock the full potential of their conjoint knowledge. In particular, units should be matched within and across sources, and their level of relatedness should be classified into equivalent, more specific, or similar. This task is challenging since knowledge units can be heterogeneously represented in sources (e.g., in terms of vocabularies). In this paper, we focus on matching n-ary tuples in a knowledge base with a rule-based methodology. To alleviate heterogeneity issues, we rely on domain knowledge expressed by ontologies. We tested our method on the biomedical domain of pharmacogenomics by searching alignments among 50,435 n-ary tuples from four different real-world sources. Results highlight noteworthy agreements and particularities within and across sources.
[ { "created": "Wed, 19 Feb 2020 11:01:33 GMT", "version": "v1" }, { "created": "Thu, 14 May 2020 18:51:53 GMT", "version": "v2" } ]
2020-11-12
[ [ "Monnin", "Pierre", "" ], [ "Couceiro", "Miguel", "" ], [ "Napoli", "Amedeo", "" ], [ "Coulet", "Adrien", "" ] ]
An increasing number of data and knowledge sources are accessible by human and software agents in the expanding Semantic Web. Sources may differ in granularity or completeness, and thus be complementary. Consequently, they should be reconciled in order to unlock the full potential of their conjoint knowledge. In particular, units should be matched within and across sources, and their level of relatedness should be classified into equivalent, more specific, or similar. This task is challenging since knowledge units can be heterogeneously represented in sources (e.g., in terms of vocabularies). In this paper, we focus on matching n-ary tuples in a knowledge base with a rule-based methodology. To alleviate heterogeneity issues, we rely on domain knowledge expressed by ontologies. We tested our method on the biomedical domain of pharmacogenomics by searching alignments among 50,435 n-ary tuples from four different real-world sources. Results highlight noteworthy agreements and particularities within and across sources.
2102.11917
Edmon Begoli
Jeremiah Duncan, Fabian Fallas, Chris Gropp, Emily Herron, Maria Mahbub, Paula Olaya, Eduardo Ponce, Tabitha K. Samuel, Daniel Schultz, Sudarshan Srinivasan, Maofeng Tang, Viktor Zenkov, Quan Zhou, Edmon Begoli
The Sensitivity of Word Embeddings-based Author Detection Models to Semantic-preserving Adversarial Perturbations
null
null
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by/4.0/
Authorship analysis is an important subject in the field of natural language processing. It allows the detection of the most likely writer of articles, news, books, or messages. This technique has multiple uses in tasks related to authorship attribution, detection of plagiarism, style analysis, sources of misinformation, etc. The focus of this paper is to explore the limitations and sensitivity of established approaches under adversarial manipulations of inputs. To this end, and using those established techniques, we first developed an experimental framework for author detection and input perturbations. Next, we experimentally evaluated the performance of the authorship detection model on a collection of semantic-preserving adversarial perturbations of input narratives. Finally, we compare and analyze the effects of different perturbation strategies and of input and model configurations on the author detection model.
[ { "created": "Tue, 23 Feb 2021 19:55:45 GMT", "version": "v1" } ]
2021-02-25
[ [ "Duncan", "Jeremiah", "" ], [ "Fallas", "Fabian", "" ], [ "Gropp", "Chris", "" ], [ "Herron", "Emily", "" ], [ "Mahbub", "Maria", "" ], [ "Olaya", "Paula", "" ], [ "Ponce", "Eduardo", "" ], [ "Samuel", "Tabitha K.", "" ], [ "Schultz", "Daniel", "" ], [ "Srinivasan", "Sudarshan", "" ], [ "Tang", "Maofeng", "" ], [ "Zenkov", "Viktor", "" ], [ "Zhou", "Quan", "" ], [ "Begoli", "Edmon", "" ] ]
Authorship analysis is an important subject in the field of natural language processing. It allows the detection of the most likely writer of articles, news, books, or messages. This technique has multiple uses in tasks related to authorship attribution, detection of plagiarism, style analysis, sources of misinformation, etc. The focus of this paper is to explore the limitations and sensitivity of established approaches under adversarial manipulations of inputs. To this end, and using those established techniques, we first developed an experimental framework for author detection and input perturbations. Next, we experimentally evaluated the performance of the authorship detection model on a collection of semantic-preserving adversarial perturbations of input narratives. Finally, we compare and analyze the effects of different perturbation strategies and of input and model configurations on the author detection model.
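One common family of semantic-preserving perturbations is synonym substitution. A toy sketch with a hand-made synonym table; the table and the substitution rate are illustrative assumptions, since the paper's actual perturbation strategies are not specified in the abstract.

```python
import random

# A hand-made synonym table, an illustrative assumption rather than the
# perturbation resource used in the paper.
SYNONYMS = {
    "quick": ["fast", "rapid"],
    "said": ["stated", "remarked"],
    "big": ["large", "sizable"],
}

def perturb(text, rate=0.3, rng=random.Random(0)):
    """Swap a fraction of words for synonyms, roughly preserving meaning."""
    out = []
    for w in text.split():
        key = w.lower()
        if key in SYNONYMS and rng.random() < rate:
            out.append(rng.choice(SYNONYMS[key]))
        else:
            out.append(w)
    return " ".join(out)

print(perturb("the quick fox said a big hello"))
```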
2212.00291
Zachary Ankner
Zachary Ankner, Alex Renda, Gintare Karolina Dziugaite, Jonathan Frankle, Tian Jin
The Effect of Data Dimensionality on Neural Network Prunability
null
null
null
null
cs.LG
http://creativecommons.org/licenses/by-nc-nd/4.0/
Practitioners prune neural networks for efficiency gains and generalization improvements, but few scrutinize the factors determining the prunability of a neural network: the maximum fraction of weights that pruning can remove without compromising the model's test accuracy. In this work, we study the properties of input data that may contribute to the prunability of a neural network. For high-dimensional input data such as images, text, and audio, the manifold hypothesis suggests that these high-dimensional inputs approximately lie on or near a significantly lower-dimensional manifold. Prior work demonstrates that the underlying low-dimensional structure of the input data may affect the sample efficiency of learning. In this paper, we investigate whether the low-dimensional structure of the input data affects the prunability of a neural network.
[ { "created": "Thu, 1 Dec 2022 05:33:25 GMT", "version": "v1" } ]
2022-12-02
[ [ "Ankner", "Zachary", "" ], [ "Renda", "Alex", "" ], [ "Dziugaite", "Gintare Karolina", "" ], [ "Frankle", "Jonathan", "" ], [ "Jin", "Tian", "" ] ]
Practitioners prune neural networks for efficiency gains and generalization improvements, but few scrutinize the factors determining the prunability of a neural network: the maximum fraction of weights that pruning can remove without compromising the model's test accuracy. In this work, we study the properties of input data that may contribute to the prunability of a neural network. For high-dimensional input data such as images, text, and audio, the manifold hypothesis suggests that these high-dimensional inputs approximately lie on or near a significantly lower-dimensional manifold. Prior work demonstrates that the underlying low-dimensional structure of the input data may affect the sample efficiency of learning. In this paper, we investigate whether the low-dimensional structure of the input data affects the prunability of a neural network.
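Prunability is commonly probed with global magnitude pruning: zero out the smallest weights at increasing fractions and record the largest fraction at which test accuracy holds. A PyTorch sketch of that standard probe; the paper's exact pruning protocol is not given in the abstract.

```python
import torch

def global_magnitude_prune(model, fraction):
    """Zero out the smallest-magnitude `fraction` of all weights in place.

    Sweeping `fraction` upward and reporting the largest value at which
    test accuracy holds yields a prunability estimate. Sketch only.
    """
    weights = [p for p in model.parameters() if p.dim() > 1]
    all_vals = torch.cat([p.detach().abs().flatten() for p in weights])
    k = int(fraction * all_vals.numel())
    if k == 0:
        return
    threshold = all_vals.kthvalue(k).values
    with torch.no_grad():
        for p in weights:
            p.mul_((p.abs() > threshold).float())

# Toy usage on a small MLP: prune half of all weights.
mlp = torch.nn.Sequential(torch.nn.Linear(16, 32), torch.nn.ReLU(),
                          torch.nn.Linear(32, 2))
global_magnitude_prune(mlp, 0.5)
```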
0709.0599
Igal Sason
Igal Sason
On Universal Properties of Capacity-Approaching LDPC Ensembles
Published in the IEEE Trans. on Information Theory, vol. 55, no. 7, pp. 2956 - 2990, July 2009
null
10.1109/TIT.2009.2021305
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper is focused on the derivation of some universal properties of capacity-approaching low-density parity-check (LDPC) code ensembles whose transmission takes place over memoryless binary-input output-symmetric (MBIOS) channels. Properties of the degree distributions, graphical complexity and the number of fundamental cycles in the bipartite graphs are considered via the derivation of information-theoretic bounds. These bounds are expressed in terms of the target block/bit error probability and the gap (in rate) to capacity. Most of the bounds are general for any decoding algorithm, and some others are proved under belief propagation (BP) decoding. Proving these bounds under a certain decoding algorithm validates them automatically also under any sub-optimal decoding algorithm. A proper modification of these bounds makes them universal for the set of all MBIOS channels which exhibit a given capacity. Bounds on the degree distributions and graphical complexity apply to finite-length LDPC codes and to the asymptotic case of an infinite block length. The bounds are compared with capacity-approaching LDPC code ensembles under BP decoding, and they are shown to be informative and easy to calculate. Finally, some interesting open problems are considered.
[ { "created": "Wed, 5 Sep 2007 12:25:58 GMT", "version": "v1" }, { "created": "Sun, 9 Sep 2007 14:10:41 GMT", "version": "v2" }, { "created": "Tue, 18 Sep 2007 07:49:42 GMT", "version": "v3" }, { "created": "Sun, 21 Oct 2007 11:52:19 GMT", "version": "v4" }, { "created": "Thu, 26 Feb 2015 07:40:50 GMT", "version": "v5" } ]
2016-11-17
[ [ "Sason", "Igal", "" ] ]
This paper is focused on the derivation of some universal properties of capacity-approaching low-density parity-check (LDPC) code ensembles whose transmission takes place over memoryless binary-input output-symmetric (MBIOS) channels. Properties of the degree distributions, graphical complexity and the number of fundamental cycles in the bipartite graphs are considered via the derivation of information-theoretic bounds. These bounds are expressed in terms of the target block/bit error probability and the gap (in rate) to capacity. Most of the bounds are general for any decoding algorithm, and some others are proved under belief propagation (BP) decoding. Proving these bounds under a certain decoding algorithm validates them automatically also under any sub-optimal decoding algorithm. A proper modification of these bounds makes them universal for the set of all MBIOS channels which exhibit a given capacity. Bounds on the degree distributions and graphical complexity apply to finite-length LDPC codes and to the asymptotic case of an infinite block length. The bounds are compared with capacity-approaching LDPC code ensembles under BP decoding, and they are shown to be informative and easy to calculate. Finally, some interesting open problems are considered.
2310.11689
Jiefeng Chen
Jiefeng Chen, Jinsung Yoon, Sayna Ebrahimi, Sercan O Arik, Tomas Pfister, Somesh Jha
Adaptation with Self-Evaluation to Improve Selective Prediction in LLMs
Paper published at Findings of the Association for Computational Linguistics: EMNLP, 2023
null
null
null
cs.CL cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Large language models (LLMs) have recently shown great advances in a variety of tasks, including natural language understanding and generation. However, their use in high-stakes decision-making scenarios is still limited due to the potential for errors. Selective prediction is a technique that can be used to improve the reliability of the LLMs by allowing them to abstain from making predictions when they are unsure of the answer. In this work, we propose a novel framework for adaptation with self-evaluation to improve the selective prediction performance of LLMs. Our framework is based on the idea of using parameter-efficient tuning to adapt the LLM to the specific task at hand while improving its ability to perform self-evaluation. We evaluate our method on a variety of question-answering (QA) datasets and show that it outperforms state-of-the-art selective prediction methods. For example, on the CoQA benchmark, our method improves the AUACC from 91.23% to 92.63% and improves the AUROC from 74.61% to 80.25%.
[ { "created": "Wed, 18 Oct 2023 03:34:59 GMT", "version": "v1" }, { "created": "Sat, 11 Nov 2023 19:29:42 GMT", "version": "v2" } ]
2023-11-14
[ [ "Chen", "Jiefeng", "" ], [ "Yoon", "Jinsung", "" ], [ "Ebrahimi", "Sayna", "" ], [ "Arik", "Sercan O", "" ], [ "Pfister", "Tomas", "" ], [ "Jha", "Somesh", "" ] ]
Large language models (LLMs) have recently shown great advances in a variety of tasks, including natural language understanding and generation. However, their use in high-stakes decision-making scenarios is still limited due to the potential for errors. Selective prediction is a technique that can be used to improve the reliability of the LLMs by allowing them to abstain from making predictions when they are unsure of the answer. In this work, we propose a novel framework for adaptation with self-evaluation to improve the selective prediction performance of LLMs. Our framework is based on the idea of using parameter-efficient tuning to adapt the LLM to the specific task at hand while improving its ability to perform self-evaluation. We evaluate our method on a variety of question-answering (QA) datasets and show that it outperforms state-of-the-art selective prediction methods. For example, on the CoQA benchmark, our method improves the AUACC from 91.23% to 92.63% and improves the AUROC from 74.61% to 80.25%.
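The selective-prediction decision rule itself is simple: answer when the model's self-evaluation confidence clears a threshold, abstain otherwise. A sketch with the tuned LLM and its self-evaluation head stubbed out as assumed callables:

```python
def selective_predict(question, answer_fn, self_eval_fn, threshold=0.8):
    """Answer only when the self-evaluation score clears a threshold.

    answer_fn(question)  -> candidate answer string
    self_eval_fn(q, a)   -> confidence in [0, 1]
    Both callables are assumptions standing in for the adapted LLM and
    its learned self-evaluation from the paper.
    """
    answer = answer_fn(question)
    confidence = self_eval_fn(question, answer)
    if confidence >= threshold:
        return answer, confidence
    return None, confidence  # abstain on low confidence

# Toy usage with stub callables.
ans, conf = selective_predict(
    "capital of France?",
    answer_fn=lambda q: "Paris",
    self_eval_fn=lambda q, a: 0.95,
)
print(ans, conf)
```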
2306.05106
Tao Gu
Alexander V. Gheorghiu, Tao Gu, David J. Pym
Proof-theoretic Semantics for Intuitionistic Multiplicative Linear Logic
27 pages
null
null
null
cs.LO math.LO
http://creativecommons.org/licenses/by/4.0/
This work is the first exploration of proof-theoretic semantics for a substructural logic. It focuses on the base-extension semantics (B-eS) for intuitionistic multiplicative linear logic (IMLL). The starting point is a review of Sandqvist's B-eS for intuitionistic propositional logic (IPL), for which we propose an alternative treatment of conjunction that takes the form of the generalized elimination rule for the connective. The resulting semantics is shown to be sound and complete. This motivates our main contribution, a B-eS for IMLL, in which the definitions of the logical constants all take the form of their elimination rule and for which soundness and completeness are established.
[ { "created": "Thu, 8 Jun 2023 11:13:57 GMT", "version": "v1" }, { "created": "Tue, 15 Aug 2023 16:12:24 GMT", "version": "v2" } ]
2024-03-20
[ [ "Gheorghiu", "Alexander V.", "" ], [ "Gu", "Tao", "" ], [ "Pym", "David J.", "" ] ]
This work is the first exploration of proof-theoretic semantics for a substructural logic. It focuses on the base-extension semantics (B-eS) for intuitionistic multiplicative linear logic (IMLL). The starting point is a review of Sandqvist's B-eS for intuitionistic propositional logic (IPL), for which we propose an alternative treatment of conjunction that takes the form of the generalized elimination rule for the connective. The resulting semantics is shown to be sound and complete. This motivates our main contribution, a B-eS for IMLL, in which the definitions of the logical constants all take the form of their elimination rule and for which soundness and completeness are established.
2107.11768
Linhao Zhang
Linhao Zhang, Yu Shi, Linjun Shou, Ming Gong, Houfeng Wang, Michael Zeng
A Joint and Domain-Adaptive Approach to Spoken Language Understanding
null
null
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by/4.0/
Spoken Language Understanding (SLU) is composed of two subtasks: intent detection (ID) and slot filling (SF). There are two lines of research on SLU. One jointly tackles these two subtasks to improve their prediction accuracy, and the other focuses on the domain-adaptation ability of one of the subtasks. In this paper, we attempt to bridge these two lines of research and propose a joint and domain-adaptive approach to SLU. We formulate SLU as a constrained generation task and utilize a dynamic vocabulary based on domain-specific ontology. We conduct experiments on the ASMixed and MTOD datasets and achieve competitive performance with previous state-of-the-art joint models. Besides, results show that our joint model can be effectively adapted to a new domain.
[ { "created": "Sun, 25 Jul 2021 09:38:42 GMT", "version": "v1" } ]
2021-07-27
[ [ "Zhang", "Linhao", "" ], [ "Shi", "Yu", "" ], [ "Shou", "Linjun", "" ], [ "Gong", "Ming", "" ], [ "Wang", "Houfeng", "" ], [ "Zeng", "Michael", "" ] ]
Spoken Language Understanding (SLU) is composed of two subtasks: intent detection (ID) and slot filling (SF). There are two lines of research on SLU. One jointly tackles these two subtasks to improve their prediction accuracy, and the other focuses on the domain-adaptation ability of one of the subtasks. In this paper, we attempt to bridge these two lines of research and propose a joint and domain-adaptive approach to SLU. We formulate SLU as a constrained generation task and utilize a dynamic vocabulary based on domain-specific ontology. We conduct experiments on the ASMixed and MTOD datasets and achieve competitive performance with previous state-of-the-art joint models. Besides, results show that our joint model can be effectively adapted to a new domain.
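Constrained generation with a dynamic vocabulary boils down to masking logits outside the currently allowed token set at each decoding step. A generic sketch; the paper's decoder and its ontology lookup are not shown in the abstract.

```python
import torch

def constrained_step(logits, allowed_ids):
    """Mask one decoding step down to a dynamic vocabulary.

    logits:      (V,) raw scores over the full vocabulary
    allowed_ids: token ids permitted at this step (e.g., slot values
                 drawn from a domain ontology). The lookup producing
                 allowed_ids is an assumption.
    """
    mask = torch.full_like(logits, float("-inf"))
    mask[torch.tensor(allowed_ids)] = 0.0
    return torch.argmax(logits + mask).item()

# Toy usage: only token ids {2, 5, 7} are legal at this step.
step_logits = torch.randn(10)
print(constrained_step(step_logits, [2, 5, 7]))
```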
cs/0310047
Simona Perri
Simona Perri, Francesco Scarcello, Nicola Leone
Abductive Logic Programs with Penalization: Semantics, Complexity and Implementation
36 pages; will be published in Theory and Practice of Logic Programming
null
null
null
cs.AI
null
Abduction, first proposed in the setting of classical logics, has been studied with growing interest in the logic programming area in recent years. In this paper we study abduction with penalization in the logic programming framework. This form of abductive reasoning, which has not been previously analyzed in logic programming, turns out to represent several relevant problems, including optimization problems, very naturally. We define a formal model for abduction with penalization over logic programs, which extends the abductive framework proposed by Kakas and Mancarella. We address knowledge representation issues, encoding a number of problems in our abductive framework. In particular, we consider some relevant problems, taken from different domains, ranging from optimization theory to diagnosis and planning; their encodings turn out to be simple and elegant in our formalism. We thoroughly analyze the computational complexity of the main problems arising in the context of abduction with penalization from logic programs. Finally, we implement a system supporting the proposed abductive framework on top of the DLV engine. To this end, we design a translation from abduction problems with penalties into logic programs with weak constraints. We prove that this approach is sound and complete.
[ { "created": "Fri, 24 Oct 2003 18:03:06 GMT", "version": "v1" } ]
2007-05-23
[ [ "Perri", "Simona", "" ], [ "Scarcello", "Francesco", "" ], [ "Leone", "Nicola", "" ] ]
Abduction, first proposed in the setting of classical logics, has been studied with growing interest in the logic programming area in recent years. In this paper we study abduction with penalization in the logic programming framework. This form of abductive reasoning, which has not been previously analyzed in logic programming, turns out to represent several relevant problems, including optimization problems, very naturally. We define a formal model for abduction with penalization over logic programs, which extends the abductive framework proposed by Kakas and Mancarella. We address knowledge representation issues, encoding a number of problems in our abductive framework. In particular, we consider some relevant problems, taken from different domains, ranging from optimization theory to diagnosis and planning; their encodings turn out to be simple and elegant in our formalism. We thoroughly analyze the computational complexity of the main problems arising in the context of abduction with penalization from logic programs. Finally, we implement a system supporting the proposed abductive framework on top of the DLV engine. To this end, we design a translation from abduction problems with penalties into logic programs with weak constraints. We prove that this approach is sound and complete.
2205.04977
Marios Constantinides
Marios Constantinides, Daniele Quercia
The Future of Hybrid Meetings
10 pages, 1 figure
null
null
null
cs.HC
http://creativecommons.org/licenses/by/4.0/
Meetings are typically considered to be the fuel of an organization's productivity -- a place where employees discuss ideas and make collective decisions. However, it is no secret that meetings are also often perceived as wasteful vacuums, depleting employee morale and productivity, likely because current technologies fall short in fully supporting the physical or virtual meeting experience. In this position paper, we discuss the three key elements that make a meeting successful (i.e., execution, psychological safety, and physical comfort), and present new tools for hybrid meetings that incorporate those elements. As past research has focused on supporting meeting execution (the first element), we set the roadmap for future research on the two other elements: on psychological safety, by articulating how new technologies could make meetings useful for all participants, ensure all participants give and receive appropriate levels of attention, and enable all participants to feel and make others feel comfortable; and on physical comfort, by dwelling on how new technologies could make the meeting experience comfortable by integrating all human senses. We also discuss the potential danger of these technologies inadvertently becoming surveillance tools.
[ { "created": "Tue, 10 May 2022 15:33:49 GMT", "version": "v1" } ]
2022-05-11
[ [ "Constantinides", "Marios", "" ], [ "Quercia", "Daniele", "" ] ]
Meetings are typically considered to be the fuel of an organization's productivity -- a place where employees discuss ideas and make collective decisions. However, it is no secret that meetings are also often perceived as wasteful vacuums, depleting employee morale and productivity, likely because current technologies fall short in fully supporting the physical or virtual meeting experience. In this position paper, we discuss the three key elements that make a meeting successful (i.e., execution, psychological safety, and physical comfort), and present new tools for hybrid meetings that incorporate those elements. As past research has focused on supporting meeting execution (the first element), we set the roadmap for future research on the two other elements: on psychological safety, by articulating how new technologies could make meetings useful for all participants, ensure all participants give and receive appropriate levels of attention, and enable all participants to feel and make others feel comfortable; and on physical comfort, by dwelling on how new technologies could make the meeting experience comfortable by integrating all human senses. We also discuss the potential danger of these technologies inadvertently becoming surveillance tools.
0906.3231
EPTCS
Sergey Verlan, Yurii Rogozhin
New Choice for Small Universal Devices: Symport/Antiport P Systems
null
EPTCS 1, 2009, pp. 235-242
10.4204/EPTCS.1.23
null
cs.CC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Symport/antiport P systems provide very simple machinery inspired by corresponding operations in the living cell. It turns out that systems of small descriptional complexity suffice to achieve universality. This makes them good candidates for small universal devices replacing register machines in different simulations, especially when a simulating parallel machinery is involved. This article contains a survey of these systems and presents different trade-offs between parameters.
[ { "created": "Wed, 17 Jun 2009 16:14:54 GMT", "version": "v1" } ]
2009-06-18
[ [ "Verlan", "Sergey", "" ], [ "Rogozhin", "Yurii", "" ] ]
Symport/antiport P systems provide very simple machinery inspired by corresponding operations in the living cell. It turns out that systems of small descriptional complexity suffice to achieve universality. This makes them good candidates for small universal devices replacing register machines in different simulations, especially when a simulating parallel machinery is involved. This article contains a survey of these systems and presents different trade-offs between parameters.
2403.15209
Taeheon Kim
Taeheon Kim, Sangyun Chung, Damin Yeom, Youngjoon Yu, Hak Gu Kim, Yong Man Ro
MSCoTDet: Language-driven Multi-modal Fusion for Improved Multispectral Pedestrian Detection
This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Multispectral pedestrian detection is attractive for around-the-clock applications due to the complementary information between RGB and thermal modalities. However, current models often fail to detect pedestrians in certain cases (e.g., thermal-obscured pedestrians), particularly due to the modality bias learned from statistically biased datasets. In this paper, we investigate how to mitigate modality bias in multispectral pedestrian detection using Large Language Models (LLMs). Accordingly, we design a Multispectral Chain-of-Thought (MSCoT) prompting strategy, which prompts the LLM to perform multispectral pedestrian detection. Moreover, we propose a novel Multispectral Chain-of-Thought Detection (MSCoTDet) framework that integrates MSCoT prompting into multispectral pedestrian detection. To this end, we design a Language-driven Multi-modal Fusion (LMF) strategy that enables fusing the outputs of MSCoT prompting with the detection results of vision-based multispectral pedestrian detection models. Extensive experiments validate that MSCoTDet effectively mitigates modality biases and improves multispectral pedestrian detection.
[ { "created": "Fri, 22 Mar 2024 13:50:27 GMT", "version": "v1" }, { "created": "Wed, 29 May 2024 12:53:17 GMT", "version": "v2" } ]
2024-05-30
[ [ "Kim", "Taeheon", "" ], [ "Chung", "Sangyun", "" ], [ "Yeom", "Damin", "" ], [ "Yu", "Youngjoon", "" ], [ "Kim", "Hak Gu", "" ], [ "Ro", "Yong Man", "" ] ]
Multispectral pedestrian detection is attractive for around-the-clock applications due to the complementary information between RGB and thermal modalities. However, current models often fail to detect pedestrians in certain cases (e.g., thermal-obscured pedestrians), particularly due to the modality bias learned from statistically biased datasets. In this paper, we investigate how to mitigate modality bias in multispectral pedestrian detection using Large Language Models (LLMs). Accordingly, we design a Multispectral Chain-of-Thought (MSCoT) prompting strategy, which prompts the LLM to perform multispectral pedestrian detection. Moreover, we propose a novel Multispectral Chain-of-Thought Detection (MSCoTDet) framework that integrates MSCoT prompting into multispectral pedestrian detection. To this end, we design a Language-driven Multi-modal Fusion (LMF) strategy that enables fusing the outputs of MSCoT prompting with the detection results of vision-based multispectral pedestrian detection models. Extensive experiments validate that MSCoTDet effectively mitigates modality biases and improves multispectral pedestrian detection.
2404.08675
Yabin Zhang
Yabin Zhang, Wenhui Yu, Erhan Zhang, Xu Chen, Lantao Hu, Peng Jiang, Kun Gai
RecGPT: Generative Personalized Prompts for Sequential Recommendation via ChatGPT Training Paradigm
null
null
null
null
cs.IR cs.AI cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
ChatGPT has achieved remarkable success in natural language understanding. Considering that recommendation is indeed a conversation between users and the system, with items as words, and thus shares an underlying pattern with ChatGPT, we design a new chat framework at the item-index level for the recommendation task. Our novelty mainly comprises three parts: model, training, and inference. For the model part, we adopt the Generative Pre-training Transformer (GPT) as the sequential recommendation model and design a user module to capture personalized information. For the training part, we adopt the two-stage paradigm of ChatGPT, including pre-training and fine-tuning. In the pre-training stage, we train the GPT model by auto-regression. In the fine-tuning stage, we train the model with prompts, which include both the newly generated results from the model and the user's feedback. For the inference part, we predict several user interests as user representations in an autoregressive manner. For each interest vector, we recall several items with the highest similarity and merge the items recalled by all interest vectors into the final result. We conduct experiments with both offline public datasets and an online A/B test to demonstrate the effectiveness of our proposed method.
[ { "created": "Sat, 6 Apr 2024 12:38:54 GMT", "version": "v1" } ]
2024-04-16
[ [ "Zhang", "Yabin", "" ], [ "Yu", "Wenhui", "" ], [ "Zhang", "Erhan", "" ], [ "Chen", "Xu", "" ], [ "Hu", "Lantao", "" ], [ "Jiang", "Peng", "" ], [ "Gai", "Kun", "" ] ]
ChatGPT has achieved remarkable success in natural language understanding. Considering that recommendation is indeed a conversation between users and the system, with items as words, and thus shares an underlying pattern with ChatGPT, we design a new chat framework at the item-index level for the recommendation task. Our novelty mainly comprises three parts: model, training, and inference. For the model part, we adopt the Generative Pre-training Transformer (GPT) as the sequential recommendation model and design a user module to capture personalized information. For the training part, we adopt the two-stage paradigm of ChatGPT, including pre-training and fine-tuning. In the pre-training stage, we train the GPT model by auto-regression. In the fine-tuning stage, we train the model with prompts, which include both the newly generated results from the model and the user's feedback. For the inference part, we predict several user interests as user representations in an autoregressive manner. For each interest vector, we recall several items with the highest similarity and merge the items recalled by all interest vectors into the final result. We conduct experiments with both offline public datasets and an online A/B test to demonstrate the effectiveness of our proposed method.
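The inference stage described in the abstract (recall per interest vector, then merge) can be sketched with plain dot-product similarity; the similarity measure, the dedup rule, and all names are assumptions.

```python
import numpy as np

def recall_and_merge(interests, item_embs, k=10):
    """Recall top-k items per interest vector and merge the results.

    interests: (M, d) user-interest vectors predicted autoregressively
    item_embs: (N, d) item embedding table
    """
    merged, seen = [], set()
    for v in interests:
        scores = item_embs @ v                 # similarity to all items
        for item in np.argsort(-scores)[:k]:
            if item not in seen:               # dedupe across interests
                seen.add(item)
                merged.append(int(item))
    return merged

# Toy usage: 3 interest vectors over 100 items in a 16-dim space.
rng = np.random.default_rng(0)
result = recall_and_merge(rng.normal(size=(3, 16)),
                          rng.normal(size=(100, 16)))
print(result[:5])
```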
2404.10780
Muhammad Shoaib Farooq
Muhammad Shoaib Farooq, Hina jabbar
Phishing Website Detection Using a Combined Model of ANN and LSTM
9 pages, 5 figures
null
null
null
cs.CR cs.CY
http://creativecommons.org/licenses/by/4.0/
In this digital era, our lives depend heavily on the internet and worldwide technology. The wide use of technology and communication platforms makes our lives better and easier, but it also brings security issues and malicious activities, phishing being one of them. Phishing is a type of cybercrime that aims to steal the personal information of computer users and enterprises through fake websites that are copies of the original ones. Attackers use personal information such as account IDs, passwords, and usernames to carry out fraudulent activities against computer users. To overcome this problem, researchers have focused on machine learning and deep learning approaches. In our study, we use machine learning and deep learning models to identify fake web pages on a secondary dataset.
[ { "created": "Sun, 24 Mar 2024 14:46:02 GMT", "version": "v1" } ]
2024-04-18
[ [ "Farooq", "Muhammad Shoaib", "" ], [ "jabbar", "Hina", "" ] ]
In this digital era, our lives depend heavily on the internet and worldwide technology. The wide use of technology and communication platforms makes our lives better and easier, but it also brings security issues and malicious activities, phishing being one of them. Phishing is a type of cybercrime that aims to steal the personal information of computer users and enterprises through fake websites that are copies of the original ones. Attackers use personal information such as account IDs, passwords, and usernames to carry out fraudulent activities against computer users. To overcome this problem, researchers have focused on machine learning and deep learning approaches. In our study, we use machine learning and deep learning models to identify fake web pages on a secondary dataset.
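One plausible reading of "a combined model of ANN and LSTM" is an LSTM over URL characters joined with a dense branch over handcrafted features. A PyTorch sketch under that assumption; layer sizes and the feature split are not specified in the abstract.

```python
import torch
import torch.nn as nn

class PhishNet(nn.Module):
    """A plausible 'ANN + LSTM' combination: an LSTM over URL characters
    plus a dense branch over handcrafted features, concatenated into a
    binary classifier. All dimensions here are assumptions."""

    def __init__(self, vocab_size=128, embed=32, hidden=64, n_feats=10):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed)
        self.lstm = nn.LSTM(embed, hidden, batch_first=True)
        self.ann = nn.Sequential(nn.Linear(n_feats, 32), nn.ReLU())
        self.head = nn.Linear(hidden + 32, 1)

    def forward(self, url_ids, feats):
        _, (h, _) = self.lstm(self.embed(url_ids))   # final hidden state
        joint = torch.cat([h[-1], self.ann(feats)], dim=1)
        return torch.sigmoid(self.head(joint))       # phishing probability

# Toy usage: batch of 4 URLs (length 50) with 10 handcrafted features each.
model = PhishNet()
p = model(torch.randint(0, 128, (4, 50)), torch.randn(4, 10))
print(p.shape)  # torch.Size([4, 1])
```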
2301.12695
Jiahao He
Jiahao He, Shuangyin Li, Xinming Wang, Shing-Chi Cheung, Gansen Zhao and Jinji Yang
Neural-FEBI: Accurate Function Identification in Ethereum Virtual Machine Bytecode
19 pages, 13 figures
null
10.1016/j.jss.2023.111627
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Millions of smart contracts have been deployed on the Ethereum platform, making them potential attack targets. Analyzing contract binaries is therefore vital, since their sources are unavailable; a key step is function identification, comprising function entry identification and function boundary detection. Such boundaries are critical to many smart contract applications, e.g., reverse engineering and profiling. Unfortunately, it is challenging to identify functions in these stripped contract binaries due to the lack of internal function call statements and compiler-induced instruction reshuffling. Recently, several existing works have relied heavily on a fixed set of handcrafted heuristic rules, which introduces several faults. To address this issue, we propose a novel neural network-based framework for EVM bytecode Function Entries and Boundaries Identification (neural-FEBI) that does not rely on a fixed set of handcrafted rules. Instead, it uses a two-level bi-directional Long Short-Term Memory network and a Conditional Random Field network to locate the function entries. The suggested framework also devises a control flow traversal algorithm to determine the code segments reachable from a function entry as its boundary. Experiments on 38,996 publicly available smart contracts collected as binaries demonstrate that neural-FEBI achieves F1-scores between 88.3 and 99.7 on the function entry identification task across different datasets. Its performance on the function boundary identification task also improves from 79.4% to 97.1% compared with the state-of-the-art. We further demonstrate that the identified function information can be used to construct more accurate intra-procedural CFGs and call graphs. The experimental results confirm that the proposed framework significantly outperforms the state-of-the-art, which is often based on handcrafted heuristic rules.
[ { "created": "Mon, 30 Jan 2023 07:02:44 GMT", "version": "v1" }, { "created": "Wed, 1 Feb 2023 08:53:03 GMT", "version": "v2" } ]
2023-02-02
[ [ "He", "Jiahao", "" ], [ "Li", "Shuangyin", "" ], [ "Wang", "Xinming", "" ], [ "Cheung", "Shing-Chi", "" ], [ "Zhao", "Gansen", "" ], [ "Yang", "Jinji", "" ] ]
Millions of smart contracts have been deployed onto the Ethereum platform, making them potential attack targets. Analyzing contract binaries is therefore vital, since their sources are often unavailable; a key step is function identification, which comprises identifying function entries and detecting their boundaries. Such boundaries are critical to many smart contract applications, e.g. reverse engineering and profiling. Unfortunately, it is challenging to identify functions in these stripped contract binaries due to the lack of internal function call statements and compiler-induced instruction reshuffling. Several existing works rely heavily on a set of handcrafted heuristic rules, which introduces several faults. To address this issue, we propose a novel neural network-based framework for EVM bytecode Function Entries and Boundaries Identification (neural-FEBI) that does not rely on a fixed set of handcrafted rules. Instead, it uses a two-level bidirectional Long Short-Term Memory network and a Conditional Random Field network to locate the function entries. The framework also devises a control flow traversal algorithm that determines the code segments reachable from a function entry as its boundary. Experiments on 38,996 publicly available smart contracts collected as binaries demonstrate that neural-FEBI achieves F1-scores between 88.3 and 99.7 on the function entry identification task across different datasets. Its performance on the function boundary identification task is also improved from 79.4% to 97.1% compared with the state-of-the-art. We further demonstrate that the identified function information can be used to construct more accurate intra-procedural CFGs and call graphs. The experimental results confirm that the proposed framework significantly outperforms state-of-the-art approaches, which are often based on handcrafted heuristic rules.
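The boundary-detection step described above reduces to graph reachability, which a short sketch can make concrete. The control-flow graph and entry labels below are hypothetical stand-ins, not real EVM bytecode, and the learned entry-identification stage is assumed to have already run.

```python
# Minimal sketch of the boundary idea: given a control-flow graph and a
# function entry, the function's boundary is the set of basic blocks
# reachable from that entry without crossing into other functions.
# The graph and labels below are hypothetical, not real EVM bytecode.
from collections import deque

def function_boundary(cfg: dict, entry: str, other_entries: set) -> set:
    """BFS from `entry`, stopping at blocks that belong to other functions."""
    boundary, queue = {entry}, deque([entry])
    while queue:
        block = queue.popleft()
        for succ in cfg.get(block, []):
            if succ in other_entries or succ in boundary:
                continue  # do not cross into another function's entry
            boundary.add(succ)
            queue.append(succ)
    return boundary

cfg = {
    "f_entry": ["b1", "b2"],
    "b1": ["b3"],
    "b2": ["b3"],
    "b3": [],          # falls through to the next function in the binary
    "g_entry": ["b4"],
}
print(function_boundary(cfg, "f_entry", other_entries={"g_entry"}))
# -> {'f_entry', 'b1', 'b2', 'b3'}
```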
1902.01031
Md Ashraful Alam Milton
Md Ashraful Alam Milton
Towards Pedestrian Detection Using RetinaNet in ECCV 2018 Wider Pedestrian Detection Challenge
ECCV Wider pedestrian detection challenege submission
null
null
null
cs.CV
http://creativecommons.org/publicdomain/zero/1.0/
This paper investigates the performance of RetinaNet-based object detectors on pedestrian detection. Pedestrian detection is an important research topic, as it provides a baseline for general object detection and has many practical applications such as autonomous cars, robotics, and security cameras. Although extensive research has made huge progress in pedestrian detection, many issues remain open for further research and improvement. Recent deep learning based methods have shown state-of-the-art performance in computer vision tasks such as image classification, object detection, and segmentation. The Wider pedestrian detection challenge aims at finding improved solutions to the pedestrian detection problem. In this paper, we propose a pedestrian detection system based on RetinaNet. Our solution scored 0.4061 mAP. The code is available at https://github.com/miltonbd/ECCV_2018_pedestrian_detection_challenege.
[ { "created": "Mon, 4 Feb 2019 04:49:59 GMT", "version": "v1" } ]
2019-02-05
[ [ "Milton", "Md Ashraful Alam", "" ] ]
This paper investigates the performance of RetinaNet-based object detectors on pedestrian detection. Pedestrian detection is an important research topic, as it provides a baseline for general object detection and has many practical applications such as autonomous cars, robotics, and security cameras. Although extensive research has made huge progress in pedestrian detection, many issues remain open for further research and improvement. Recent deep learning based methods have shown state-of-the-art performance in computer vision tasks such as image classification, object detection, and segmentation. The Wider pedestrian detection challenge aims at finding improved solutions to the pedestrian detection problem. In this paper, we propose a pedestrian detection system based on RetinaNet. Our solution scored 0.4061 mAP. The code is available at https://github.com/miltonbd/ECCV_2018_pedestrian_detection_challenege.
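RetinaNet's defining ingredient is the focal loss, which down-weights easy background anchors so a one-stage detector can cope with the extreme foreground-background imbalance typical of pedestrian scenes. A numpy sketch of the binary focal loss follows; the alpha and gamma values are the commonly used defaults, assumed rather than taken from this submission.

```python
# Sketch of the binary focal loss used by RetinaNet-style detectors.
# alpha/gamma follow the original focal-loss paper's defaults; whether the
# challenge submission changed them is not stated in the abstract.
import numpy as np

def focal_loss(p: np.ndarray, y: np.ndarray, alpha: float = 0.25, gamma: float = 2.0) -> float:
    """Mean focal loss for predicted probabilities p and binary labels y."""
    p = np.clip(p, 1e-7, 1 - 1e-7)          # numerical safety
    p_t = np.where(y == 1, p, 1 - p)        # probability of the true class
    alpha_t = np.where(y == 1, alpha, 1 - alpha)
    loss = -alpha_t * (1 - p_t) ** gamma * np.log(p_t)
    return float(loss.mean())

p = np.array([0.9, 0.1, 0.6, 0.95])  # detector confidences
y = np.array([1, 0, 1, 0])           # 1 = pedestrian anchor, 0 = background
print(focal_loss(p, y))              # hard examples (small p_t) dominate
```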
2206.09977
Mohamad Kazem Shirani Faradonbeh
Mohamad Kazem Shirani Faradonbeh, Mohamad Sadegh Shirani Faradonbeh, Mohsen Bayati
Thompson Sampling Efficiently Learns to Control Diffusion Processes
null
null
null
null
cs.LG cs.AI cs.SY eess.SY math.DS math.OC
http://creativecommons.org/licenses/by/4.0/
Diffusion processes that evolve according to linear stochastic differential equations are an important family of continuous-time dynamic decision-making models. Optimal policies are well-studied for them, under full certainty about the drift matrices. However, little is known about data-driven control of diffusion processes with uncertain drift matrices as conventional discrete-time analysis techniques are not applicable. In addition, while the task can be viewed as a reinforcement learning problem involving exploration and exploitation trade-off, ensuring system stability is a fundamental component of designing optimal policies. We establish that the popular Thompson sampling algorithm learns optimal actions fast, incurring only a square-root of time regret, and also stabilizes the system in a short time period. To the best of our knowledge, this is the first such result for Thompson sampling in a diffusion process control problem. We validate our theoretical results through empirical simulations with real parameter matrices from two settings of airplane and blood glucose control. Moreover, we observe that Thompson sampling significantly improves (worst-case) regret, compared to the state-of-the-art algorithms, suggesting Thompson sampling explores in a more guarded fashion. Our theoretical analysis involves characterization of a certain optimality manifold that ties the local geometry of the drift parameters to the optimal control of the diffusion process. We expect this technique to be of broader interest.
[ { "created": "Mon, 20 Jun 2022 19:42:49 GMT", "version": "v1" } ]
2022-06-22
[ [ "Faradonbeh", "Mohamad Kazem Shirani", "" ], [ "Faradonbeh", "Mohamad Sadegh Shirani", "" ], [ "Bayati", "Mohsen", "" ] ]
Diffusion processes that evolve according to linear stochastic differential equations are an important family of continuous-time dynamic decision-making models. Optimal policies are well-studied for them, under full certainty about the drift matrices. However, little is known about data-driven control of diffusion processes with uncertain drift matrices as conventional discrete-time analysis techniques are not applicable. In addition, while the task can be viewed as a reinforcement learning problem involving exploration and exploitation trade-off, ensuring system stability is a fundamental component of designing optimal policies. We establish that the popular Thompson sampling algorithm learns optimal actions fast, incurring only a square-root of time regret, and also stabilizes the system in a short time period. To the best of our knowledge, this is the first such result for Thompson sampling in a diffusion process control problem. We validate our theoretical results through empirical simulations with real parameter matrices from two settings of airplane and blood glucose control. Moreover, we observe that Thompson sampling significantly improves (worst-case) regret, compared to the state-of-the-art algorithms, suggesting Thompson sampling explores in a more guarded fashion. Our theoretical analysis involves characterization of a certain optimality manifold that ties the local geometry of the drift parameters to the optimal control of the diffusion process. We expect this technique to be of broader interest.
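To make the algorithm concrete, here is a toy one-dimensional sketch: Euler-Maruyama simulation of a controlled diffusion, a conjugate Gaussian posterior over the unknown drift, and an LQR gain computed from a posterior sample at the start of each episode. The dynamics, cost weights, step size, and episode length are illustrative assumptions, not the paper's setting.

```python
# Toy sketch of Thompson sampling for a scalar controlled diffusion
#   dx = (a*x + b*u) dt + sigma dW,   with unknown drift a and known b.
# All specifics (dynamics, cost weights, step size, episode length) are
# illustrative assumptions, not the paper's actual setting.
import numpy as np

rng = np.random.default_rng(0)
a_true, b, sigma = 0.5, 1.0, 0.2      # unstable open-loop drift
q, r, dt, T = 1.0, 1.0, 0.01, 2000    # quadratic cost weights, discretization

def lqr_gain(a: float) -> float:
    """Certainty-equivalent LQR gain for dx = (a x + b u) dt, cost q x^2 + r u^2."""
    return (a + np.sqrt(a**2 + b**2 * q / r)) / b

# Conjugate Gaussian posterior over `a` (prior N(0, 1)).
precision, mean_num = 1.0, 0.0
x, episode_len = 1.0, 100

for t in range(T):
    if t % episode_len == 0:                      # resample at episode starts
        a_sample = rng.normal(mean_num / precision, 1.0 / np.sqrt(precision))
        k = lqr_gain(a_sample)                    # Thompson-sampled policy
    u = -k * x
    dx = (a_true * x + b * u) * dt + sigma * np.sqrt(dt) * rng.normal()
    # Bayesian linear regression update: (dx - b u dt) = a * (x dt) + noise.
    phi, y = x * dt, dx - b * u * dt
    precision += phi**2 / (sigma**2 * dt)
    mean_num += phi * y / (sigma**2 * dt)
    x += dx

print("posterior mean of a:", mean_num / precision, "(true:", a_true, ")")
```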
1711.10713
Lse Lse
Damien Pollet (1), St\'ephane Ducasse (1) ((1) RMOD)
A critical analysis of string APIs: The case of Pharo
Science of Computer Programming, Elsevier, 2017
null
10.1016/j.scico.2017.11.005
null
cs.PL cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Most programming languages, besides C, provide a native abstraction for character strings, but string APIs vary widely in size, expressiveness, and subjective convenience across languages. In Pharo, while at first glance the API of the String class seems rich, it often feels cumbersome in practice; to improve its usability, we faced the challenge of assessing its design. However, we found hardly any guideline about design forces and how they structure the design space, and no comprehensive analysis of the expected string operations and their different variations. In this article, we first analyse the Pharo 4 String library, then contrast it with its Haskell, Java, Python, Ruby, and Rust counterparts. We harvest criteria to describe a string API, and reflect on features and design tensions. This analysis should help language designers in understanding the design space of strings, and will serve as a basis for a future redesign of the string library in Pharo.
[ { "created": "Wed, 29 Nov 2017 07:48:12 GMT", "version": "v1" } ]
2019-04-30
[ [ "Pollet", "Damien", "", "RMOD" ], [ "Ducasse", "Stéphane", "", "RMOD" ] ]
Most programming languages, besides C, provide a native abstraction for character strings, but string APIs vary widely in size, expressiveness, and subjective convenience across languages. In Pharo, while at first glance the API of the String class seems rich, it often feels cumbersome in practice; to improve its usability, we faced the challenge of assessing its design. However, we found hardly any guideline about design forces and how they structure the design space, and no comprehensive analysis of the expected string operations and their different variations. In this article, we first analyse the Pharo 4 String library, then contrast it with its Haskell, Java, Python, Ruby, and Rust counterparts. We harvest criteria to describe a string API, and reflect on features and design tensions. This analysis should help language designers in understanding the design space of strings, and will serve as a basis for a future redesign of the string library in Pharo.
2106.15335
Pierre Tholoniat
Tao Luo, Mingen Pan, Pierre Tholoniat, Asaf Cidon, Roxana Geambasu, Mathias L\'ecuyer
Privacy Budget Scheduling
Extended version of a paper presented at the 15th USENIX Symposium on Operating Systems Design and Implementation (OSDI '21)
null
null
null
cs.CR cs.DC cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Machine learning (ML) models trained on personal data have been shown to leak information about users. Differential privacy (DP) enables model training with a guaranteed bound on this leakage. Each new model trained with DP increases the bound on data leakage and can be seen as consuming part of a global privacy budget that should not be exceeded. This budget is a scarce resource that must be carefully managed to maximize the number of successfully trained models. We describe PrivateKube, an extension to the popular Kubernetes datacenter orchestrator that adds privacy as a new type of resource to be managed alongside other traditional compute resources, such as CPU, GPU, and memory. The abstractions we design for the privacy resource mirror those defined by Kubernetes for traditional resources, but there are also major differences. For example, traditional compute resources are replenishable while privacy is not: a CPU can be regained after a model finishes execution while privacy budget cannot. This distinction forces a re-design of the scheduler. We present DPF (Dominant Private Block Fairness) -- a variant of the popular Dominant Resource Fairness (DRF) algorithm -- that is geared toward the non-replenishable privacy resource but enjoys similar theoretical properties as DRF. We evaluate PrivateKube and DPF on microbenchmarks and an ML workload on Amazon Reviews data. Compared to existing baselines, DPF allows training more models under the same global privacy guarantee. This is especially true for DPF over R\'enyi DP, a highly composable form of DP.
[ { "created": "Tue, 29 Jun 2021 12:43:47 GMT", "version": "v1" } ]
2021-06-30
[ [ "Luo", "Tao", "" ], [ "Pan", "Mingen", "" ], [ "Tholoniat", "Pierre", "" ], [ "Cidon", "Asaf", "" ], [ "Geambasu", "Roxana", "" ], [ "Lécuyer", "Mathias", "" ] ]
Machine learning (ML) models trained on personal data have been shown to leak information about users. Differential privacy (DP) enables model training with a guaranteed bound on this leakage. Each new model trained with DP increases the bound on data leakage and can be seen as consuming part of a global privacy budget that should not be exceeded. This budget is a scarce resource that must be carefully managed to maximize the number of successfully trained models. We describe PrivateKube, an extension to the popular Kubernetes datacenter orchestrator that adds privacy as a new type of resource to be managed alongside other traditional compute resources, such as CPU, GPU, and memory. The abstractions we design for the privacy resource mirror those defined by Kubernetes for traditional resources, but there are also major differences. For example, traditional compute resources are replenishable while privacy is not: a CPU can be regained after a model finishes execution while privacy budget cannot. This distinction forces a re-design of the scheduler. We present DPF (Dominant Private Block Fairness) -- a variant of the popular Dominant Resource Fairness (DRF) algorithm -- that is geared toward the non-replenishable privacy resource but enjoys similar theoretical properties as DRF. We evaluate PrivateKube and DPF on microbenchmarks and an ML workload on Amazon Reviews data. Compared to existing baselines, DPF allows training more models under the same global privacy guarantee. This is especially true for DPF over R\'enyi DP, a highly composable form of DP.
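A minimal sketch of the dominant-share idea behind DPF follows: each data block carries a finite epsilon budget, each pipeline demands epsilon from some blocks, and jobs are granted in order of their largest fractional demand on any block. The block granularity, job list, and accounting below are simplified assumptions, not PrivateKube's actual interfaces.

```python
# Simplified sketch of DPF-style scheduling over non-replenishable privacy
# budget blocks. Block granularity, epsilon accounting, and the job demands
# below are illustrative assumptions, not PrivateKube's actual interfaces.

def dpf_schedule(capacity: dict, jobs: list) -> list:
    """Grant jobs in order of smallest dominant privacy share.

    capacity: remaining epsilon per data block, e.g. {"day1": 1.0}.
    jobs: (name, demand) pairs, where demand maps block -> epsilon requested.
    """
    total = dict(capacity)  # original budgets, for computing shares
    # Dominant share = largest fraction of any single block's budget requested.
    ordered = sorted(jobs, key=lambda j: max(e / total[b] for b, e in j[1].items()))
    granted = []
    for name, demand in ordered:
        if all(capacity[b] >= e for b, e in demand.items()):
            for b, e in demand.items():
                capacity[b] -= e  # privacy, unlike CPU, is never returned
            granted.append(name)
    return granted

blocks = {"reviews-day1": 1.0, "reviews-day2": 1.0}
jobs = [
    ("train-small", {"reviews-day1": 0.2}),
    ("train-large", {"reviews-day1": 0.9, "reviews-day2": 0.9}),
    ("eval",        {"reviews-day2": 0.1}),
]
print(dpf_schedule(blocks, jobs))  # small-share jobs are granted first
```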
2203.10217
Stefan Scherzinger
Stefan Scherzinger and Jakob Weinland and Robert Wilbrandt and Pascal Becker and Arne Roennau and R\"udiger Dillmann
A Walking Space Robot for On-Orbit Satellite Servicing: The ReCoBot
7 pages, 9 figures, submitted to the 18th IEEE International Conference on Automation Science and Engineering (CASE)
null
null
null
cs.RO
http://creativecommons.org/licenses/by/4.0/
A key factor in the economic efficiency of satellites is their availability in orbit. Replacing standardized building blocks, such as empty fuel tanks or outdated electronic modules, could greatly extend the satellites' lifetime. This, however, requires flexible robots that can locomote on the surface of these satellites for optimal accessibility and manipulation. This paper introduces ReCoBot, a 7-axis walking space manipulator for locomotion and manipulation. The robot can connect to compatible structures with its symmetric ends and provides interfaces for manual teleoperation and motion planning with a constantly changing base and tip. We build on open-source robotics software and easily available components to evaluate the overall concept with an early stage demonstrator. The proposed manipulator has a length of 1.20 m and a weight of 10.4 kg and successfully locomotes over a satellite mockup in our lab environment.
[ { "created": "Sat, 19 Mar 2022 02:29:11 GMT", "version": "v1" } ]
2022-03-22
[ [ "Scherzinger", "Stefan", "" ], [ "Weinland", "Jakob", "" ], [ "Wilbrandt", "Robert", "" ], [ "Becker", "Pascal", "" ], [ "Roennau", "Arne", "" ], [ "Dillmann", "Rüdiger", "" ] ]
A key factor in the economic efficiency of satellites is their availability in orbit. Replacing standardized building blocks, such as empty fuel tanks or outdated electronic modules, could greatly extend the satellites' lifetime. This, however, requires flexible robots that can locomote on the surface of these satellites for optimal accessibility and manipulation. This paper introduces ReCoBot, a 7-axis walking space manipulator for locomotion and manipulation. The robot can connect to compatible structures with its symmetric ends and provides interfaces for manual teleoperation and motion planning with a constantly changing base and tip. We build on open-source robotics software and easily available components to evaluate the overall concept with an early stage demonstrator. The proposed manipulator has a length of 1.20 m and a weight of 10.4 kg and successfully locomotes over a satellite mockup in our lab environment.
1411.2883
Namita Jain Mrs
Namita Jain and C.A. Murthy
A new estimate of mutual information based measure of dependence between two variables: properties and fast implementation
International Journal of Machine Learning and Cybernetics, Springer Berlin Heidelberg, 10-Sep-2015
null
10.1007/s13042-015-0418-6
null
cs.IT cs.LG math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This article proposes a new method to estimate an existing mutual information based dependence measure using histogram density estimates. Finding a suitable bin length for a histogram is an open problem. We propose a new way of computing the bin length for the histogram using a function of the maximum separation between points. The chosen bin length leads to consistent density estimates for the histogram method. The density values thus obtained are used to calculate an estimate of an existing dependence measure. The proposed estimate is named the Mutual Information Based Dependence Index (MIDI). Some important properties of MIDI are also stated. The performance of the proposed method has been compared to generally accepted measures such as Distance Correlation (dcor) and the Maximal Information Coefficient (MINE) in terms of accuracy and computational complexity, with the help of several artificial data sets with different amounts of noise. The proposed method is able to detect many types of relationships between variables without making any assumption about the functional form of the relationship. The power statistics of the proposed method illustrate its effectiveness in detecting nonlinear relationships; it thus achieves generality without a high rate of false positives. MIDI is found to work better on a real-life data set than competing methods, and to overcome some of the limitations that occur with dcor and MINE. Computationally, MIDI outperforms dcor and MINE in both time and memory, making it suitable for large data sets.
[ { "created": "Tue, 28 Oct 2014 12:49:24 GMT", "version": "v1" }, { "created": "Fri, 9 Jan 2015 02:31:35 GMT", "version": "v2" }, { "created": "Fri, 21 Aug 2015 09:04:18 GMT", "version": "v3" }, { "created": "Mon, 14 Sep 2015 02:36:58 GMT", "version": "v4" } ]
2015-09-15
[ [ "Jain", "Namita", "" ], [ "Murthy", "C. A.", "" ] ]
This article proposes a new method to estimate an existing mutual information based dependence measure using histogram density estimates. Finding a suitable bin length for a histogram is an open problem. We propose a new way of computing the bin length for the histogram using a function of the maximum separation between points. The chosen bin length leads to consistent density estimates for the histogram method. The density values thus obtained are used to calculate an estimate of an existing dependence measure. The proposed estimate is named the Mutual Information Based Dependence Index (MIDI). Some important properties of MIDI are also stated. The performance of the proposed method has been compared to generally accepted measures such as Distance Correlation (dcor) and the Maximal Information Coefficient (MINE) in terms of accuracy and computational complexity, with the help of several artificial data sets with different amounts of noise. The proposed method is able to detect many types of relationships between variables without making any assumption about the functional form of the relationship. The power statistics of the proposed method illustrate its effectiveness in detecting nonlinear relationships; it thus achieves generality without a high rate of false positives. MIDI is found to work better on a real-life data set than competing methods, and to overcome some of the limitations that occur with dcor and MINE. Computationally, MIDI outperforms dcor and MINE in both time and memory, making it suitable for large data sets.
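A sketch of the histogram-based estimator follows. The abstract does not give the exact function of maximum separation, so the bin-length rule below (a fixed multiple of the largest gap between sorted points) is an assumption standing in for the paper's choice.

```python
# Sketch of a histogram-based estimate of mutual information between two
# variables. The bin-length rule (scale * largest gap between sorted points)
# is an assumed stand-in for the paper's "function of maximum separation".
import numpy as np

def bin_edges(v: np.ndarray, scale: float = 10.0) -> np.ndarray:
    """Assumed rule: bin width = scale * (largest nearest-neighbour gap)."""
    width = scale * max(np.diff(np.sort(v)).max(), 1e-12)
    return np.arange(v.min(), v.max() + width, width)

def mi_histogram(x: np.ndarray, y: np.ndarray) -> float:
    """Plug-in MI estimate from a 2-D histogram density."""
    pxy, _, _ = np.histogram2d(x, y, bins=[bin_edges(x), bin_edges(y)])
    pxy = pxy / pxy.sum()
    px, py = pxy.sum(axis=1, keepdims=True), pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, 2000)
print("independent:", mi_histogram(x, rng.uniform(-1, 1, 2000)))  # near 0
print("nonlinear  :", mi_histogram(x, x**2 + 0.05 * rng.normal(size=2000)))
```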
2003.05691
Yiduo Wang
Milad Ramezani, Yiduo Wang, Marco Camurri, David Wisth, Matias Mattamala and Maurice Fallon
The Newer College Dataset: Handheld LiDAR, Inertial and Vision with Ground Truth
null
null
10.1109/IROS45743.2020.9340849
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we present a large dataset with a variety of mobile mapping sensors collected using a handheld device carried at typical walking speeds for nearly 2.2 km through New College, Oxford. The dataset includes data from two commercially available devices - a stereoscopic-inertial camera and a multi-beam 3D LiDAR, which also provides inertial measurements. Additionally, we used a tripod-mounted survey grade LiDAR scanner to capture a detailed millimeter-accurate 3D map of the test location (containing $\sim$290 million points). Using the map we inferred centimeter-accurate 6 Degree of Freedom (DoF) ground truth for the position of the device for each LiDAR scan to enable better evaluation of LiDAR and vision localization, mapping and reconstruction systems. This ground truth is the particular novel contribution of this dataset and we believe that it will enable systematic evaluation which many similar datasets have lacked. The dataset combines built environments, open spaces and vegetated areas so as to test localization and mapping systems such as vision-based navigation, visual and LiDAR SLAM, 3D LiDAR reconstruction and appearance-based place recognition. The dataset is available at: ori.ox.ac.uk/datasets/newer-college-dataset
[ { "created": "Thu, 12 Mar 2020 10:17:16 GMT", "version": "v1" }, { "created": "Thu, 30 Jun 2022 14:33:50 GMT", "version": "v2" } ]
2022-07-01
[ [ "Ramezani", "Milad", "" ], [ "Wang", "Yiduo", "" ], [ "Camurri", "Marco", "" ], [ "Wisth", "David", "" ], [ "Mattamala", "Matias", "" ], [ "Fallon", "Maurice", "" ] ]
In this paper we present a large dataset with a variety of mobile mapping sensors collected using a handheld device carried at typical walking speeds for nearly 2.2 km through New College, Oxford. The dataset includes data from two commercially available devices - a stereoscopic-inertial camera and a multi-beam 3D LiDAR, which also provides inertial measurements. Additionally, we used a tripod-mounted survey grade LiDAR scanner to capture a detailed millimeter-accurate 3D map of the test location (containing $\sim$290 million points). Using the map we inferred centimeter-accurate 6 Degree of Freedom (DoF) ground truth for the position of the device for each LiDAR scan to enable better evaluation of LiDAR and vision localization, mapping and reconstruction systems. This ground truth is the particular novel contribution of this dataset and we believe that it will enable systematic evaluation which many similar datasets have lacked. The dataset combines built environments, open spaces and vegetated areas so as to test localization and mapping systems such as vision-based navigation, visual and LiDAR SLAM, 3D LiDAR reconstruction and appearance-based place recognition. The dataset is available at: ori.ox.ac.uk/datasets/newer-college-dataset
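Centimeter-accurate 6-DoF ground truth enables evaluations such as absolute trajectory error (ATE). The sketch below aligns an estimated trajectory to ground truth with a least-squares rigid fit (Kabsch, rotation plus translation, no scale) and reports the RMSE; the trajectories are synthetic placeholders, not files from the dataset.

```python
# Sketch of the kind of evaluation the ground truth enables: absolute
# trajectory error (ATE) after a least-squares rigid alignment (Kabsch).
# The trajectories below are synthetic placeholders, not dataset files.
import numpy as np

def align_and_ate(est: np.ndarray, gt: np.ndarray) -> float:
    """est, gt: (N, 3) positions. Returns RMSE after rigid alignment."""
    mu_e, mu_g = est.mean(0), gt.mean(0)
    H = (est - mu_e).T @ (gt - mu_g)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                      # best-fit rotation (no reflection)
    t = mu_g - R @ mu_e                     # best-fit translation
    residual = (R @ est.T).T + t - gt
    return float(np.sqrt((residual**2).sum(axis=1).mean()))

rng = np.random.default_rng(0)
gt = np.cumsum(rng.normal(size=(500, 3)) * 0.1, axis=0)   # fake trajectory
est = gt + rng.normal(scale=0.05, size=gt.shape)          # noisy estimate
print("ATE (m):", align_and_ate(est, gt))
```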
2104.14421
Andrew Wilson
Pavel Izmailov, Sharad Vikram, Matthew D. Hoffman, Andrew Gordon Wilson
What Are Bayesian Neural Network Posteriors Really Like?
null
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The posterior over Bayesian neural network (BNN) parameters is extremely high-dimensional and non-convex. For computational reasons, researchers approximate this posterior using inexpensive mini-batch methods such as mean-field variational inference or stochastic-gradient Markov chain Monte Carlo (SGMCMC). To investigate foundational questions in Bayesian deep learning, we instead use full-batch Hamiltonian Monte Carlo (HMC) on modern architectures. We show that (1) BNNs can achieve significant performance gains over standard training and deep ensembles; (2) a single long HMC chain can provide a comparable representation of the posterior to multiple shorter chains; (3) in contrast to recent studies, we find posterior tempering is not needed for near-optimal performance, with little evidence for a "cold posterior" effect, which we show is largely an artifact of data augmentation; (4) BMA performance is robust to the choice of prior scale, and relatively similar for diagonal Gaussian, mixture of Gaussian, and logistic priors; (5) Bayesian neural networks show surprisingly poor generalization under domain shift; (6) while cheaper alternatives such as deep ensembles and SGMCMC methods can provide good generalization, they provide distinct predictive distributions from HMC. Notably, deep ensemble predictive distributions are similarly close to HMC as standard SGLD, and closer than standard variational inference.
[ { "created": "Thu, 29 Apr 2021 15:38:46 GMT", "version": "v1" } ]
2021-04-30
[ [ "Izmailov", "Pavel", "" ], [ "Vikram", "Sharad", "" ], [ "Hoffman", "Matthew D.", "" ], [ "Wilson", "Andrew Gordon", "" ] ]
The posterior over Bayesian neural network (BNN) parameters is extremely high-dimensional and non-convex. For computational reasons, researchers approximate this posterior using inexpensive mini-batch methods such as mean-field variational inference or stochastic-gradient Markov chain Monte Carlo (SGMCMC). To investigate foundational questions in Bayesian deep learning, we instead use full-batch Hamiltonian Monte Carlo (HMC) on modern architectures. We show that (1) BNNs can achieve significant performance gains over standard training and deep ensembles; (2) a single long HMC chain can provide a comparable representation of the posterior to multiple shorter chains; (3) in contrast to recent studies, we find posterior tempering is not needed for near-optimal performance, with little evidence for a "cold posterior" effect, which we show is largely an artifact of data augmentation; (4) BMA performance is robust to the choice of prior scale, and relatively similar for diagonal Gaussian, mixture of Gaussian, and logistic priors; (5) Bayesian neural networks show surprisingly poor generalization under domain shift; (6) while cheaper alternatives such as deep ensembles and SGMCMC methods can provide good generalization, they provide distinct predictive distributions from HMC. Notably, deep ensemble predictive distributions are similarly close to HMC as standard SGLD, and closer than standard variational inference.
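The sampler at the heart of the study can be sketched compactly. Below is full-batch HMC on a toy 2-D Gaussian log-posterior standing in for a BNN posterior: leapfrog integration followed by a Metropolis correction. Step size and trajectory length are arbitrary illustrative choices.

```python
# Minimal full-batch Hamiltonian Monte Carlo sketch on a toy log-posterior
# (a 2-D Gaussian stands in for a BNN posterior). Step size and number of
# leapfrog steps are arbitrary illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
cov_inv = np.linalg.inv(np.array([[1.0, 0.8], [0.8, 1.0]]))

def grad_neg_log_post(q):          # gradient of the potential energy
    return cov_inv @ q

def hmc_step(q, step=0.1, n_leapfrog=20):
    p = rng.normal(size=q.shape)   # resample momentum
    q_new, p_new = q.copy(), p.copy()
    for _ in range(n_leapfrog):    # leapfrog integration
        p_new -= 0.5 * step * grad_neg_log_post(q_new)
        q_new += step * p_new
        p_new -= 0.5 * step * grad_neg_log_post(q_new)
    # Metropolis correction for discretization error.
    def H(q, p): return 0.5 * q @ cov_inv @ q + 0.5 * p @ p
    accept = rng.random() < np.exp(H(q, p) - H(q_new, p_new))
    return (q_new, accept) if accept else (q, accept)

q, samples, accepts = np.zeros(2), [], 0
for _ in range(5000):
    q, ok = hmc_step(q)
    accepts += ok
    samples.append(q)
samples = np.array(samples)
print("acceptance:", accepts / 5000, "\nsample cov:\n", np.cov(samples.T))
```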
2404.05001
Ping Wang
Gang Qu, Ping Wang, and Xin Yuan
Dual-Scale Transformer for Large-Scale Single-Pixel Imaging
CVPR 2024
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
Single-pixel imaging (SPI) is a promising computational imaging technique that produces an image by solving an ill-posed reconstruction problem from few measurements captured by a single-pixel detector. Deep learning has achieved impressive success on SPI reconstruction. However, poor reconstruction performance and impractical imaging models have limited its real-world applications. In this paper, we propose a deep unfolding network with a hybrid-attention Transformer on a Kronecker SPI model, dubbed HATNet, to improve the imaging quality of real SPI cameras. Specifically, we unfold the computation graph of the iterative shrinkage-thresholding algorithm (ISTA) into two alternating modules: efficient tensor gradient descent and hybrid-attention multi-scale denoising. By virtue of Kronecker SPI, the gradient descent module avoids the high computational overheads rooted in previous gradient descent modules based on vectorized SPI. The denoising module is an encoder-decoder architecture powered by dual-scale spatial attention for high- and low-frequency aggregation and channel attention for global information recalibration. Moreover, we build an SPI prototype to verify the effectiveness of the proposed method. Extensive experiments on synthetic and real data demonstrate that our method achieves state-of-the-art performance. The source code and pre-trained models are available at https://github.com/Gang-Qu/HATNet-SPI.
[ { "created": "Sun, 7 Apr 2024 15:53:21 GMT", "version": "v1" } ]
2024-04-09
[ [ "Qu", "Gang", "" ], [ "Wang", "Ping", "" ], [ "Yuan", "Xin", "" ] ]
Single-pixel imaging (SPI) is a promising computational imaging technique that produces an image by solving an ill-posed reconstruction problem from few measurements captured by a single-pixel detector. Deep learning has achieved impressive success on SPI reconstruction. However, poor reconstruction performance and impractical imaging models have limited its real-world applications. In this paper, we propose a deep unfolding network with a hybrid-attention Transformer on a Kronecker SPI model, dubbed HATNet, to improve the imaging quality of real SPI cameras. Specifically, we unfold the computation graph of the iterative shrinkage-thresholding algorithm (ISTA) into two alternating modules: efficient tensor gradient descent and hybrid-attention multi-scale denoising. By virtue of Kronecker SPI, the gradient descent module avoids the high computational overheads rooted in previous gradient descent modules based on vectorized SPI. The denoising module is an encoder-decoder architecture powered by dual-scale spatial attention for high- and low-frequency aggregation and channel attention for global information recalibration. Moreover, we build an SPI prototype to verify the effectiveness of the proposed method. Extensive experiments on synthetic and real data demonstrate that our method achieves state-of-the-art performance. The source code and pre-trained models are available at https://github.com/Gang-Qu/HATNet-SPI.
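The algorithm HATNet unfolds is classical ISTA, which alternates a gradient step on the data-fidelity term with soft-thresholding (the role the learned denoising module replaces). A plain numpy version on a random compressed-sensing problem follows; the problem sizes and regularization weight are arbitrary, and the Kronecker measurement structure is omitted.

```python
# Classical ISTA, the algorithm HATNet unfolds into network modules:
# alternate a gradient step on ||y - A x||^2 with soft-thresholding
# (the role the learned denoiser replaces). Sizes and lambda are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 60, 128, 8
A = rng.normal(size=(m, n)) / np.sqrt(m)       # SPI-like measurement matrix
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
y = A @ x_true

L = np.linalg.norm(A, 2) ** 2                  # Lipschitz constant of the gradient
lam, x = 0.01, np.zeros(n)
for _ in range(500):
    g = A.T @ (A @ x - y)                      # gradient step on data fidelity
    z = x - g / L
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
print("relative recovery error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```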
1302.2671
Yoon-Sik Cho
Yoon-Sik Cho, Aram Galstyan, P. Jeffrey Brantingham, George Tita
Latent Self-Exciting Point Process Model for Spatial-Temporal Networks
20 pages, 6 figures (v3); 11 pages, 6 figures (v2); previous version appeared in the 9th Bayesian Modeling Applications Workshop, UAI'12
DISCRETE AND CONTINUOUS DYNAMICAL SYSTEMS SERIES B, Vol. 19, pp. 1335-1354, 2014
10.3934/dcdsb.2014.19.1335
null
cs.SI cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a latent self-exciting point process model that describes geographically distributed interactions between pairs of entities. In contrast to most existing approaches that assume fully observable interactions, here we consider a scenario where certain interaction events lack information about participants. Instead, this information needs to be inferred from the available observations. We develop an efficient approximate algorithm based on variational expectation-maximization to infer unknown participants in an event given the location and the time of the event. We validate the model on synthetic as well as real-world data, and obtain very promising results on the identity-inference task. We also use our model to predict the timing and participants of future events, and demonstrate that it compares favorably with baseline approaches.
[ { "created": "Tue, 12 Feb 2013 00:01:02 GMT", "version": "v1" }, { "created": "Fri, 15 Feb 2013 18:02:36 GMT", "version": "v2" }, { "created": "Wed, 30 Apr 2014 23:42:52 GMT", "version": "v3" } ]
2014-05-02
[ [ "Cho", "Yoon-Sik", "" ], [ "Galstyan", "Aram", "" ], [ "Brantingham", "P. Jeffrey", "" ], [ "Tita", "George", "" ] ]
We propose a latent self-exciting point process model that describes geographically distributed interactions between pairs of entities. In contrast to most existing approaches that assume fully observable interactions, here we consider a scenario where certain interaction events lack information about participants. Instead, this information needs to be inferred from the available observations. We develop an efficient approximate algorithm based on variational expectation-maximization to infer unknown participants in an event given the location and the time of the event. We validate the model on synthetic as well as real-world data, and obtain very promising results on the identity-inference task. We also use our model to predict the timing and participants of future events, and demonstrate that it compares favorably with baseline approaches.
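The self-exciting mechanism can be illustrated with a minimal temporal Hawkes process, simulated by Ogata's thinning algorithm; the spatial marks and latent participants of the full model are omitted, and all parameters are illustrative.

```python
# Minimal self-exciting (Hawkes) point process, simulated with Ogata's
# thinning algorithm. The spatial marks and latent participants of the
# paper's model are omitted; parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
mu, alpha, beta, T = 0.5, 0.8, 1.2, 50.0   # baseline, excitation, decay, horizon

def intensity(t, events):
    """Conditional intensity: baseline plus exponentially decaying excitation."""
    if not events:
        return mu
    return mu + alpha * np.sum(np.exp(-beta * (t - np.array(events))))

events, t = [], 0.0
while t < T:
    lam_bar = intensity(t, events) + alpha      # conservative upper bound
    t += rng.exponential(1.0 / lam_bar)         # candidate next event time
    if t < T and rng.random() <= intensity(t, events) / lam_bar:
        events.append(t)                        # accepted event adds excitation

print(len(events), "events; stationary rate mu/(1-alpha/beta) =", mu / (1 - alpha / beta))
```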
2405.08547
Zhiwei Wang
Zhiwei Wang, Jun Huang, Longhua Ma, Chengyu Wu, Hongyu Ma
Exploring Graph-based Knowledge: Multi-Level Feature Distillation via Channels Relational Graph
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by-sa/4.0/
In visual tasks, large teacher models capture essential features and deep information, enhancing performance. However, distilling this information into smaller student models often leads to performance loss due to structural differences and capacity limitations. To tackle this, we propose a distillation framework based on graph knowledge, including a multi-level feature alignment strategy and an attention-guided mechanism to provide a targeted learning trajectory for the student model. We emphasize spectral embedding (SE) as a key technique in our distillation process, which merges the student's feature space with the relational knowledge and structural complexities similar to the teacher network. This method captures the teacher's understanding in a graph-based representation, enabling the student model to more accurately mimic the complex structural dependencies present in the teacher model. Compared to methods that focus only on specific distillation areas, our strategy not only considers key features within the teacher model but also endeavors to capture the relationships and interactions among feature sets, encoding these complex pieces of information into a graph structure to understand and utilize the dynamic relationships among these pieces of information from a global perspective. Experiments show that our method outperforms previous feature distillation methods on the CIFAR-100, MS-COCO, and Pascal VOC datasets, proving its efficiency and applicability.
[ { "created": "Tue, 14 May 2024 12:37:05 GMT", "version": "v1" }, { "created": "Thu, 16 May 2024 05:25:01 GMT", "version": "v2" } ]
2024-05-17
[ [ "Wang", "Zhiwei", "" ], [ "Huang", "Jun", "" ], [ "Ma", "Longhua", "" ], [ "Wu", "Chengyu", "" ], [ "Ma", "Hongyu", "" ] ]
In visual tasks, large teacher models capture essential features and deep information, enhancing performance. However, distilling this information into smaller student models often leads to performance loss due to structural differences and capacity limitations. To tackle this, we propose a distillation framework based on graph knowledge, including a multi-level feature alignment strategy and an attention-guided mechanism to provide a targeted learning trajectory for the student model. We emphasize spectral embedding (SE) as a key technique in our distillation process, which merges the student's feature space with the relational knowledge and structural complexities similar to the teacher network. This method captures the teacher's understanding in a graph-based representation, enabling the student model to more accurately mimic the complex structural dependencies present in the teacher model. Compared to methods that focus only on specific distillation areas, our strategy not only considers key features within the teacher model but also endeavors to capture the relationships and interactions among feature sets, encoding these complex pieces of information into a graph structure to understand and utilize the dynamic relationships among these pieces of information from a global perspective. Experiments show that our method outperforms previous feature distillation methods on the CIFAR-100, MS-COCO, and Pascal VOC datasets, proving its efficiency and applicability.
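The spectral embedding step can be sketched as follows: build a channel-by-channel affinity graph from a feature map, embed the channels with the bottom eigenvectors of the normalized graph Laplacian, then compare teacher and student embeddings through their sign-invariant Gram matrices. The feature shapes and the cosine affinity are assumptions, not the paper's exact construction.

```python
# Sketch of spectral embedding (SE) over a channel-relation graph: build a
# channel-by-channel affinity from a feature map, then embed channels with
# the bottom eigenvectors of the normalized graph Laplacian. Feature shapes
# and the cosine affinity are illustrative assumptions.
import numpy as np

def spectral_embedding(feat: np.ndarray, dim: int = 4) -> np.ndarray:
    """feat: (C, H, W) feature map -> (C, dim) channel embedding."""
    C = feat.shape[0]
    F = feat.reshape(C, -1)
    F = F / (np.linalg.norm(F, axis=1, keepdims=True) + 1e-8)
    W = np.clip(F @ F.T, 0, None)                # cosine affinity graph
    d_inv_sqrt = np.diag(1.0 / np.sqrt(W.sum(axis=1) + 1e-8))
    L = np.eye(C) - d_inv_sqrt @ W @ d_inv_sqrt  # normalized Laplacian
    evals, evecs = np.linalg.eigh(L)
    return evecs[:, 1 : dim + 1]                 # skip the trivial eigenvector

rng = np.random.default_rng(0)
teacher = rng.normal(size=(32, 8, 8))
student = teacher[:, ::2, ::2] + 0.1 * rng.normal(size=(32, 4, 4))

Et, Es = spectral_embedding(teacher), spectral_embedding(student)
# Compare sign-invariant Gram matrices of the two embeddings as a loss:
loss = np.linalg.norm(Et @ Et.T - Es @ Es.T)
print("relational embedding mismatch:", loss)
```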
2307.15715
Gregor Stiglic
Gregor Stiglic, Leon Kopitar, Lucija Gosak, Primoz Kocbek, Zhe He, Prithwish Chakraborty, Pablo Meyer, Jiang Bian
Improving Primary Healthcare Workflow Using Extreme Summarization of Scientific Literature Based on Generative AI
5 pages, 5 figures
null
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by/4.0/
Primary care professionals struggle to keep up to date with the latest scientific literature critical in guiding evidence-based practice related to their daily work. To help solve the above-mentioned problem, we employed generative artificial intelligence techniques based on large-scale language models to summarize abstracts of scientific papers. Our objective is to investigate the potential of generative artificial intelligence in diminishing the cognitive load experienced by practitioners, thus exploring its ability to alleviate mental effort and burden. The study participants were provided with two use cases related to preventive care and behavior change, simulating a search for new scientific literature. The study included 113 university students from Slovenia and the United States randomized into three distinct study groups. The first group was assigned to the full abstracts. The second group was assigned to the short abstracts generated by AI. The third group had the option to select a full abstract in addition to the AI-generated short summary. Each use case study included ten retrieved abstracts. Our research demonstrates that the use of generative AI for literature review is efficient and effective. The time needed to answer questions related to the content of abstracts was significantly lower in groups two and three compared to the first group using full abstracts. The results, however, also show significantly lower accuracy in extracted knowledge in cases where full abstract was not available. Such a disruptive technology could significantly reduce the time required for healthcare professionals to keep up with the most recent scientific literature; nevertheless, further developments are needed to help them comprehend the knowledge accurately.
[ { "created": "Mon, 24 Jul 2023 21:42:27 GMT", "version": "v1" } ]
2023-08-01
[ [ "Stiglic", "Gregor", "" ], [ "Kopitar", "Leon", "" ], [ "Gosak", "Lucija", "" ], [ "Kocbek", "Primoz", "" ], [ "He", "Zhe", "" ], [ "Chakraborty", "Prithwish", "" ], [ "Meyer", "Pablo", "" ], [ "Bian", "Jiang", "" ] ]
Primary care professionals struggle to keep up to date with the latest scientific literature critical in guiding evidence-based practice related to their daily work. To help solve the above-mentioned problem, we employed generative artificial intelligence techniques based on large-scale language models to summarize abstracts of scientific papers. Our objective is to investigate the potential of generative artificial intelligence in diminishing the cognitive load experienced by practitioners, thus exploring its ability to alleviate mental effort and burden. The study participants were provided with two use cases related to preventive care and behavior change, simulating a search for new scientific literature. The study included 113 university students from Slovenia and the United States randomized into three distinct study groups. The first group was assigned to the full abstracts. The second group was assigned to the short abstracts generated by AI. The third group had the option to select a full abstract in addition to the AI-generated short summary. Each use case study included ten retrieved abstracts. Our research demonstrates that the use of generative AI for literature review is efficient and effective. The time needed to answer questions related to the content of abstracts was significantly lower in groups two and three compared to the first group using full abstracts. The results, however, also show significantly lower accuracy in extracted knowledge in cases where full abstract was not available. Such a disruptive technology could significantly reduce the time required for healthcare professionals to keep up with the most recent scientific literature; nevertheless, further developments are needed to help them comprehend the knowledge accurately.
2406.17216
Ayush Sekhari
Martin Pawelczyk, Jimmy Z. Di, Yiwei Lu, Gautam Kamath, Ayush Sekhari, Seth Neel
Machine Unlearning Fails to Remove Data Poisoning Attacks
null
null
null
null
cs.LG cs.AI cs.CR cs.CY
http://creativecommons.org/licenses/by-nc-nd/4.0/
We revisit the efficacy of several practical methods for approximate machine unlearning developed for large-scale deep learning. In addition to complying with data deletion requests, one often-cited potential application for unlearning methods is to remove the effects of training on poisoned data. We experimentally demonstrate that, while existing unlearning methods have been demonstrated to be effective in a number of evaluation settings (e.g., alleviating membership inference attacks), they fail to remove the effects of data poisoning, across a variety of types of poisoning attacks (indiscriminate, targeted, and a newly-introduced Gaussian poisoning attack) and models (image classifiers and LLMs); even when granted a relatively large compute budget. In order to precisely characterize unlearning efficacy, we introduce new evaluation metrics for unlearning based on data poisoning. Our results suggest that a broader perspective, including a wider variety of evaluations, is required to avoid a false sense of confidence in machine unlearning procedures for deep learning without provable guarantees. Moreover, while unlearning methods show some signs of being useful to efficiently remove poisoned datapoints without having to retrain, our work suggests that these methods are not yet "ready for prime time", and currently provide limited benefit over retraining.
[ { "created": "Tue, 25 Jun 2024 02:05:29 GMT", "version": "v1" } ]
2024-06-26
[ [ "Pawelczyk", "Martin", "" ], [ "Di", "Jimmy Z.", "" ], [ "Lu", "Yiwei", "" ], [ "Kamath", "Gautam", "" ], [ "Sekhari", "Ayush", "" ], [ "Neel", "Seth", "" ] ]
We revisit the efficacy of several practical methods for approximate machine unlearning developed for large-scale deep learning. In addition to complying with data deletion requests, one often-cited potential application for unlearning methods is to remove the effects of training on poisoned data. We experimentally demonstrate that, while existing unlearning methods have been demonstrated to be effective in a number of evaluation settings (e.g., alleviating membership inference attacks), they fail to remove the effects of data poisoning, across a variety of types of poisoning attacks (indiscriminate, targeted, and a newly-introduced Gaussian poisoning attack) and models (image classifiers and LLMs); even when granted a relatively large compute budget. In order to precisely characterize unlearning efficacy, we introduce new evaluation metrics for unlearning based on data poisoning. Our results suggest that a broader perspective, including a wider variety of evaluations, is required to avoid a false sense of confidence in machine unlearning procedures for deep learning without provable guarantees. Moreover, while unlearning methods show some signs of being useful to efficiently remove poisoned datapoints without having to retrain, our work suggests that these methods are not yet "ready for prime time", and currently provide limited benefit over retraining.
2304.08577
Artsiom Sanakoyeu
Yuming Du, Robin Kips, Albert Pumarola, Sebastian Starke, Ali Thabet, Artsiom Sanakoyeu
Avatars Grow Legs: Generating Smooth Human Motion from Sparse Tracking Inputs with Diffusion Model
CVPR 2023, project page: https://dulucas.github.io/agrol/
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With the recent surge in popularity of AR/VR applications, realistic and accurate control of 3D full-body avatars has become a highly demanded feature. A particular challenge is that only a sparse tracking signal is available from standalone HMDs (Head Mounted Devices), often limited to tracking the user's head and wrists. While this signal is valuable for reconstructing the upper body motion, the lower body is not tracked and must be synthesized from the limited information provided by the upper body joints. In this paper, we present AGRoL, a novel conditional diffusion model specifically designed to track full bodies given sparse upper-body tracking signals. Our model is based on a simple multi-layer perceptron (MLP) architecture and a novel conditioning scheme for motion data. It can predict accurate and smooth full-body motion, particularly the challenging lower body movement. Unlike common diffusion architectures, our compact architecture can run in real-time, making it suitable for online body-tracking applications. We train and evaluate our model on the AMASS motion capture dataset, and demonstrate that our approach outperforms state-of-the-art methods in generated motion accuracy and smoothness. We further justify our design choices through extensive experiments and ablation studies.
[ { "created": "Mon, 17 Apr 2023 19:35:13 GMT", "version": "v1" } ]
2023-04-19
[ [ "Du", "Yuming", "" ], [ "Kips", "Robin", "" ], [ "Pumarola", "Albert", "" ], [ "Starke", "Sebastian", "" ], [ "Thabet", "Ali", "" ], [ "Sanakoyeu", "Artsiom", "" ] ]
With the recent surge in popularity of AR/VR applications, realistic and accurate control of 3D full-body avatars has become a highly demanded feature. A particular challenge is that only a sparse tracking signal is available from standalone HMDs (Head Mounted Devices), often limited to tracking the user's head and wrists. While this signal is valuable for reconstructing the upper body motion, the lower body is not tracked and must be synthesized from the limited information provided by the upper body joints. In this paper, we present AGRoL, a novel conditional diffusion model specifically designed to track full bodies given sparse upper-body tracking signals. Our model is based on a simple multi-layer perceptron (MLP) architecture and a novel conditioning scheme for motion data. It can predict accurate and smooth full-body motion, particularly the challenging lower body movement. Unlike common diffusion architectures, our compact architecture can run in real-time, making it suitable for online body-tracking applications. We train and evaluate our model on the AMASS motion capture dataset, and demonstrate that our approach outperforms state-of-the-art methods in generated motion accuracy and smoothness. We further justify our design choices through extensive experiments and ablation studies.
2405.10077
Jacopo Bonari
Jacopo Bonari, Lisa K\"uhn, Max von Danwitz, Alexander Popp
Towards Real-Time Urban Physics Simulations with Digital Twins
null
null
null
null
cs.CE
http://creativecommons.org/licenses/by/4.0/
Urban populations continue to grow, highlighting the critical need to safeguard civilians against potential disruptions, such as dangerous gas contaminant dispersion. The digital twin (DT) framework offers promise in analyzing and predicting such events. This study presents a computational framework for modelling airborne contaminant dispersion in built environments. Leveraging automatic generation of computational domains and solution processes, the proposed framework solves the underlying physical model equations with the finite element method (FEM) for numerical solutions. Model order reduction (MOR) methods are investigated to enhance computational efficiency without compromising accuracy. The study outlines the automatic model generation process, the details of the employed model, and the future perspectives for the realization of a DT. Throughout this research, the aim is to develop a reliable predictive model combining physics and data in a hybrid DT to provide informed real-time support within evacuation scenarios.
[ { "created": "Thu, 16 May 2024 13:16:48 GMT", "version": "v1" }, { "created": "Mon, 12 Aug 2024 15:23:47 GMT", "version": "v2" } ]
2024-08-13
[ [ "Bonari", "Jacopo", "" ], [ "Kühn", "Lisa", "" ], [ "von Danwitz", "Max", "" ], [ "Popp", "Alexander", "" ] ]
Urban populations continue to grow, highlighting the critical need to safeguard civilians against potential disruptions, such as dangerous gas contaminant dispersion. The digital twin (DT) framework offers promise in analyzing and predicting such events. This study presents a computational framework for modelling airborne contaminant dispersion in built environments. Leveraging automatic generation of computational domains and solution processes, the proposed framework solves the underlying physical model equations with the finite element method (FEM) for numerical solutions. Model order reduction (MOR) methods are investigated to enhance computational efficiency without compromising accuracy. The study outlines the automatic model generation process, the details of the employed model, and the future perspectives for the realization of a DT. Throughout this research, the aim is to develop a reliable predictive model combining physics and data in a hybrid DT to provide informed real-time support within evacuation scenarios.
2104.12203
Muhammad Saif Ullah Khan
Muhammad Saif Ullah Khan
A novel segmentation dataset for signatures on bank checks
null
null
null
null
cs.CV
http://creativecommons.org/publicdomain/zero/1.0/
The dataset presented provides high-resolution images of real, filled out bank checks containing various complex backgrounds, and handwritten text and signatures in the respective fields, along with both pixel-level and patch-level segmentation masks for the signatures on the checks. The images of bank checks were obtained from different sources, including other publicly available check datasets, publicly available images on the internet, as well as scans and images of real checks. Using the GIMP graphics software, pixel-level segmentation masks for signatures on these checks were manually generated as binary images. An automated script was then used to generate patch-level masks. The dataset was created to train and test networks for extracting signatures from bank checks and other similar documents with very complex backgrounds.
[ { "created": "Sun, 25 Apr 2021 16:56:09 GMT", "version": "v1" }, { "created": "Wed, 28 Apr 2021 11:06:40 GMT", "version": "v2" } ]
2021-04-29
[ [ "Khan", "Muhammad Saif Ullah", "" ] ]
The dataset presented provides high-resolution images of real, filled out bank checks containing various complex backgrounds, and handwritten text and signatures in the respective fields, along with both pixel-level and patch-level segmentation masks for the signatures on the checks. The images of bank checks were obtained from different sources, including other publicly available check datasets, publicly available images on the internet, as well as scans and images of real checks. Using the GIMP graphics software, pixel-level segmentation masks for signatures on these checks were manually generated as binary images. An automated script was then used to generate patch-level masks. The dataset was created to train and test networks for extracting signatures from bank checks and other similar documents with very complex backgrounds.
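The automated patch-level step mentioned above is straightforward to sketch: tile the binary pixel mask into a grid and mark a patch positive when enough of its pixels belong to the signature. The patch size and coverage threshold below are assumptions, not the values used for the dataset.

```python
# Sketch of the automated patch-level step: reduce a binary pixel mask to a
# grid of patches, marking a patch positive when enough of its pixels are
# signature. Patch size and threshold are assumptions, not the dataset's.
import numpy as np

def patch_mask(pixel_mask: np.ndarray, patch: int = 32, thresh: float = 0.05) -> np.ndarray:
    """pixel_mask: (H, W) binary array; returns an (H//patch, W//patch) binary grid."""
    H, W = pixel_mask.shape
    H2, W2 = H - H % patch, W - W % patch          # crop to a whole grid
    grid = pixel_mask[:H2, :W2].reshape(H2 // patch, patch, W2 // patch, patch)
    coverage = grid.mean(axis=(1, 3))              # fraction of signature pixels
    return (coverage >= thresh).astype(np.uint8)

mask = np.zeros((128, 256), dtype=np.uint8)
mask[90:120, 180:240] = 1                          # a fake signature region
print(patch_mask(mask))                            # 4 x 8 grid of patch labels
```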
2406.19016
Yaojie Zhang
Yaojie Zhang, Haowen Luo, Weijun Wang, Wei Feng
Robust Multi-Robot Global Localization with Unknown Initial Pose based on Neighbor Constraints
7 pages (6+1), accepted by ICRA 2024
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Multi-robot global localization (MR-GL) with unknown initial positions in a large-scale environment is a challenging task. The key point is the data association between different robots' viewpoints, which also makes traditional appearance-based localization methods unusable. Recently, researchers have utilized objects' semantic invariance to generate a semantic graph to address this issue. However, previous works lack robustness and are sensitive to the overlap rate of the maps, resulting in unpredictable performance in real-world environments. In this paper, we propose a data association algorithm based on neighbor constraints to improve the robustness of the system. We demonstrate the effectiveness of our method on three different datasets, indicating a significant improvement in robustness compared to previous works.
[ { "created": "Thu, 27 Jun 2024 09:02:02 GMT", "version": "v1" } ]
2024-06-28
[ [ "Zhang", "Yaojie", "" ], [ "Luo", "Haowen", "" ], [ "Wang", "Weijun", "" ], [ "Feng", "Wei", "" ] ]
Multi-robot global localization (MR-GL) with unknown initial positions in a large-scale environment is a challenging task. The key point is the data association between different robots' viewpoints, which also makes traditional appearance-based localization methods unusable. Recently, researchers have utilized objects' semantic invariance to generate a semantic graph to address this issue. However, previous works lack robustness and are sensitive to the overlap rate of the maps, resulting in unpredictable performance in real-world environments. In this paper, we propose a data association algorithm based on neighbor constraints to improve the robustness of the system. We demonstrate the effectiveness of our method on three different datasets, indicating a significant improvement in robustness compared to previous works.
2106.09623
Anirudh Som
Anirudh Som, Sujeong Kim, Bladimir Lopez-Prado, Svati Dhamija, Nonye Alozie, Amir Tamrakar
Towards Explainable Student Group Collaboration Assessment Models Using Temporal Representations of Individual Student Roles
Accepted in the poster session at the 14th International Conference on Educational Data Mining
null
null
null
cs.LG
http://creativecommons.org/licenses/by-nc-nd/4.0/
Collaboration is identified as a required and necessary skill for students to be successful in the fields of Science, Technology, Engineering and Mathematics (STEM). However, due to a growing student population and limited teaching staff, it is difficult for teachers to provide constructive feedback and instill collaborative skills using instructional methods. Developing simple and easily explainable machine-learning-based automated systems can help address this problem. Improving upon our previous work, in this paper we propose using simple temporal-CNN deep-learning models to assess student group collaboration, taking temporal representations of individual student roles as input. We examine the applicability of dynamically changing feature representations for student group collaboration assessment and how they impact overall performance. We also use Grad-CAM visualizations to better understand and interpret the important temporal indices that led to the deep-learning model's decision.
[ { "created": "Thu, 17 Jun 2021 16:00:08 GMT", "version": "v1" } ]
2021-06-18
[ [ "Som", "Anirudh", "" ], [ "Kim", "Sujeong", "" ], [ "Lopez-Prado", "Bladimir", "" ], [ "Dhamija", "Svati", "" ], [ "Alozie", "Nonye", "" ], [ "Tamrakar", "Amir", "" ] ]
Collaboration is identified as a required and necessary skill for students to be successful in the fields of Science, Technology, Engineering and Mathematics (STEM). However, due to a growing student population and limited teaching staff, it is difficult for teachers to provide constructive feedback and instill collaborative skills using instructional methods. Developing simple and easily explainable machine-learning-based automated systems can help address this problem. Improving upon our previous work, in this paper we propose using simple temporal-CNN deep-learning models to assess student group collaboration, taking temporal representations of individual student roles as input. We examine the applicability of dynamically changing feature representations for student group collaboration assessment and how they impact overall performance. We also use Grad-CAM visualizations to better understand and interpret the important temporal indices that led to the deep-learning model's decision.
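The model family is simple enough to sketch: per-frame role features pass through 1-D convolutions over time, are pooled globally, and mapped to class scores. The feature dimension, sequence length, and number of collaboration-quality classes below are placeholders, not the study's actual configuration.

```python
# Minimal temporal-CNN sketch for sequence-level classification: per-frame
# role features -> 1-D convolutions over time -> global pooling -> class
# scores. Feature size, sequence length, and the number of collaboration
# quality classes are placeholder assumptions.
import torch
import torch.nn as nn

class TemporalCNN(nn.Module):
    def __init__(self, in_feats: int = 8, n_classes: int = 4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_feats, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),   # pool over the time axis
        )
        self.head = nn.Linear(32, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, features) -> Conv1d expects (batch, features, time)
        z = self.net(x.transpose(1, 2)).squeeze(-1)
        return self.head(z)

model = TemporalCNN()
roles = torch.randn(2, 120, 8)        # 2 groups, 120 frames, 8 role features
print(model(roles).shape)             # torch.Size([2, 4]) class logits
```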
2209.11591
Vadim Indelman
Gilad Rotman, Vadim Indelman
involve-MI: Informative Planning with High-Dimensional Non-Parametric Beliefs
null
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
One of the most complex tasks of decision making and planning is to gather information. This task becomes even more complex when the state is high-dimensional and its belief cannot be expressed with a parametric distribution. Although the state is high-dimensional, in many problems only a small fraction of it might be involved in transitioning the state and generating observations. We exploit this fact to calculate an information-theoretic expected reward, mutual information (MI), over a much lower-dimensional subset of the state, to improve efficiency and without sacrificing accuracy. A similar approach was used in previous works, yet specifically for Gaussian distributions, and we here extend it for general distributions. Moreover, we apply the dimensionality reduction for cases in which the new states are augmented to the previous, yet again without sacrificing accuracy. We then continue by developing an estimator for the MI which works in a Sequential Monte Carlo (SMC) manner, and avoids the reconstruction of future belief's surfaces. Finally, we show how this work is applied to the informative planning optimization problem. This work is then evaluated in a simulation of an active SLAM problem, where the improvement in both accuracy and timing is demonstrated.
[ { "created": "Fri, 23 Sep 2022 13:51:36 GMT", "version": "v1" } ]
2022-09-26
[ [ "Rotman", "Gilad", "" ], [ "Indelman", "Vadim", "" ] ]
One of the most complex tasks of decision making and planning is to gather information. This task becomes even more complex when the state is high-dimensional and its belief cannot be expressed with a parametric distribution. Although the state is high-dimensional, in many problems only a small fraction of it might be involved in transitioning the state and generating observations. We exploit this fact to calculate an information-theoretic expected reward, mutual information (MI), over a much lower-dimensional subset of the state, to improve efficiency without sacrificing accuracy. A similar approach was used in previous works, yet specifically for Gaussian distributions, and here we extend it to general distributions. Moreover, we apply this dimensionality reduction to cases in which new states are augmented to the previous ones, again without sacrificing accuracy. We then develop an estimator for the MI that works in a Sequential Monte Carlo (SMC) manner and avoids the reconstruction of future beliefs' surfaces. Finally, we show how this work applies to the informative planning optimization problem. The work is evaluated in a simulation of an active SLAM problem, where the improvement in both accuracy and timing is demonstrated.
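A toy illustration of the core idea -- computing MI over only the involved subset of the state -- using scikit-learn's generic k-NN estimator as a per-coordinate proxy rather than the paper's SMC estimator; the linear observation model and variable names are invented for the example:

import numpy as np
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(0)
n = 2000
state = rng.normal(size=(n, 50))      # samples from a high-dimensional belief
involved = [0, 1, 2]                  # only these coordinates drive the observation
obs = state[:, 0] + 0.5 * state[:, 1] - state[:, 2] + 0.1 * rng.normal(size=n)

# Per-coordinate MI proxy (not the joint MI the paper computes): the subset
# carries essentially all the MI mass at a fraction of the cost.
mi_full = mutual_info_regression(state, obs).sum()
mi_sub = mutual_info_regression(state[:, involved], obs).sum()
print(f"full-state MI proxy: {mi_full:.3f}, subset MI proxy: {mi_sub:.3f}")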
2105.11683
Yuan Xie
Yanbo Wang, Shaohui Lin, Yanyun Qu, Haiyan Wu, Zhizhong Zhang, Yuan Xie, Angela Yao
Towards Compact Single Image Super-Resolution via Contrastive Self-distillation
Accepted by IJCAI-21
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Convolutional neural networks (CNNs) are highly successful for super-resolution (SR) but often require sophisticated architectures with heavy memory cost and computational overhead, which significantly restricts their practical deployment on resource-limited devices. In this paper, we propose a novel contrastive self-distillation (CSD) framework to simultaneously compress and accelerate various off-the-shelf SR models. In particular, a channel-splitting super-resolution network is first constructed from a target teacher network as a compact student network. Then, we propose a novel contrastive loss to improve the quality of SR images and PSNR/SSIM via explicit knowledge transfer. Extensive experiments demonstrate that the proposed CSD scheme effectively compresses and accelerates several standard SR models such as EDSR, RCAN and CARN. Code is available at https://github.com/Booooooooooo/CSD.
[ { "created": "Tue, 25 May 2021 05:44:11 GMT", "version": "v1" } ]
2021-05-26
[ [ "Wang", "Yanbo", "" ], [ "Lin", "Shaohui", "" ], [ "Qu", "Yanyun", "" ], [ "Wu", "Haiyan", "" ], [ "Zhang", "Zhizhong", "" ], [ "Xie", "Yuan", "" ], [ "Yao", "Angela", "" ] ]
Convolutional neural networks (CNNs) are highly successful for super-resolution (SR) but often require sophisticated architectures with heavy memory cost and computational overhead, which significantly restricts their practical deployment on resource-limited devices. In this paper, we propose a novel contrastive self-distillation (CSD) framework to simultaneously compress and accelerate various off-the-shelf SR models. In particular, a channel-splitting super-resolution network is first constructed from a target teacher network as a compact student network. Then, we propose a novel contrastive loss to improve the quality of SR images and PSNR/SSIM via explicit knowledge transfer. Extensive experiments demonstrate that the proposed CSD scheme effectively compresses and accelerates several standard SR models such as EDSR, RCAN and CARN. Code is available at https://github.com/Booooooooooo/CSD.
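A hedged sketch of a contrastive SR loss in the spirit described above, assuming PyTorch: the HR image acts as the positive, a blurry bicubic round-trip as the negative, plus an L1 distillation term; this is an illustrative variant, not the paper's exact formulation:

import torch
import torch.nn.functional as F

def contrastive_sr_loss(student_sr, teacher_sr, hr, lr_up, eps=1e-6):
    """Pull the student output toward HR (positive) while pushing it away
    from a blurry bicubic upsampling (negative); ratio form is illustrative."""
    pos = F.l1_loss(student_sr, hr)
    neg = F.l1_loss(student_sr, lr_up)
    distill = F.l1_loss(student_sr, teacher_sr)   # explicit knowledge transfer
    return distill + pos / (neg + eps)

hr = torch.rand(1, 3, 64, 64)
student_sr = torch.rand(1, 3, 64, 64, requires_grad=True)
teacher_sr = torch.rand(1, 3, 64, 64)
lr_up = F.interpolate(F.interpolate(hr, scale_factor=0.25, mode="bicubic"),
                      scale_factor=4, mode="bicubic")  # blurry negative
loss = contrastive_sr_loss(student_sr, teacher_sr, hr, lr_up)
loss.backward()
print(float(loss))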
2111.01528
Shengcai Liu
Shengcai Liu, Ning Lu, Wenjing Hong, Chao Qian, Ke Tang
Effective and Imperceptible Adversarial Textual Attack via Multi-objectivization
null
null
null
null
cs.CL cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The field of adversarial textual attack has grown significantly over the last few years, where the commonly considered objective is to craft adversarial examples (AEs) that can successfully fool the target model. However, the imperceptibility of attacks, which is also essential for practical attackers, is often left out by previous studies. Consequently, the crafted AEs tend to have obvious structural and semantic differences from the original human-written text, making them easily perceptible. In this work, we advocate leveraging multi-objectivization to address this issue. Specifically, we reformulate the problem of crafting AEs as a multi-objective optimization problem, where attack imperceptibility is considered as an auxiliary objective. Then, we propose a simple yet effective evolutionary algorithm, dubbed HydraText, to solve this problem. To the best of our knowledge, HydraText is currently the only approach that can be effectively applied to both score-based and decision-based attack settings. Exhaustive experiments involving 44237 instances demonstrate that HydraText consistently achieves competitive attack success rates and better attack imperceptibility than recently proposed attack approaches. A human evaluation study also shows that the AEs crafted by HydraText are harder to distinguish from human-written text. Finally, these AEs exhibit good transferability and can bring notable robustness improvements to the target model via adversarial training.
[ { "created": "Tue, 2 Nov 2021 12:10:58 GMT", "version": "v1" }, { "created": "Thu, 6 Jan 2022 06:43:51 GMT", "version": "v2" }, { "created": "Fri, 9 Dec 2022 03:15:35 GMT", "version": "v3" }, { "created": "Fri, 15 Dec 2023 03:08:59 GMT", "version": "v4" } ]
2023-12-18
[ [ "Liu", "Shengcai", "" ], [ "Lu", "Ning", "" ], [ "Hong", "Wenjing", "" ], [ "Qian", "Chao", "" ], [ "Tang", "Ke", "" ] ]
The field of adversarial textual attack has grown significantly over the last few years, where the commonly considered objective is to craft adversarial examples (AEs) that can successfully fool the target model. However, the imperceptibility of attacks, which is also essential for practical attackers, is often left out by previous studies. Consequently, the crafted AEs tend to have obvious structural and semantic differences from the original human-written text, making them easily perceptible. In this work, we advocate leveraging multi-objectivization to address this issue. Specifically, we reformulate the problem of crafting AEs as a multi-objective optimization problem, where attack imperceptibility is considered as an auxiliary objective. Then, we propose a simple yet effective evolutionary algorithm, dubbed HydraText, to solve this problem. To the best of our knowledge, HydraText is currently the only approach that can be effectively applied to both score-based and decision-based attack settings. Exhaustive experiments involving 44237 instances demonstrate that HydraText consistently achieves competitive attack success rates and better attack imperceptibility than recently proposed attack approaches. A human evaluation study also shows that the AEs crafted by HydraText are harder to distinguish from human-written text. Finally, these AEs exhibit good transferability and can bring notable robustness improvements to the target model via adversarial training.
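The multi-objective reformulation needs, at minimum, a Pareto-dominance test over (attack success, imperceptibility) pairs. A self-contained sketch of that selection step, with invented candidate scores (not HydraText itself):

def dominates(a, b):
    """a, b: (attack_score, imperceptibility) -- both to be maximized."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(population):
    """Keep candidates not dominated by any other -- the survivors of one
    environmental-selection step in a multi-objective EA."""
    return [p for p in population
            if not any(dominates(q["objs"], p["objs"]) for q in population)]

population = [
    {"text": "candidate AE 1", "objs": (0.9, 0.2)},
    {"text": "candidate AE 2", "objs": (0.7, 0.8)},
    {"text": "candidate AE 3", "objs": (0.6, 0.5)},  # dominated by candidate 2
]
print([p["text"] for p in pareto_front(population)])
# -> ['candidate AE 1', 'candidate AE 2']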
1902.01439
Chenge Li
Chenge Li, Weixi Zhang, Yong Liu, Yao Wang
Very Long Term Field of View Prediction for 360-degree Video Streaming
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
360-degree videos have gained increasing popularity in recent years with the developments and advances in Virtual Reality (VR) and Augmented Reality (AR) technologies. In such applications, a user only watches a video scene within a field of view (FoV) centered in a certain direction. Predicting the future FoV over a long time horizon (more than seconds ahead) can help save bandwidth resources in on-demand video streaming while minimizing video freezing in networks with significant bandwidth variations. In this work, we treat FoV prediction as a sequence learning problem, and propose to predict the target user's future FoV not only based on the user's own past FoV center trajectory but also on other users' future FoV locations. We propose multiple prediction models based on two different FoV representations: one using FoV center trajectories and another using equirectangular heatmaps that represent the FoV center distributions. Extensive evaluations with two public datasets demonstrate that the proposed models can significantly outperform benchmark models, and that other users' FoVs are very helpful for improving long-term predictions.
[ { "created": "Mon, 4 Feb 2019 19:43:40 GMT", "version": "v1" } ]
2019-02-06
[ [ "Li", "Chenge", "" ], [ "Zhang", "Weixi", "" ], [ "Liu", "Yong", "" ], [ "Wang", "Yao", "" ] ]
360-degree videos have gained increasing popularity in recent years with the developments and advances in Virtual Reality (VR) and Augmented Reality (AR) technologies. In such applications, a user only watches a video scene within a field of view (FoV) centered in a certain direction. Predicting the future FoV over a long time horizon (more than seconds ahead) can help save bandwidth resources in on-demand video streaming while minimizing video freezing in networks with significant bandwidth variations. In this work, we treat FoV prediction as a sequence learning problem, and propose to predict the target user's future FoV not only based on the user's own past FoV center trajectory but also on other users' future FoV locations. We propose multiple prediction models based on two different FoV representations: one using FoV center trajectories and another using equirectangular heatmaps that represent the FoV center distributions. Extensive evaluations with two public datasets demonstrate that the proposed models can significantly outperform benchmark models, and that other users' FoVs are very helpful for improving long-term predictions.
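The simplest baseline version of this idea -- blending a constant-velocity extrapolation of the target user's own trajectory with other users' (known) future FoV centers -- fits in a few lines; the blending weight and toy trajectories below are illustrative, and angle wrap-around is ignored for brevity:

import numpy as np

def predict_fov(own_past, others_future, alpha=0.5):
    """own_past: (T, 2) past yaw/pitch of the target user.
    others_future: (U, 2) other users' FoV centers at the target time."""
    velocity = own_past[-1] - own_past[-2]
    extrapolated = own_past[-1] + velocity      # constant-velocity guess
    crowd = others_future.mean(axis=0)          # where everyone else looks
    return alpha * extrapolated + (1 - alpha) * crowd

own_past = np.array([[10.0, 0.0], [12.0, 0.5], [14.0, 1.0]])
others_future = np.array([[30.0, 2.0], [28.0, 1.5]])
print(predict_fov(own_past, others_future))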
1712.08291
Vivek Kulkarni
Vivek Kulkarni and William Yang Wang
TFW, DamnGina, Juvie, and Hotsie-Totsie: On the Linguistic and Social Aspects of Internet Slang
10 pages, 11 figures,4 tables
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Slang is ubiquitous on the Internet. The emergence of new social contexts like micro-blogs, question-answering forums, and social networks has enabled slang and non-standard expressions to abound on the web. Despite this, slang has been traditionally viewed as a form of non-standard language -- a form of language that is not the focus of linguistic analysis and has largely been neglected. In this work, we use UrbanDictionary to conduct the first large-scale linguistic analysis of slang and its social aspects on the Internet to yield insights into this variety of language that is increasingly used all over the world online. We begin by computationally analyzing the phonological, morphological and syntactic properties of slang. We then study linguistic patterns in four specific categories of slang, namely alphabetisms, blends, clippings, and reduplicatives. Our analysis reveals that slang demonstrates extra-grammatical rules of phonological and morphological formation that markedly distinguish it from the standard form, shedding light on its generative patterns. Next, we analyze the social aspects of slang by studying subject restriction and stereotyping in slang usage. Analyzing tens of thousands of such slang words reveals that the majority of slang on the Internet belongs to two major categories: sex and drugs. We also note that not only is slang usage not immune to prevalent social biases and prejudices, but it also reflects such biases and stereotypes more intensely than the standard variety.
[ { "created": "Fri, 22 Dec 2017 03:21:05 GMT", "version": "v1" } ]
2017-12-25
[ [ "Kulkarni", "Vivek", "" ], [ "Wang", "William Yang", "" ] ]
Slang is ubiquitous on the Internet. The emergence of new social contexts like micro-blogs, question-answering forums, and social networks has enabled slang and non-standard expressions to abound on the web. Despite this, slang has been traditionally viewed as a form of non-standard language -- a form of language that is not the focus of linguistic analysis and has largely been neglected. In this work, we use UrbanDictionary to conduct the first large-scale linguistic analysis of slang and its social aspects on the Internet to yield insights into this variety of language that is increasingly used all over the world online. We begin by computationally analyzing the phonological, morphological and syntactic properties of slang. We then study linguistic patterns in four specific categories of slang, namely alphabetisms, blends, clippings, and reduplicatives. Our analysis reveals that slang demonstrates extra-grammatical rules of phonological and morphological formation that markedly distinguish it from the standard form, shedding light on its generative patterns. Next, we analyze the social aspects of slang by studying subject restriction and stereotyping in slang usage. Analyzing tens of thousands of such slang words reveals that the majority of slang on the Internet belongs to two major categories: sex and drugs. We also note that not only is slang usage not immune to prevalent social biases and prejudices, but it also reflects such biases and stereotypes more intensely than the standard variety.
1610.07304
Bernhard C. Geiger
Roy Timo, Shirin Saeedi Bidokhti, Mich\`ele Wigger, Bernhard C. Geiger
A Rate-Distortion Approach to Caching
null
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper takes a rate-distortion approach to understanding the information-theoretic laws governing cache-aided communications systems. Specifically, we characterise the optimal tradeoffs between the delivery rate, cache capacity and reconstruction distortions for a single-user problem and some special cases of a two-user problem. Our analysis considers discrete memoryless sources, expected- and excess-distortion constraints, and separable and f-separable distortion functions. We also establish a strong converse for separable-distortion functions, and we show that lossy versions of common information (G\'{a}cs-K\"{o}rner and Wyner) play an important role in caching. Finally, we illustrate and explicitly evaluate these laws for multivariate Gaussian sources and binary symmetric sources.
[ { "created": "Mon, 24 Oct 2016 07:18:28 GMT", "version": "v1" } ]
2016-10-25
[ [ "Timo", "Roy", "" ], [ "Bidokhti", "Shirin Saeedi", "" ], [ "Wigger", "Michèle", "" ], [ "Geiger", "Bernhard C.", "" ] ]
This paper takes a rate-distortion approach to understanding the information-theoretic laws governing cache-aided communications systems. Specifically, we characterise the optimal tradeoffs between the delivery rate, cache capacity and reconstruction distortions for a single-user problem and some special cases of a two-user problem. Our analysis considers discrete memoryless sources, expected- and excess-distortion constraints, and separable and f-separable distortion functions. We also establish a strong converse for separable-distortion functions, and we show that lossy versions of common information (G\'{a}cs-K\"{o}rner and Wyner) play an important role in caching. Finally, we illustrate and explicitly evaluate these laws for multivariate Gaussian sources and binary symmetric sources.
1405.7519
Deepali Virmani
Deepali Virmani, Vikrant Malhotra, Ridhi Tyagi
Aspect Based Sentiment Analysis to Extract Meticulous Opinion Value
IJCSIT, MAY 2014
null
null
null
cs.IR cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Opinion mining and sentiment analysis is the process of identifying opinions in large unstructured/structured data and then analysing the polarity of those opinions. Opinion mining and sentiment analysis have found vast application in analysing online ratings, analysing product-based reviews, e-governance, and managing hostile content over the internet. This paper proposes an algorithm to implement aspect-level sentiment analysis. The algorithm takes as input the remarks submitted by various teachers of a student. An aspect tree is formed, which has various levels, and weights are assigned to each branch to identify the level of an aspect. The aspect value is calculated by the algorithm by means of the proposed aspect tree. A dictionary-based method is implemented to evaluate the polarity of the remark. The algorithm returns the aspect value combined with the opinion value and sentiment value, which helps in concluding the summarized value of a remark.
[ { "created": "Thu, 29 May 2014 11:05:29 GMT", "version": "v1" } ]
2014-05-30
[ [ "Virmani", "Deepali", "" ], [ "Malhotra", "Vikrant", "" ], [ "Tyagi", "Ridhi", "" ] ]
Opinion mining and sentiment analysis is the process of identifying opinions in large unstructured/structured data and then analysing the polarity of those opinions. Opinion mining and sentiment analysis have found vast application in analysing online ratings, analysing product-based reviews, e-governance, and managing hostile content over the internet. This paper proposes an algorithm to implement aspect-level sentiment analysis. The algorithm takes as input the remarks submitted by various teachers of a student. An aspect tree is formed, which has various levels, and weights are assigned to each branch to identify the level of an aspect. The aspect value is calculated by the algorithm by means of the proposed aspect tree. A dictionary-based method is implemented to evaluate the polarity of the remark. The algorithm returns the aspect value combined with the opinion value and sentiment value, which helps in concluding the summarized value of a remark.
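A toy rendering of the described pipeline, with an invented two-level aspect tree, branch weights, and polarity lexicon (the paper's actual tree and dictionary are not given here):

# Aspect tree: each branch carries a weight; deeper levels are more specific.
ASPECT_TREE = {
    "academics": {"weight": 0.6, "children": {"maths": 0.7, "science": 0.3}},
    "behaviour": {"weight": 0.4, "children": {"punctuality": 0.5, "teamwork": 0.5}},
}
LEXICON = {"excellent": 1.0, "good": 0.5, "poor": -0.5, "bad": -1.0}

def analyse(remark):
    words = remark.lower().split()
    # Aspect value: branch weight times leaf weight (last matched leaf wins
    # in this toy version).
    aspect_value = 0.0
    for branch, node in ASPECT_TREE.items():
        for leaf, leaf_w in node["children"].items():
            if leaf in words:
                aspect_value = node["weight"] * leaf_w
    # Opinion value: dictionary-based polarity, averaged over lexicon hits.
    hits = [LEXICON[w] for w in words if w in LEXICON]
    opinion = sum(hits) / len(hits) if hits else 0.0
    return {"aspect": aspect_value, "opinion": opinion,
            "sentiment": aspect_value * opinion}

print(analyse("Good progress in maths but poor teamwork"))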
2309.03308
Christoph Neuhauser
Christoph Neuhauser, Josef Stumpfegger, and R\"udiger Westermann
Adaptive Sampling of 3D Spatial Correlations for Focus+Context Visualization
Copyright 2023 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works
IEEE Transactions on Visualization and Computer Graphics, vol. 30, no. 2, pp. 1608-1623, Feb. 2024
10.1109/TVCG.2023.3326855
null
cs.GR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Visualizing spatial correlations in 3D ensembles is challenging due to the vast amounts of information that need to be conveyed. Memory and time constraints make it unfeasible to pre-compute and store the correlations between all pairs of domain points. We propose the embedding of adaptive correlation sampling into chord diagrams with hierarchical edge bundling to alleviate these constraints. Entities representing spatial regions are arranged along the circular chord layout via a space-filling curve, and Bayesian optimal sampling is used to efficiently estimate the maximum occurring correlation between any two points from different regions. Hierarchical edge bundling reduces visual clutter and emphasizes the major correlation structures. By selecting an edge, the user triggers a focus diagram in which only the two regions connected via this edge are refined and arranged in a specific way in a second chord layout. For visualizing correlations between two different variables, which are not symmetric anymore, we switch to showing a full correlation matrix. This avoids drawing the same edges twice with different correlation values. We introduce GPU implementations of both linear and non-linear correlation measures to further reduce the time that is required to generate the context and focus views, and to even enable the analysis of correlations in a 1000-member ensemble.
[ { "created": "Wed, 6 Sep 2023 18:39:30 GMT", "version": "v1" }, { "created": "Wed, 18 Oct 2023 18:34:28 GMT", "version": "v2" }, { "created": "Wed, 3 Jan 2024 20:54:18 GMT", "version": "v3" } ]
2024-01-05
[ [ "Neuhauser", "Christoph", "" ], [ "Stumpfegger", "Josef", "" ], [ "Westermann", "Rüdiger", "" ] ]
Visualizing spatial correlations in 3D ensembles is challenging due to the vast amounts of information that need to be conveyed. Memory and time constraints make it unfeasible to pre-compute and store the correlations between all pairs of domain points. We propose the embedding of adaptive correlation sampling into chord diagrams with hierarchical edge bundling to alleviate these constraints. Entities representing spatial regions are arranged along the circular chord layout via a space-filling curve, and Bayesian optimal sampling is used to efficiently estimate the maximum occurring correlation between any two points from different regions. Hierarchical edge bundling reduces visual clutter and emphasizes the major correlation structures. By selecting an edge, the user triggers a focus diagram in which only the two regions connected via this edge are refined and arranged in a specific way in a second chord layout. For visualizing correlations between two different variables, which are not symmetric anymore, we switch to showing a full correlation matrix. This avoids drawing the same edges twice with different correlation values. We introduce GPU implementations of both linear and non-linear correlation measures to further reduce the time that is required to generate the context and focus views, and to even enable the analysis of correlations in a 1000-member ensemble.
2103.15591
Vanlin Sathya
Vanlin Sathya, Muhammad Iqbal Rochman, and Monisha Ghosh
Hidden-nodes in coexisting LAA & Wi-Fi: a measurement study of real deployments
IEEE ICC 2021 Workshop on Spectrum Sharing Technology for Next-Generation Communications
null
null
null
cs.NI
http://creativecommons.org/licenses/by/4.0/
LTE-Licensed Assisted Access (LAA) networks are beginning to be deployed widely in major metropolitan areas in the US in the unlicensed 5 GHz bands, which have existing dense deployments of Wi-Fi. This provides a real-world opportunity to study the problems due to hidden-node scenarios between LAA and Wi-Fi. The hidden node problem has been well studied in the context of overlapping Wi-Fi APs. However, when Wi-Fi coexists with LAA, the hidden node problem is exacerbated since LAA cannot use the well-known Request-to-Send (RTS)/Clear-to-Send (CTS) mechanism to resolve contentions, resulting in throughput degradation for Wi-Fi. In this paper, we describe detailed measurements and conclusions from experiments on the campus of the University of Chicago, which presents a perfect hidden node scenario where Wi-Fi access points (APs) controlled by us and an LAA base-station (BS) deployed by AT&T are hidden from each other, but the clients are not. We performed careful experiments in three different regions of the coexistence area: (i) clients midway between LAA & Wi-Fi; (ii) clients close to the Wi-Fi AP; and (iii) clients close to the LAA BS. Our results show that in a situation where LAA uses an aggregate of three unlicensed channels (60 MHz bandwidth) which overlap with an 80 MHz Wi-Fi transmission, the Wi-Fi throughput at client devices suffers considerably. Overall, Wi-Fi performance is impacted by the hidden node problem more severely than LAA. In the best outdoor conditions, the throughput of LAA and Wi-Fi is reduced by 35% and 97% respectively when coexisting with each other, compared to when the other system is not present. Furthermore, we conclude that when both LAA and Wi-Fi use multiple 20 MHz channels and there are multiple Wi-Fi APs coexisting with LAA on the same set of channels, the choice of Wi-Fi primary channels can have a significant impact on LAA throughput.
[ { "created": "Mon, 29 Mar 2021 13:13:45 GMT", "version": "v1" }, { "created": "Wed, 31 Mar 2021 19:13:02 GMT", "version": "v2" } ]
2021-04-02
[ [ "Sathya", "Vanlin", "" ], [ "Rochman", "Muhammad Iqbal", "" ], [ "Ghosh", "Monisha", "" ] ]
LTE-Licensed Assisted Access (LAA) networks are beginning to be deployed widely in major metropolitan areas in the US in the unlicensed 5 GHz bands, which have existing dense deployments of Wi-Fi. This provides a real-world opportunity to study the problems due to hidden-node scenarios between LAA and Wi-Fi. The hidden node problem has been well studied in the context of overlapping Wi-Fi APs. However, when Wi-Fi coexists with LAA, the hidden node problem is exacerbated since LAA cannot use the well-known Request-to-Send (RTS)/Clear-to-Send (CTS) mechanism to resolve contentions, resulting in throughput degradation for Wi-Fi. In this paper, we describe detailed measurements and conclusions from experiments on the campus of the University of Chicago, which presents a perfect hidden node scenario where Wi-Fi access points (APs) controlled by us and an LAA base-station (BS) deployed by AT&T are hidden from each other, but the clients are not. We performed careful experiments in three different regions of the coexistence area: (i) clients midway between LAA & Wi-Fi; (ii) clients close to the Wi-Fi AP; and (iii) clients close to the LAA BS. Our results show that in a situation where LAA uses an aggregate of three unlicensed channels (60 MHz bandwidth) which overlap with an 80 MHz Wi-Fi transmission, the Wi-Fi throughput at client devices suffers considerably. Overall, Wi-Fi performance is impacted by the hidden node problem more severely than LAA. In the best outdoor conditions, the throughput of LAA and Wi-Fi is reduced by 35% and 97% respectively when coexisting with each other, compared to when the other system is not present. Furthermore, we conclude that when both LAA and Wi-Fi use multiple 20 MHz channels and there are multiple Wi-Fi APs coexisting with LAA on the same set of channels, the choice of Wi-Fi primary channels can have a significant impact on LAA throughput.
1808.04161
Hector Garcia de Marina Dr.
Zhiyong Sun and Hector Garcia de Marina and Brian D. O. Anderson and Ming Cao
Quantization effects and convergence properties of rigid formation control systems with quantized distance measurements
29 pages, International Journal of Robust and Nonlinear Control 2018
null
10.1002/rnc.4288
null
cs.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we discuss quantization effects in rigid formation control systems when target formations are described by inter-agent distances. Because of practical sensing and measurement constraints, we consider in this paper distance measurements in their quantized forms. We show that under gradient-based formation control, in the case of uniform quantization, the distance errors converge locally to a bounded set whose size depends on the quantization error, while in the case of logarithmic quantization, all distance errors converge locally to zero. A special quantizer involving the signum function is then considered with which all agents can only measure coarse distances in terms of binary information. In this case, the formation converges locally to a target formation within a finite time. Lastly, we discuss the effect of asymmetric uniform quantization on rigid formation control.
[ { "created": "Mon, 13 Aug 2018 12:02:15 GMT", "version": "v1" } ]
2018-08-14
[ [ "Sun", "Zhiyong", "" ], [ "de Marina", "Hector Garcia", "" ], [ "Anderson", "Brian D. O.", "" ], [ "Cao", "Ming", "" ] ]
In this paper, we discuss quantization effects in rigid formation control systems when target formations are described by inter-agent distances. Because of practical sensing and measurement constraints, we consider in this paper distance measurements in their quantized forms. We show that under gradient-based formation control, in the case of uniform quantization, the distance errors converge locally to a bounded set whose size depends on the quantization error, while in the case of logarithmic quantization, all distance errors converge locally to zero. A special quantizer involving the signum function is then considered with which all agents can only measure coarse distances in terms of binary information. In this case, the formation converges locally to a target formation within a finite time. Lastly, we discuss the effect of asymmetric uniform quantization on rigid formation control.
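A small numpy simulation of gradient-based distance control with a uniform quantizer illustrates the bounded-convergence behavior the abstract sketches; the triangle formation, step size, and quantization step are illustrative:

import numpy as np

def uniform_quantize(x, delta=0.05):
    return delta * np.round(x / delta)

# Three agents should form an equilateral triangle with unit side lengths.
edges = [(0, 1), (1, 2), (0, 2)]
pos = np.array([[0.0, 0.0], [1.3, 0.1], [0.4, 1.2]])

step = 0.1
for _ in range(500):
    grad = np.zeros_like(pos)
    for (i, j) in edges:
        diff = pos[i] - pos[j]
        # Each agent only sees a quantized inter-agent distance.
        e = uniform_quantize(np.linalg.norm(diff)) - 1.0
        grad[i] += e * diff
        grad[j] -= e * diff
    pos -= step * grad

errors = [abs(np.linalg.norm(pos[i] - pos[j]) - 1.0) for i, j in edges]
print("residual distance errors:", np.round(errors, 3))
# With uniform quantization the errors settle in a band whose width depends
# on delta, matching the bounded-convergence result stated above.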
2303.16485
Tao Hu
Tao Hu, Xiaogang Xu, Ruihang Chu, Jiaya Jia
TriVol: Point Cloud Rendering via Triple Volumes
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Existing learning-based methods for point cloud rendering adopt various 3D representations and feature querying mechanisms to alleviate the sparsity problem of point clouds. However, artifacts still appear in rendered images, due to the challenges in extracting continuous and discriminative 3D features from point clouds. In this paper, we present a dense yet lightweight 3D representation, named TriVol, that can be combined with NeRF to render photo-realistic images from point clouds. Our TriVol consists of triple slim volumes, each of which is encoded from the point cloud. TriVol has two advantages. First, it fuses the respective fields at different scales and thus extracts local and non-local features for discriminative representation. Second, since the volume size is greatly reduced, our 3D decoder can be inferred efficiently, allowing us to increase the resolution of the 3D space to render more point details. Extensive experiments on different benchmarks with varying kinds of scenes/objects demonstrate our framework's effectiveness compared with current approaches. Moreover, our framework has excellent generalization ability to render a category of scenes/objects without fine-tuning.
[ { "created": "Wed, 29 Mar 2023 06:34:12 GMT", "version": "v1" } ]
2023-03-30
[ [ "Hu", "Tao", "" ], [ "Xu", "Xiaogang", "" ], [ "Chu", "Ruihang", "" ], [ "Jia", "Jiaya", "" ] ]
Existing learning-based methods for point cloud rendering adopt various 3D representations and feature querying mechanisms to alleviate the sparsity problem of point clouds. However, artifacts still appear in rendered images, due to the challenges in extracting continuous and discriminative 3D features from point clouds. In this paper, we present a dense yet lightweight 3D representation, named TriVol, that can be combined with NeRF to render photo-realistic images from point clouds. Our TriVol consists of triple slim volumes, each of which is encoded from the point cloud. TriVol has two advantages. First, it fuses the respective fields at different scales and thus extracts local and non-local features for discriminative representation. Second, since the volume size is greatly reduced, our 3D decoder can be inferred efficiently, allowing us to increase the resolution of the 3D space to render more point details. Extensive experiments on different benchmarks with varying kinds of scenes/objects demonstrate our framework's effectiveness compared with current approaches. Moreover, our framework has excellent generalization ability to render a category of scenes/objects without fine-tuning.
1912.09084
Jiali Zeng
Jiali Zeng, Linfeng Song, Jinsong Su, Jun Xie, Wei Song, Jiebo Luo
Neural Simile Recognition with Cyclic Multitask Learning and Local Attention
AAAI 2020
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Simile recognition aims to detect simile sentences and to extract simile components, i.e., tenors and vehicles. It involves two subtasks: {\it simile sentence classification} and {\it simile component extraction}. Recent work has shown that standard multitask learning is effective for Chinese simile recognition, but it is still uncertain whether the mutual effects between the subtasks have been well captured by simple parameter sharing. We propose a novel cyclic multitask learning framework for neural simile recognition, which stacks the subtasks and makes them into a loop by connecting the last to the first. It iteratively performs each subtask, taking the outputs of the previous subtask as additional inputs to the current one, so that the interdependence between the subtasks can be better explored. Extensive experiments show that our framework significantly outperforms the current state-of-the-art model and our carefully designed baselines, and the gains are still remarkable when using BERT.
[ { "created": "Thu, 19 Dec 2019 09:40:19 GMT", "version": "v1" } ]
2019-12-20
[ [ "Zeng", "Jiali", "" ], [ "Song", "Linfeng", "" ], [ "Su", "Jinsong", "" ], [ "Xie", "Jun", "" ], [ "Song", "Wei", "" ], [ "Luo", "Jiebo", "" ] ]
Simile recognition aims to detect simile sentences and to extract simile components, i.e., tenors and vehicles. It involves two subtasks: {\it simile sentence classification} and {\it simile component extraction}. Recent work has shown that standard multitask learning is effective for Chinese simile recognition, but it is still uncertain whether the mutual effects between the subtasks have been well captured by simple parameter sharing. We propose a novel cyclic multitask learning framework for neural simile recognition, which stacks the subtasks and makes them into a loop by connecting the last to the first. It iteratively performs each subtask, taking the outputs of the previous subtask as additional inputs to the current one, so that the interdependence between the subtasks can be better explored. Extensive experiments show that our framework significantly outperforms the current state-of-the-art model and our carefully designed baselines, and the gains are still remarkable when using BERT.
1907.09977
Fabian Eckermann
Fabian Eckermann, Moritz Kahlert, Christian Wietfeld
Performance Analysis of C-V2X Mode 4 Communication Introducing an Open-Source C-V2X Simulator
null
null
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Autonomous vehicles, on the ground and in the air, are the next big evolution in human mobility. While autonomous driving in highway scenarios is already possible using only the vehicle's sensors, the complex scenarios of big cities with all their different traffic participants are still a vision. Cellular Vehicle-to-Everything (C-V2X) communication is a necessary enabler of this vision and an emerging field of interest in today's research. However, to the best of our knowledge, open-source simulators, which are essential for open research, do not exist yet. In this work we present our open-source C-V2X mode 4 simulator based on the discrete-event network simulator ns-3. To analyze the performance of C-V2X mode 4 using our simulator, we created a worst-case scenario and the 3GPP reference Manhattan grid scenario using the microscopic traffic simulator SUMO. We also added the WINNER+ B1 channel model to ns-3, as this is also used by 3GPP. Our results show that C-V2X is scalable to 250 vehicles within a worst-case scenario on a playground of 100 m x 100 m, with respect to the LTE rel. 14 V2X requirements. For the more realistic Manhattan grid scenario, the performance is better, as is to be expected. We also analyzed the Packet Inter-Reception time, with an outcome of at most 100 ms for more than 99 % of all transmissions. In addition, we investigated the impact of the Resource Reservation Period and the Resource Reselection Probability on the system's Packet Reception Ratio.
[ { "created": "Tue, 23 Jul 2019 16:09:34 GMT", "version": "v1" } ]
2019-07-24
[ [ "Eckermann", "Fabian", "" ], [ "Kahlert", "Moritz", "" ], [ "Wietfeld", "Christian", "" ] ]
Autonomous vehicles, on the ground and in the air, are the next big evolution in human mobility. While autonomous driving in highway scenarios is already possible using only the vehicle's sensors, the complex scenarios of big cities with all their different traffic participants are still a vision. Cellular Vehicle-to-Everything (C-V2X) communication is a necessary enabler of this vision and an emerging field of interest in today's research. However, to the best of our knowledge, open-source simulators, which are essential for open research, do not exist yet. In this work we present our open-source C-V2X mode 4 simulator based on the discrete-event network simulator ns-3. To analyze the performance of C-V2X mode 4 using our simulator, we created a worst-case scenario and the 3GPP reference Manhattan grid scenario using the microscopic traffic simulator SUMO. We also added the WINNER+ B1 channel model to ns-3, as this is also used by 3GPP. Our results show that C-V2X is scalable to 250 vehicles within a worst-case scenario on a playground of 100 m x 100 m, with respect to the LTE rel. 14 V2X requirements. For the more realistic Manhattan grid scenario, the performance is better, as is to be expected. We also analyzed the Packet Inter-Reception time, with an outcome of at most 100 ms for more than 99 % of all transmissions. In addition, we investigated the impact of the Resource Reservation Period and the Resource Reselection Probability on the system's Packet Reception Ratio.
1807.11219
Katsuki Chousa
Katsuki Chousa, Katsuhito Sudoh, Satoshi Nakamura
Training Neural Machine Translation using Word Embedding-based Loss
null
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In neural machine translation (NMT), the computational cost at the output layer increases with the size of the target-side vocabulary. Using a limited-size vocabulary instead may cause a significant decrease in translation quality. This trade-off is derived from a softmax-based loss function that handles in-dictionary words independently, in which word similarity is not considered. In this paper, we propose a novel NMT loss function that incorporates word similarity in the form of distances in a word embedding space. The proposed loss function encourages an NMT decoder to generate words close to their references in the embedding space; this helps the decoder to choose similar acceptable words when the actual best candidates are not included in the vocabulary due to its size limitation. In experiments using ASPEC Japanese-to-English and IWSLT17 English-to-French data sets, the proposed method showed improvements against a standard NMT baseline in both datasets; especially with IWSLT17 En-Fr, it achieved up to +1.72 in BLEU and +1.99 in METEOR. When the target-side vocabulary was limited to only 1,000 words, the proposed method demonstrated a substantial gain, +1.72 in METEOR with ASPEC Ja-En.
[ { "created": "Mon, 30 Jul 2018 08:11:52 GMT", "version": "v1" } ]
2018-07-31
[ [ "Chousa", "Katsuki", "" ], [ "Sudoh", "Katsuhito", "" ], [ "Nakamura", "Satoshi", "" ] ]
In neural machine translation (NMT), the computational cost at the output layer increases with the size of the target-side vocabulary. Using a limited-size vocabulary instead may cause a significant decrease in translation quality. This trade-off is derived from a softmax-based loss function that handles in-dictionary words independently, in which word similarity is not considered. In this paper, we propose a novel NMT loss function that incorporates word similarity in the form of distances in a word embedding space. The proposed loss function encourages an NMT decoder to generate words close to their references in the embedding space; this helps the decoder to choose similar acceptable words when the actual best candidates are not included in the vocabulary due to its size limitation. In experiments using ASPEC Japanese-to-English and IWSLT17 English-to-French data sets, the proposed method showed improvements against a standard NMT baseline in both datasets; especially with IWSLT17 En-Fr, it achieved up to +1.72 in BLEU and +1.99 in METEOR. When the target-side vocabulary was limited to only 1,000 words, the proposed method demonstrated a substantial gain, +1.72 in METEOR with ASPEC Ja-En.
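One illustrative way to realize such a loss, assuming PyTorch: penalize the distance between the decoder's expected output embedding (a softmax-weighted mixture) and the reference word's embedding. The vocabulary size, embedding dimension, and cosine-distance form are assumptions for the sketch, not the paper's exact loss:

import torch
import torch.nn.functional as F

vocab, dim = 1000, 64
emb = torch.nn.Embedding(vocab, dim)

def embedding_loss(logits, target_ids):
    """Distance in embedding space between the decoder's expected output
    embedding and the reference embedding."""
    probs = F.softmax(logits, dim=-1)            # (batch, vocab)
    expected = probs @ emb.weight                # soft mixture of embeddings
    reference = emb(target_ids)                  # (batch, dim)
    return (1 - F.cosine_similarity(expected, reference)).mean()

logits = torch.randn(4, vocab, requires_grad=True)
target = torch.tensor([3, 17, 256, 999])
loss = embedding_loss(logits, target)
loss.backward()
print(float(loss))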
2403.09880
Riccardo Marchesin
Dario Maddaloni, Riccardo Marchesin, Roberto Zunino
How To Save Fees in Bitcoin Smart Contracts: a Simple Optimistic Off-chain Protocol
null
null
null
null
cs.CR
http://creativecommons.org/licenses/by/4.0/
We consider the execution of smart contracts on Bitcoin. There, every contract step corresponds to appending to the blockchain a new transaction that spends the output representing the old contract state, creating a new one for the updated state. This standard procedure requires the contract participants to pay transaction fees for every execution step. In this paper, we introduce a protocol that moves most of the execution of a Bitcoin contract off-chain. When all participants follow this protocol, they are able to save on transaction fees. By contrast, in the presence of adversaries, any honest participant is still able to enforce the correct execution of the contract, according to its original semantics.
[ { "created": "Thu, 14 Mar 2024 21:20:36 GMT", "version": "v1" }, { "created": "Mon, 29 Apr 2024 11:47:27 GMT", "version": "v2" } ]
2024-04-30
[ [ "Maddaloni", "Dario", "" ], [ "Marchesin", "Riccardo", "" ], [ "Zunino", "Roberto", "" ] ]
We consider the execution of smart contracts on Bitcoin. There, every contract step corresponds to appending to the blockchain a new transaction that spends the output representing the old contract state, creating a new one for the updated state. This standard procedure requires the contract participants to pay transaction fees for every execution step. In this paper, we introduce a protocol that moves most of the execution of a Bitcoin contract off-chain. When all participants follow this protocol, they are able to save on transaction fees. By contrast, in the presence of adversaries, any honest participant is still able to enforce the correct execution of the contract, according to its original semantics.
2405.09946
Hugo Herbelin
Hugo Herbelin (PICUBE, IRIF)
On the logical structure of some maximality and well-foundedness principles equivalent to choice principles
null
null
null
null
cs.LO math.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the logical structure of Teichm{\"u}ller-Tukey lemma, a maximality principle equivalent to the axiom of choice and show that it corresponds to the generalisation to arbitrary cardinals of update induction, a well-foundedness principle from constructive mathematics classically equivalent to the axiom of dependent choice. From there, we state general forms of maximality and well-foundedness principles equivalent to the axiom of choice, including a variant of Zorn's lemma. A comparison with the general class of choice and bar induction principles given by Brede and the first author is initiated.
[ { "created": "Thu, 16 May 2024 09:51:41 GMT", "version": "v1" } ]
2024-05-17
[ [ "Herbelin", "Hugo", "", "PICUBE, IRIF" ] ]
We study the logical structure of Teichm{\"u}ller-Tukey lemma, a maximality principle equivalent to the axiom of choice and show that it corresponds to the generalisation to arbitrary cardinals of update induction, a well-foundedness principle from constructive mathematics classically equivalent to the axiom of dependent choice. From there, we state general forms of maximality and well-foundedness principles equivalent to the axiom of choice, including a variant of Zorn's lemma. A comparison with the general class of choice and bar induction principles given by Brede and the first author is initiated.
2307.07167
Olukorede Fakorede
Olukorede Fakorede, Ashutosh Kumar Nirala, Modeste Atsague, Jin Tian
Vulnerability-Aware Instance Reweighting For Adversarial Training
null
null
null
null
cs.LG cs.AI cs.CV
http://creativecommons.org/licenses/by/4.0/
Adversarial Training (AT) has been found to substantially improve the robustness of deep learning classifiers against adversarial attacks. AT involves obtaining robustness by including adversarial examples in training a classifier. Most variants of AT algorithms treat every training example equally. However, recent works have shown that better performance is achievable by treating them unequally. In addition, it has been observed that AT exerts an uneven influence on different classes in a training set and unfairly hurts examples corresponding to classes that are inherently harder to classify. Consequently, various reweighting schemes have been proposed that assign unequal weights to robust losses of individual examples in a training set. In this work, we propose a novel instance-wise reweighting scheme. It considers the vulnerability of each natural example and the resulting information loss on its adversarial counterpart occasioned by adversarial attacks. Through extensive experiments, we show that our proposed method significantly improves over existing reweighting schemes, especially against strong white and black-box attacks.
[ { "created": "Fri, 14 Jul 2023 05:31:32 GMT", "version": "v1" } ]
2023-07-17
[ [ "Fakorede", "Olukorede", "" ], [ "Nirala", "Ashutosh Kumar", "" ], [ "Atsague", "Modeste", "" ], [ "Tian", "Jin", "" ] ]
Adversarial Training (AT) has been found to substantially improve the robustness of deep learning classifiers against adversarial attacks. AT involves obtaining robustness by including adversarial examples in training a classifier. Most variants of AT algorithms treat every training example equally. However, recent works have shown that better performance is achievable by treating them unequally. In addition, it has been observed that AT exerts an uneven influence on different classes in a training set and unfairly hurts examples corresponding to classes that are inherently harder to classify. Consequently, various reweighting schemes have been proposed that assign unequal weights to robust losses of individual examples in a training set. In this work, we propose a novel instance-wise reweighting scheme. It considers the vulnerability of each natural example and the resulting information loss on its adversarial counterpart occasioned by adversarial attacks. Through extensive experiments, we show that our proposed method significantly improves over existing reweighting schemes, especially against strong white and black-box attacks.
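A hedged sketch of an instance-reweighting step in this spirit, assuming PyTorch: vulnerability is proxied here by the KL divergence between predictions on the clean and adversarial inputs, and the proxy, temperature, and normalization are illustrative choices, not the paper's scheme:

import torch
import torch.nn.functional as F

def reweighted_robust_loss(logits_clean, logits_adv, labels, temp=1.0):
    """Weight per-example adversarial losses by vulnerability, proxied by
    the KL divergence between clean and adversarial predictions."""
    p_clean = F.softmax(logits_clean, dim=-1)
    log_p_adv = F.log_softmax(logits_adv, dim=-1)
    kl = F.kl_div(log_p_adv, p_clean, reduction="none").sum(-1)   # per example
    weights = F.softmax(kl / temp, dim=0) * len(kl)               # mean weight 1
    per_example = F.cross_entropy(logits_adv, labels, reduction="none")
    return (weights.detach() * per_example).mean()

logits_clean = torch.randn(8, 10)
logits_adv = torch.randn(8, 10, requires_grad=True)
labels = torch.randint(0, 10, (8,))
loss = reweighted_robust_loss(logits_clean, logits_adv, labels)
loss.backward()
print(float(loss))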
2102.00697
Milad Sefidgaran
Milad Sefidgaran and Aslan Tchamkerten
Zero-Error Sum Modulo Two with a Common Observation
Accepted for presentation at IEEE ITW 2020
null
null
null
cs.IT math.IT
http://creativecommons.org/licenses/by/4.0/
This paper investigates the classical modulo two sum problem in source coding, but with a common observation: a transmitter observes $(X,Z)$, the other transmitter observes $(Y,Z)$, and the receiver wants to compute $X \oplus Y$ without error. Through a coupling argument, this paper establishes a new lower bound on the sum-rate when $X-Z-Y$ forms a Markov chain.
[ { "created": "Mon, 1 Feb 2021 08:31:00 GMT", "version": "v1" }, { "created": "Mon, 22 Mar 2021 12:53:07 GMT", "version": "v2" } ]
2021-03-23
[ [ "Sefidgaran", "Milad", "" ], [ "Tchamkerten", "Aslan", "" ] ]
This paper investigates the classical modulo two sum problem in source coding, but with a common observation: a transmitter observes $(X,Z)$, the other transmitter observes $(Y,Z)$, and the receiver wants to compute $X \oplus Y$ without error. Through a coupling argument, this paper establishes a new lower bound on the sum-rate when $X-Z-Y$ forms a Markov chain.
1904.01575
Cheng-I Lai
Cheng-I Lai
Contrastive Predictive Coding Based Feature for Automatic Speaker Verification
null
null
null
null
cs.CL cs.LG cs.SD eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This thesis describes our ongoing work on Contrastive Predictive Coding (CPC) features for speaker verification. CPC is a recently proposed representation learning framework based on predictive coding and noise contrastive estimation. We focus on incorporating CPC features into the standard automatic speaker verification systems, and we present our methods, experiments, and analysis. This thesis also details necessary background knowledge in past and recent work on automatic speaker verification systems, conventional speech features, and the motivation and techniques behind CPC.
[ { "created": "Mon, 1 Apr 2019 23:54:08 GMT", "version": "v1" } ]
2019-04-04
[ [ "Lai", "Cheng-I", "" ] ]
This thesis describes our ongoing work on Contrastive Predictive Coding (CPC) features for speaker verification. CPC is a recently proposed representation learning framework based on predictive coding and noise contrastive estimation. We focus on incorporating CPC features into the standard automatic speaker verification systems, and we present our methods, experiments, and analysis. This thesis also details necessary background knowledge in past and recent work on automatic speaker verification systems, conventional speech features, and the motivation and techniques behind CPC.
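A minimal sketch of the CPC training objective (InfoNCE) that underlies these features, assuming PyTorch; the batch and dimension sizes are illustrative, and in-batch rows serve as the contrastive negatives:

import torch
import torch.nn.functional as F

def info_nce(context, future_latents, W):
    """context: (batch, c_dim); future_latents: (batch, z_dim).
    Each row's true future is the positive; other rows act as negatives."""
    pred = context @ W                       # (batch, z_dim) predictions
    scores = pred @ future_latents.t()       # (batch, batch) similarities
    labels = torch.arange(scores.size(0))    # positives sit on the diagonal
    return F.cross_entropy(scores, labels)

batch, c_dim, z_dim = 16, 128, 64
W = torch.randn(c_dim, z_dim, requires_grad=True)
context = torch.randn(batch, c_dim)
future = torch.randn(batch, z_dim)
loss = info_nce(context, future, W)
loss.backward()
print(float(loss))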
2202.03347
Yonghyun Jeong Mr
Yonghyun Jeong, Doyeon Kim, Youngmin Ro, Jongwon Choi
FrePGAN: Robust Deepfake Detection Using Frequency-level Perturbations
null
null
null
null
cs.CV cs.LG eess.IV
http://creativecommons.org/licenses/by/4.0/
Various deepfake detectors have been proposed, but challenges still remain in detecting images of unknown categories or GAN models outside of the training settings. Such issues arise from overfitting, which our own analysis and previous studies trace to the frequency-level artifacts in generated images. We find that ignoring the frequency-level artifacts can improve the detector's generalization across various GAN models, but it can reduce the model's performance for the trained GAN models. Thus, we design a framework to generalize the deepfake detector for both known and unseen GAN models. Our framework generates frequency-level perturbation maps to make the generated images indistinguishable from the real images. By updating the deepfake detector along with the training of the perturbation generator, our model is trained to detect the frequency-level artifacts in the initial iterations and consider the image-level irregularities in the last iterations. For the experiments, we design new test scenarios that vary from the training settings in GAN models, color manipulations, and object categories. Numerous experiments validate the state-of-the-art performance of our deepfake detector.
[ { "created": "Mon, 7 Feb 2022 16:45:11 GMT", "version": "v1" } ]
2022-02-08
[ [ "Jeong", "Yonghyun", "" ], [ "Kim", "Doyeon", "" ], [ "Ro", "Youngmin", "" ], [ "Choi", "Jongwon", "" ] ]
Various deepfake detectors have been proposed, but challenges still remain in detecting images of unknown categories or GAN models outside of the training settings. Such issues arise from overfitting, which our own analysis and previous studies trace to the frequency-level artifacts in generated images. We find that ignoring the frequency-level artifacts can improve the detector's generalization across various GAN models, but it can reduce the model's performance for the trained GAN models. Thus, we design a framework to generalize the deepfake detector for both known and unseen GAN models. Our framework generates frequency-level perturbation maps to make the generated images indistinguishable from the real images. By updating the deepfake detector along with the training of the perturbation generator, our model is trained to detect the frequency-level artifacts in the initial iterations and consider the image-level irregularities in the last iterations. For the experiments, we design new test scenarios that vary from the training settings in GAN models, color manipulations, and object categories. Numerous experiments validate the state-of-the-art performance of our deepfake detector.
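A minimal stand-in for a frequency-level perturbation, using numpy: transform each channel with an FFT, jitter the magnitude spectrum, and invert. The learned perturbation generator and the detector are omitted, and the random noise model is an assumption made for the sketch:

import numpy as np

def frequency_perturb(img, strength=0.05, seed=0):
    """Perturb an image's magnitude spectrum (per channel) and invert.
    A stand-in for a learned frequency-level perturbation map."""
    rng = np.random.default_rng(seed)
    out = np.empty_like(img)
    for c in range(img.shape[-1]):
        spec = np.fft.fft2(img[..., c])
        mag, phase = np.abs(spec), np.angle(spec)
        mag *= 1.0 + strength * rng.standard_normal(mag.shape)
        out[..., c] = np.real(np.fft.ifft2(mag * np.exp(1j * phase)))
    return np.clip(out, 0.0, 1.0)

img = np.random.rand(64, 64, 3)
print(np.abs(frequency_perturb(img) - img).mean())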
1610.00044
Oussama Habachi
Oussama Habachi, Yezekael Hayel and Rachid El-Azouzi
Optimal Energy-Delay Tradeoff for Opportunistic Spectrum Access in Cognitive Radio Networks
null
null
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cognitive radio (CR) has been considered a promising technology to enhance spectrum efficiency via opportunistic transmission at the link level. Basic CR features allow secondary users (SUs) to transmit only when the licensed primary channel is not occupied by primary users (PUs). However, waiting for an idle time slot may incur large packet delays and high energy consumption. We further consider that the SU may decide, at any moment, to use another dedicated communication channel (3G) in order to transmit its packets. Thus, we consider an Opportunistic Spectrum Access (OSA) mechanism that takes into account packet delay and energy consumption. We formulate the OSA problem as a Partially Observable Markov Decision Process (POMDP) by explicitly considering the energy consumption as well as packet delay, which are often ignored in existing OSA solutions. Specifically, we consider a POMDP with an average reward criterion. We derive structural properties of the value function and show the existence of optimal strategies in the class of threshold strategies. For implementation purposes, we propose online learning mechanisms that estimate the PU activity and determine the appropriate threshold strategy on the fly. Numerical illustrations validate our theoretical findings.
[ { "created": "Fri, 30 Sep 2016 22:06:59 GMT", "version": "v1" } ]
2016-10-04
[ [ "Habachi", "Oussama", "" ], [ "Hayel", "Yezekael", "" ], [ "El-Azouzi", "Rachid", "" ] ]
Cognitive radio (CR) has been considered a promising technology to enhance spectrum efficiency via opportunistic transmission at the link level. Basic CR features allow secondary users (SUs) to transmit only when the licensed primary channel is not occupied by primary users (PUs). However, waiting for an idle time slot may incur large packet delays and high energy consumption. We further consider that the SU may decide, at any moment, to use another dedicated communication channel (3G) in order to transmit its packets. Thus, we consider an Opportunistic Spectrum Access (OSA) mechanism that takes into account packet delay and energy consumption. We formulate the OSA problem as a Partially Observable Markov Decision Process (POMDP) by explicitly considering the energy consumption as well as packet delay, which are often ignored in existing OSA solutions. Specifically, we consider a POMDP with an average reward criterion. We derive structural properties of the value function and show the existence of optimal strategies in the class of threshold strategies. For implementation purposes, we propose online learning mechanisms that estimate the PU activity and determine the appropriate threshold strategy on the fly. Numerical illustrations validate our theoretical findings.
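A toy threshold strategy of the kind whose optimality the paper establishes: transmit on the licensed channel when the belief that it is idle exceeds a threshold, and fall back to the dedicated 3G link once the delay budget is exceeded; all parameter values below are illustrative:

def osa_action(belief_idle, queue_delay, idle_threshold=0.7, delay_budget=50):
    """Threshold strategy over the POMDP belief: transmit on the licensed
    channel when the belief it is idle is high enough; otherwise wait,
    unless the packet delay budget is exceeded, in which case use 3G."""
    if queue_delay > delay_budget:
        return "transmit_3G"          # pay the energy/price, bound the delay
    if belief_idle >= idle_threshold:
        return "transmit_primary"     # opportunistic spectrum access
    return "stay_idle"                # save energy, wait for a better belief

for b, d in [(0.9, 10), (0.4, 10), (0.4, 80)]:
    print(b, d, "->", osa_action(b, d))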
2310.14540
Yutaro Yamada
Yutaro Yamada, Yihan Bao, Andrew K. Lampinen, Jungo Kasai, Ilker Yildirim
Evaluating Spatial Understanding of Large Language Models
Accepted to TMLR 2024. Our code and data are available at https://github.com/runopti/SpatialEvalLLM, https://huggingface.co/datasets/yyamada/SpatialEvalLLM
null
null
null
cs.CL cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Large language models (LLMs) show remarkable capabilities across a variety of tasks. Despite the models only seeing text in training, several recent studies suggest that LLM representations implicitly capture aspects of the underlying grounded concepts. Here, we explore LLM representations of a particularly salient kind of grounded knowledge -- spatial relationships. We design natural-language navigation tasks and evaluate the ability of LLMs, in particular GPT-3.5-turbo, GPT-4, and Llama2 series models, to represent and reason about spatial structures. These tasks reveal substantial variability in LLM performance across different spatial structures, including square, hexagonal, and triangular grids, rings, and trees. In extensive error analysis, we find that LLMs' mistakes reflect both spatial and non-spatial factors. These findings suggest that LLMs appear to capture certain aspects of spatial structure implicitly, but room for improvement remains.
[ { "created": "Mon, 23 Oct 2023 03:44:40 GMT", "version": "v1" }, { "created": "Tue, 5 Mar 2024 05:02:54 GMT", "version": "v2" }, { "created": "Sat, 13 Apr 2024 01:59:06 GMT", "version": "v3" } ]
2024-04-16
[ [ "Yamada", "Yutaro", "" ], [ "Bao", "Yihan", "" ], [ "Lampinen", "Andrew K.", "" ], [ "Kasai", "Jungo", "" ], [ "Yildirim", "Ilker", "" ] ]
Large language models (LLMs) show remarkable capabilities across a variety of tasks. Despite the models only seeing text in training, several recent studies suggest that LLM representations implicitly capture aspects of the underlying grounded concepts. Here, we explore LLM representations of a particularly salient kind of grounded knowledge -- spatial relationships. We design natural-language navigation tasks and evaluate the ability of LLMs, in particular GPT-3.5-turbo, GPT-4, and Llama2 series models, to represent and reason about spatial structures. These tasks reveal substantial variability in LLM performance across different spatial structures, including square, hexagonal, and triangular grids, rings, and trees. In extensive error analysis, we find that LLMs' mistakes reflect both spatial and non-spatial factors. These findings suggest that LLMs appear to capture certain aspects of spatial structure implicitly, but room for improvement remains.
1602.03350
Francesco Renna
Francesco Renna, Joseph Doyle, Vasileios Giotsas, Yiannis Andreopoulos
Query Processing For The Internet-of-Things: Coupling Of Device Energy Consumption And Cloud Infrastructure Billing
To be presented at the 1st IEEE International Conference on Internet-of-Things Design and Implementation (IoTDI 2016)
null
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Audio/visual recognition and retrieval applications have recently garnered significant attention within Internet-of-Things (IoT) oriented services, given that video cameras and audio processing chipsets are now ubiquitous even in low-end embedded systems. In the most typical scenario for such services, each device extracts audio/visual features and compacts them into feature descriptors, which comprise media queries. These queries are uploaded to a remote cloud computing service that performs content matching for classification or retrieval applications. Two of the most crucial aspects for such services are: (i) controlling the device energy consumption when using the service; (ii) reducing the billing cost incurred from the cloud infrastructure provider. In this paper we derive analytic conditions for the optimal coupling between the device energy consumption and the incurred cloud infrastructure billing. Our framework encapsulates: the energy consumption to produce and transmit audio/visual queries, the billing rates of the cloud infrastructure, the number of devices concurrently connected to the same cloud server, and the statistics of the query data production volume per device. Our analytic results are validated via a deployment with: (i) the device side comprising compact image descriptors (queries) computed on Beaglebone Linux embedded platforms and transmitted to Amazon Web Services (AWS) Simple Storage Service; (ii) the cloud side carrying out image similarity detection via AWS Elastic Compute Cloud (EC2) spot instances, with the AWS Auto Scaling being used to control the number of instances according to the demand.
[ { "created": "Wed, 10 Feb 2016 12:32:27 GMT", "version": "v1" } ]
2016-02-11
[ [ "Renna", "Francesco", "" ], [ "Doyle", "Joseph", "" ], [ "Giotsas", "Vasileios", "" ], [ "Andreopoulos", "Yiannis", "" ] ]
Audio/visual recognition and retrieval applications have recently garnered significant attention within Internet-of-Things (IoT) oriented services, given that video cameras and audio processing chipsets are now ubiquitous even in low-end embedded systems. In the most typical scenario for such services, each device extracts audio/visual features and compacts them into feature descriptors, which comprise media queries. These queries are uploaded to a remote cloud computing service that performs content matching for classification or retrieval applications. Two of the most crucial aspects for such services are: (i) controlling the device energy consumption when using the service; (ii) reducing the billing cost incurred from the cloud infrastructure provider. In this paper we derive analytic conditions for the optimal coupling between the device energy consumption and the incurred cloud infrastructure billing. Our framework encapsulates: the energy consumption to produce and transmit audio/visual queries, the billing rates of the cloud infrastructure, the number of devices concurrently connected to the same cloud server, and the statistics of the query data production volume per device. Our analytic results are validated via a deployment with: (i) the device side comprising compact image descriptors (queries) computed on Beaglebone Linux embedded platforms and transmitted to Amazon Web Services (AWS) Simple Storage Service; (ii) the cloud side carrying out image similarity detection via AWS Elastic Compute Cloud (EC2) spot instances, with the AWS Auto Scaling being used to control the number of instances according to the demand.
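To make the coupling concrete, the following Python sketch pairs a per-device energy model with an auto-scaled cloud billing model; the linear energy terms, the ceil-based instance count, and every constant are illustrative assumptions, not the paper's analytic conditions.

```python
import math

# Hedged sketch: coupled device-energy and cloud-billing cost model.
# All functional forms and constants are assumptions for illustration.

def device_energy_joules(queries_per_hour, bytes_per_query,
                         e_compute_per_query=0.8, e_tx_per_byte=5e-6):
    """Energy per device per hour: feature extraction plus transmission."""
    return queries_per_hour * (e_compute_per_query + e_tx_per_byte * bytes_per_query)

def cloud_bill_per_hour(total_queries_per_hour, queries_per_instance_hour=50_000,
                        price_per_instance_hour=0.05):
    """Auto-scaling modeled as ceil(load / capacity) spot instances."""
    instances = math.ceil(total_queries_per_hour / queries_per_instance_hour)
    return instances * price_per_instance_hour

n_devices, q_rate, q_bytes = 200, 120, 4_000
print("per-device energy (J/h):", device_energy_joules(q_rate, q_bytes))
print("cloud bill ($/h):", cloud_bill_per_hour(n_devices * q_rate))
```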
2009.09393
Densen Puthussery
Hrishikesh P.S., Densen Puthussery, Melvin Kuriakose, Jiji C.V
Transform Domain Pyramidal Dilated Convolution Networks For Restoration of Under Display Camera Images
Presented at RLQ-TOD workshop at ECCV 2020, 14 pages
null
null
null
cs.CV
http://creativecommons.org/publicdomain/zero/1.0/
Under-display camera (UDC) is a novel technology that can make the digital imaging experience in handheld devices seamless by providing a large screen-to-body ratio. UDC images are severely degraded owing to the camera's position beneath the display screen. This work addresses the restoration of images degraded as a result of UDC imaging. Two different networks are proposed for the restoration of images taken with two types of UDC technologies. The first method uses a pyramidal dilated convolution within a wavelet-decomposed convolutional neural network for a pentile-organic LED (P-OLED) based display system. The second method employs pyramidal dilated convolution within a discrete cosine transform based dual-domain network to restore images taken using a transparent-organic LED (T-OLED) based UDC system. The first method produced very good quality restored images and was the winning entry in the European Conference on Computer Vision (ECCV) 2020 challenge on image restoration for Under-display Camera - Track 2 - P-OLED, evaluated based on PSNR and SSIM. The second method scored fourth position in Track 1 (T-OLED) of the challenge, evaluated based on the same metrics.
[ { "created": "Sun, 20 Sep 2020 09:26:10 GMT", "version": "v1" } ]
2020-09-22
[ [ "S.", "Hrishikesh P.", "" ], [ "Puthussery", "Densen", "" ], [ "Kuriakose", "Melvin", "" ], [ "C.", "Jiji", "V" ] ]
Under-display camera (UDC) is a novel technology that can make the digital imaging experience in handheld devices seamless by providing a large screen-to-body ratio. UDC images are severely degraded owing to the camera's position beneath the display screen. This work addresses the restoration of images degraded as a result of UDC imaging. Two different networks are proposed for the restoration of images taken with two types of UDC technologies. The first method uses a pyramidal dilated convolution within a wavelet-decomposed convolutional neural network for a pentile-organic LED (P-OLED) based display system. The second method employs pyramidal dilated convolution within a discrete cosine transform based dual-domain network to restore images taken using a transparent-organic LED (T-OLED) based UDC system. The first method produced very good quality restored images and was the winning entry in the European Conference on Computer Vision (ECCV) 2020 challenge on image restoration for Under-display Camera - Track 2 - P-OLED, evaluated based on PSNR and SSIM. The second method scored fourth position in Track 1 (T-OLED) of the challenge, evaluated based on the same metrics.
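As a rough illustration of the shared building block, here is a hedged PyTorch sketch of a pyramidal dilated convolution: parallel 3x3 convolutions at increasing dilation rates whose outputs are fused. Channel counts, the dilation pyramid, and the residual fusion are assumptions; in the paper such blocks sit inside wavelet- and DCT-domain networks.

```python
import torch
import torch.nn as nn

class PyramidalDilatedConv(nn.Module):
    """Parallel dilated 3x3 branches with 1x1 fusion and a residual path."""
    def __init__(self, channels, dilations=(1, 2, 4, 8)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, kernel_size=3,
                      padding=d, dilation=d)   # padding=d keeps spatial size
            for d in dilations
        )
        self.fuse = nn.Conv2d(channels * len(dilations), channels, kernel_size=1)

    def forward(self, x):
        feats = [torch.relu(b(x)) for b in self.branches]
        return self.fuse(torch.cat(feats, dim=1)) + x  # residual fusion

block = PyramidalDilatedConv(channels=16)
print(block(torch.randn(1, 16, 64, 64)).shape)  # torch.Size([1, 16, 64, 64])
```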
2312.13032
Weigang Lu
Weigang Lu, Ziyu Guan, Wei Zhao, Yaming Yang, Long Jin
NodeMixup: Tackling Under-Reaching for Graph Neural Networks
Accepted by AAAI-24
null
null
null
cs.LG cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Graph Neural Networks (GNNs) have become mainstream methods for solving the semi-supervised node classification problem. However, due to the uneven location distribution of labeled nodes in the graph, labeled nodes are only accessible to a small portion of unlabeled nodes, leading to the \emph{under-reaching} issue. In this study, we first reveal under-reaching by conducting an empirical investigation on various well-known graphs. Then, we demonstrate through systematic experimental analysis that under-reaching results in unsatisfactory distribution alignment between labeled and unlabeled nodes, significantly degrading GNNs' performance. To tackle under-reaching for GNNs, we propose an architecture-agnostic method dubbed NodeMixup. The fundamental idea is to (1) increase the reachability of labeled nodes by labeled-unlabeled pair mixup, (2) leverage graph structures via fusing the neighbor connections of intra-class node pairs to improve the performance gains of mixup, and (3) use neighbor label distribution similarity incorporating node degrees to determine sampling weights for node mixup. Extensive experiments demonstrate the efficacy of NodeMixup in assisting GNNs in handling under-reaching. The source code is available at \url{https://github.com/WeigangLu/NodeMixup}.
[ { "created": "Wed, 20 Dec 2023 13:56:27 GMT", "version": "v1" }, { "created": "Thu, 21 Dec 2023 03:02:35 GMT", "version": "v2" } ]
2023-12-22
[ [ "Lu", "Weigang", "" ], [ "Guan", "Ziyu", "" ], [ "Zhao", "Wei", "" ], [ "Yang", "Yaming", "" ], [ "Jin", "Long", "" ] ]
Graph Neural Networks (GNNs) have become mainstream methods for solving the semi-supervised node classification problem. However, due to the uneven location distribution of labeled nodes in the graph, labeled nodes are only accessible to a small portion of unlabeled nodes, leading to the \emph{under-reaching} issue. In this study, we first reveal under-reaching by conducting an empirical investigation on various well-known graphs. Then, we demonstrate through systematic experimental analysis that under-reaching results in unsatisfactory distribution alignment between labeled and unlabeled nodes, significantly degrading GNNs' performance. To tackle under-reaching for GNNs, we propose an architecture-agnostic method dubbed NodeMixup. The fundamental idea is to (1) increase the reachability of labeled nodes by labeled-unlabeled pair mixup, (2) leverage graph structures via fusing the neighbor connections of intra-class node pairs to improve the performance gains of mixup, and (3) use neighbor label distribution similarity incorporating node degrees to determine sampling weights for node mixup. Extensive experiments demonstrate the efficacy of NodeMixup in assisting GNNs in handling under-reaching. The source code is available at \url{https://github.com/WeigangLu/NodeMixup}.
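A hedged sketch of the core mixup step follows: interpolating a labeled node's features and label with an unlabeled node's features and pseudo-label using a Beta-distributed coefficient. The neighbor-fusion and degree-aware sampling components of NodeMixup are omitted, and all shapes are illustrative.

```python
import torch

def node_mixup(x_labeled, y_labeled, x_unlabeled, y_pseudo, alpha=1.0):
    """Interpolate labeled and unlabeled node features and (soft) labels."""
    lam = torch.distributions.Beta(alpha, alpha).sample()
    x_mix = lam * x_labeled + (1 - lam) * x_unlabeled
    y_mix = lam * y_labeled + (1 - lam) * y_pseudo
    return x_mix, y_mix

x_l, x_u = torch.randn(8, 64), torch.randn(8, 64)
y_l = torch.nn.functional.one_hot(torch.randint(0, 7, (8,)), 7).float()
y_u = torch.softmax(torch.randn(8, 7), dim=1)  # pseudo-labels from a GNN
x_mix, y_mix = node_mixup(x_l, y_l, x_u, y_u)
print(x_mix.shape, y_mix.shape)
```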
2307.04094
Lukasz Korycki
Lukasz Korycki, Bartosz Krawczyk
Class-Incremental Mixture of Gaussians for Deep Continual Learning
null
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Continual learning models for stationary data focus on learning and retaining concepts coming to them in a sequential manner. In the most generic class-incremental environment, we have to be ready to deal with classes coming one by one, without any higher-level grouping. This requirement invalidates many previously proposed methods and forces researchers to look for more flexible alternative approaches. In this work, we follow the idea of centroid-driven methods and propose end-to-end incorporation of the mixture of Gaussians model into the continual learning framework. By employing the gradient-based approach and designing losses capable of learning discriminative features while avoiding degenerate solutions, we successfully combine the mixture model with a deep feature extractor allowing for joint optimization and adjustments in the latent space. Additionally, we show that our model can effectively learn in memory-free scenarios with fixed extractors. In the conducted experiments, we empirically demonstrate the effectiveness of the proposed solutions and exhibit the competitiveness of our model when compared with state-of-the-art continual learning baselines evaluated in the context of image classification problems.
[ { "created": "Sun, 9 Jul 2023 04:33:19 GMT", "version": "v1" } ]
2023-07-11
[ [ "Korycki", "Lukasz", "" ], [ "Krawczyk", "Bartosz", "" ] ]
Continual learning models for stationary data focus on learning and retaining concepts coming to them in a sequential manner. In the most generic class-incremental environment, we have to be ready to deal with classes coming one by one, without any higher-level grouping. This requirement invalidates many previously proposed methods and forces researchers to look for more flexible alternative approaches. In this work, we follow the idea of centroid-driven methods and propose end-to-end incorporation of the mixture of Gaussians model into the continual learning framework. By employing the gradient-based approach and designing losses capable of learning discriminative features while avoiding degenerate solutions, we successfully combine the mixture model with a deep feature extractor allowing for joint optimization and adjustments in the latent space. Additionally, we show that our model can effectively learn in memory-free scenarios with fixed extractors. In the conducted experiments, we empirically demonstrate the effectiveness of the proposed solutions and exhibit the competitiveness of our model when compared with state-of-the-art continual learning baselines evaluated in the context of image classification problems.
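The central ingredient is a mixture-of-Gaussians component trainable by gradient descent. Below is a minimal PyTorch sketch of such a head on top of a feature extractor, with one diagonal Gaussian per class trained by maximizing the responsibility of the true class; this illustrates the idea only and is not the paper's exact loss design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GaussianMixtureHead(nn.Module):
    """One learnable diagonal Gaussian per class in the latent space."""
    def __init__(self, n_classes, dim):
        super().__init__()
        self.means = nn.Parameter(torch.randn(n_classes, dim))
        self.log_vars = nn.Parameter(torch.zeros(n_classes, dim))

    def log_probs(self, z):                      # z: (batch, dim)
        diff = z.unsqueeze(1) - self.means        # (batch, classes, dim)
        # Unnormalized diagonal-Gaussian log-densities (constant dropped).
        return -0.5 * ((diff ** 2) / self.log_vars.exp()
                       + self.log_vars).sum(-1)

    def loss(self, z, y):
        # Softmax over log-densities = class responsibilities (equal priors).
        return F.cross_entropy(self.log_probs(z), y)

head = GaussianMixtureHead(n_classes=10, dim=32)
z, y = torch.randn(16, 32), torch.randint(0, 10, (16,))
print(head.loss(z, y).item())
```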
2003.10472
Alexander Beasley
Alexander E. Beasley
A distributed memory, local configuration technique for re-configurable logic designs
13 files, 19 images
null
null
null
cs.AR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The use and location of memory in integrated circuits play a key role in their performance. Memory requires a large physical area, access times limit overall system performance, and connectivity can result in large fan-out. Modern FPGA systems and ASICs contain an area of memory used to set the operation of the device from a series of commands set by a host. Implementing these settings registers requires care; otherwise, the resulting implementation can produce a number of large fan-out nets that consume valuable resources and complicate the placement of timing-critical pathways. This paper presents an architecture for implementing and programming these settings registers in a distributed manner across an FPGA, and shows how the presented architecture works in both clock-domain-crossing and dynamic partial reconfiguration applications. The design is compared to that of a `global' settings register architecture. We implement the architectures using Intel's Quartus Prime software, targeting an Intel Cyclone V FPGA. It is shown that the distributed memory architecture has a smaller resource cost (as little as 25% of the ALMs and 20% of the registers) compared to the global memory architecture.
[ { "created": "Mon, 23 Mar 2020 18:05:22 GMT", "version": "v1" } ]
2020-03-25
[ [ "Beasley", "Alexander E.", "" ] ]
The use and location of memory in integrated circuits play a key role in their performance. Memory requires a large physical area, access times limit overall system performance, and connectivity can result in large fan-out. Modern FPGA systems and ASICs contain an area of memory used to set the operation of the device from a series of commands set by a host. Implementing these settings registers requires care; otherwise, the resulting implementation can produce a number of large fan-out nets that consume valuable resources and complicate the placement of timing-critical pathways. This paper presents an architecture for implementing and programming these settings registers in a distributed manner across an FPGA, and shows how the presented architecture works in both clock-domain-crossing and dynamic partial reconfiguration applications. The design is compared to that of a `global' settings register architecture. We implement the architectures using Intel's Quartus Prime software, targeting an Intel Cyclone V FPGA. It is shown that the distributed memory architecture has a smaller resource cost (as little as 25% of the ALMs and 20% of the registers) compared to the global memory architecture.
2405.14133
Kai Wu
Xinyu Guo, Kai Wu, Xiaoyu Zhang, Jing Liu
Automated Loss function Search for Class-imbalanced Node Classification
ICML 2024
null
null
null
cs.LG cs.AI cs.SC
http://creativecommons.org/licenses/by/4.0/
Class-imbalanced node classification tasks are prevalent in real-world scenarios. Due to the uneven distribution of nodes across different classes, learning high-quality node representations remains a challenging endeavor. The engineering of loss functions has shown promising potential in addressing this issue. It involves the meticulous design of loss functions, utilizing information about the quantities of nodes in different categories and the network's topology to learn unbiased node representations. However, the design of these loss functions heavily relies on human expert knowledge and exhibits limited adaptability to specific target tasks. In this paper, we introduce a high-performance, flexible, and generalizable automated loss function search framework to tackle this challenge. Across 15 combinations of graph neural networks and datasets, our framework achieves a significant improvement in performance compared to state-of-the-art methods. Additionally, we observe that homophily in graph-structured data significantly contributes to the transferability of the proposed framework.
[ { "created": "Thu, 23 May 2024 03:12:49 GMT", "version": "v1" } ]
2024-05-24
[ [ "Guo", "Xinyu", "" ], [ "Wu", "Kai", "" ], [ "Zhang", "Xiaoyu", "" ], [ "Liu", "Jing", "" ] ]
Class-imbalanced node classification tasks are prevalent in real-world scenarios. Due to the uneven distribution of nodes across different classes, learning high-quality node representations remains a challenging endeavor. The engineering of loss functions has shown promising potential in addressing this issue. It involves the meticulous design of loss functions, utilizing information about the quantities of nodes in different categories and the network's topology to learn unbiased node representations. However, the design of these loss functions heavily relies on human expert knowledge and exhibits limited adaptability to specific target tasks. In this paper, we introduce a high-performance, flexible, and generalizable automated loss function search framework to tackle this challenge. Across 15 combinations of graph neural networks and datasets, our framework achieves a significant improvement in performance compared to state-of-the-art methods. Additionally, we observe that homophily in graph-structured data significantly contributes to the transferability of the proposed framework.
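For intuition, the sketch below shows the kind of parameterized candidate such a search could evaluate: a focal-style loss with effective-number class weights, where the exponent and reweighting coefficient are the searchable knobs. The search procedure itself is not shown, and this specific functional form is an assumption, not the framework's discovered loss.

```python
import torch
import torch.nn.functional as F

def candidate_loss(logits, y, class_counts, gamma=2.0, beta=0.999):
    """Weighted focal-style loss; gamma and beta are searchable parameters."""
    # Effective-number class weights (a common reweighting heuristic).
    eff = (1.0 - beta ** class_counts.float()) / (1.0 - beta)
    w = 1.0 / eff
    w = w / w.sum() * len(class_counts)          # normalize weights
    logp = F.log_softmax(logits, dim=1)
    logp_true = logp.gather(1, y.unsqueeze(1)).squeeze(1)
    focal = (1 - logp_true.exp()) ** gamma       # down-weight easy examples
    return (-w[y] * focal * logp_true).mean()

logits = torch.randn(32, 5)
y = torch.randint(0, 5, (32,))
counts = torch.tensor([500, 200, 50, 20, 5])     # imbalanced class sizes
print(candidate_loss(logits, y, counts).item())
```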
1210.7035
Fuyuki Ishikawa
Inna Pereverzeva, Elena Troubitsyna and Linas Laibinis
Development of Fault Tolerant MAS with Cooperative Error Recovery by Refinement in Event-B
In Proceedings of DS-Event-B 2012: Workshop on the experience of and advances in developing dependable systems in Event-B, in conjunction with ICFEM 2012 - Kyoto, Japan, November 13, 2012
null
null
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Designing fault tolerance mechanisms for multi-agent systems is a notoriously difficult task. In this paper we present an approach to formal development of a fault tolerant multi-agent system by refinement in Event-B. We demonstrate how to formally specify cooperative error recovery and dynamic reconfiguration in Event-B. Moreover, we discuss how to express and verify essential properties of a fault tolerant multi-agent system while refining it. The approach is illustrated by a case study - a multi-robotic system.
[ { "created": "Fri, 26 Oct 2012 01:11:59 GMT", "version": "v1" } ]
2012-10-29
[ [ "Pereverzeva", "Inna", "" ], [ "Troubitsyna", "Elena", "" ], [ "Laibinis", "Linas", "" ] ]
Designing fault tolerance mechanisms for multi-agent systems is a notoriously difficult task. In this paper we present an approach to formal development of a fault tolerant multi-agent system by refinement in Event-B. We demonstrate how to formally specify cooperative error recovery and dynamic reconfiguration in Event-B. Moreover, we discuss how to express and verify essential properties of a fault tolerant multi-agent system while refining it. The approach is illustrated by a case study - a multi-robotic system.
1502.03561
Abba Ari Ado Adamou
Ado Adamou Abba Ari (1 and 4), Abdelhak Gueroui (1), Nabila Labraoui (2) and Blaise Omer Yenke (3) ((1) PRiSM, University of Versailles St-Quentin-en-Yvelines France, (2) STIC University of Tlemcen Algeria, (3) LASE University of Ngaoundere Cameroon, (4) University of Maroua Cameroon)
Concepts and evolution of research in the field of wireless sensor networks
null
International Journal of Computer Networks & Communications. 7.1 (2015) 81-98
10.5121/ijcnc.2015.7106
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The field of Wireless Sensor Networks (WSNs) is experiencing a resurgence of interest and a continuous evolution in the scientific and industrial community. The use of this particular type of ad hoc network is becoming increasingly important in many contexts, regardless of geographical position, across a wide set of possible applications. WSNs offer interesting, low-cost, and easily deployable solutions for remote real-time monitoring, target tracking, and recognition of physical phenomena. The use of these sensors organized into a network continues to reveal research questions tied to the particularities of target applications. Despite the difficulties introduced by sensor resource constraints, research contributions in this field are growing day by day. In this paper, we present a comprehensive review of the most recent WSN literature and outline open research issues in this field.
[ { "created": "Thu, 12 Feb 2015 08:27:03 GMT", "version": "v1" }, { "created": "Wed, 13 May 2015 16:29:55 GMT", "version": "v2" } ]
2015-05-14
[ [ "Ari", "Ado Adamou Abba", "", "1 and 4" ], [ "Gueroui", "Abdelhak", "" ], [ "Labraoui", "Nabila", "" ], [ "Yenke", "Blaise Omer", "" ] ]
The field of Wireless Sensor Networks (WSNs) is experiencing a resurgence of interest and a continuous evolution in the scientific and industrial community. The use of this particular type of ad hoc network is becoming increasingly important in many contexts, regardless of geographical position, across a wide set of possible applications. WSNs offer interesting, low-cost, and easily deployable solutions for remote real-time monitoring, target tracking, and recognition of physical phenomena. The use of these sensors organized into a network continues to reveal research questions tied to the particularities of target applications. Despite the difficulties introduced by sensor resource constraints, research contributions in this field are growing day by day. In this paper, we present a comprehensive review of the most recent WSN literature and outline open research issues in this field.
1903.03680
Apostolos Syropoulos
Apostolos Syropoulos
Fuzzy Bigraphs: An Exercise in Fuzzy Communicating Agents
11 pages, 3 figures
null
10.1007/978-3-030-38565-1_13
null
cs.LO cs.FL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Bigraphs and their algebra are a model of concurrency. Fuzzy bigraphs are a generalization of bigraphs intended as a model of concurrency that incorporates vagueness. More specifically, this model assumes that agents are similar, communication is not perfect, and, in general, everything is or happens to some degree.
[ { "created": "Tue, 5 Mar 2019 14:28:59 GMT", "version": "v1" } ]
2020-02-18
[ [ "Syropoulos", "Apostolos", "" ] ]
Bigraphs and their algebra are a model of concurrency. Fuzzy bigraphs are a generalization of bigraphs intended as a model of concurrency that incorporates vagueness. More specifically, this model assumes that agents are similar, communication is not perfect, and, in general, everything is or happens to some degree.
2201.01008
Kyungmoon Lee
Kyungmoon Lee, Sungyeon Kim, Seunghoon Hong, Suha Kwak
Learning to Generate Novel Classes for Deep Metric Learning
Accepted to BMVC 2021
null
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep metric learning aims to learn an embedding space where the distance between data reflects their class equivalence, even when their classes are unseen during training. However, the limited number of classes available in training precludes generalization of the learned embedding space. Motivated by this, we introduce a new data augmentation approach that synthesizes novel classes and their embedding vectors. Our approach can provide rich semantic information to an embedding model and improve its generalization by augmenting training data with novel classes unavailable in the original data. We implement this idea by learning and exploiting a conditional generative model, which, given a class label and a noise vector, produces a random embedding vector of the class. Our proposed generator allows the loss to use richer class relations by augmenting realistic and diverse classes, resulting in better generalization to unseen samples. Experimental results on public benchmark datasets demonstrate that our method clearly enhances the performance of proxy-based losses.
[ { "created": "Tue, 4 Jan 2022 06:55:19 GMT", "version": "v1" } ]
2022-01-05
[ [ "Lee", "Kyungmoon", "" ], [ "Kim", "Sungyeon", "" ], [ "Hong", "Seunghoon", "" ], [ "Kwak", "Suha", "" ] ]
Deep metric learning aims to learn an embedding space where the distance between data reflects their class equivalence, even when their classes are unseen during training. However, the limited number of classes available in training precludes generalization of the learned embedding space. Motivated by this, we introduce a new data augmentation approach that synthesizes novel classes and their embedding vectors. Our approach can provide rich semantic information to an embedding model and improve its generalization by augmenting training data with novel classes unavailable in the original data. We implement this idea by learning and exploiting a conditional generative model, which, given a class label and a noise vector, produces a random embedding vector of the class. Our proposed generator allows the loss to use richer class relations by augmenting realistic and diverse classes, resulting in better generalization to unseen samples. Experimental results on public benchmark datasets demonstrate that our method clearly enhances the performance of proxy-based losses.
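A hedged PyTorch sketch of such a conditional generator is given below: it maps a class label plus Gaussian noise to a unit-norm embedding vector. The architecture and dimensions are illustrative assumptions; in the paper, generated embeddings of synthetic novel classes feed into proxy-based metric-learning losses.

```python
import torch
import torch.nn as nn

class EmbeddingGenerator(nn.Module):
    """Map (class label, noise) to a synthetic embedding on the unit sphere."""
    def __init__(self, n_classes, noise_dim=32, embed_dim=128):
        super().__init__()
        self.label_embed = nn.Embedding(n_classes, noise_dim)
        self.net = nn.Sequential(
            nn.Linear(2 * noise_dim, 256), nn.ReLU(),
            nn.Linear(256, embed_dim),
        )

    def forward(self, labels):
        noise = torch.randn(labels.shape[0], self.label_embed.embedding_dim)
        h = torch.cat([self.label_embed(labels), noise], dim=1)
        return nn.functional.normalize(self.net(h), dim=1)

gen = EmbeddingGenerator(n_classes=100)
fake = gen(torch.randint(0, 100, (16,)))
print(fake.shape)  # torch.Size([16, 128])
```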
cs/0508129
Esra Erdem
Esra Erdem, Vladimir Lifschitz, and Don Ringe
Temporal Phylogenetic Networks and Logic Programming
null
null
null
null
cs.LO cs.AI cs.PL
null
The concept of a temporal phylogenetic network is a mathematical model of evolution of a family of natural languages. It takes into account the fact that languages can trade their characteristics with each other when linguistic communities are in contact, and also that a contact is only possible when the languages are spoken at the same time. We show how computational methods of answer set programming and constraint logic programming can be used to generate plausible conjectures about contacts between prehistoric linguistic communities, and illustrate our approach by applying it to the evolutionary history of Indo-European languages. To appear in Theory and Practice of Logic Programming (TPLP).
[ { "created": "Tue, 30 Aug 2005 13:04:05 GMT", "version": "v1" } ]
2007-05-23
[ [ "Erdem", "Esra", "" ], [ "Lifschitz", "Vladimir", "" ], [ "Ringe", "Don", "" ] ]
The concept of a temporal phylogenetic network is a mathematical model of evolution of a family of natural languages. It takes into account the fact that languages can trade their characteristics with each other when linguistic communities are in contact, and also that a contact is only possible when the languages are spoken at the same time. We show how computational methods of answer set programming and constraint logic programming can be used to generate plausible conjectures about contacts between prehistoric linguistic communities, and illustrate our approach by applying it to the evolutionary history of Indo-European languages. To appear in Theory and Practice of Logic Programming (TPLP).
1901.06694
Jaya Prakash Champati Dr
Jaya Prakash Champati and Mohammad H. Mamduhi and Karl H. Johansson and James Gross
Performance Characterization Using AoI in a Single-loop Networked Control System
7 pages, IEEE Infocom AoI Workshop, 2019
null
null
null
cs.SY cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The joint design of control and communication scheduling in a Networked Control System (NCS) is known to be a hard problem. Several research works have successfully designed optimal sampling and/or control strategies under simplified communication models, where transmission delays/times are negligible or fixed. However, considering sophisticated communication models, with random transmission times, results in highly coupled and difficult-to-solve optimal design problems due to the parameter inter-dependencies between the estimation/control and communication layers. To tackle this problem, in this work, we investigate the applicability of Age-of-Information (AoI) for solving control/estimation problems in an NCS under i.i.d. transmission times. Our motivation for this investigation stems from the following facts: 1) recent results indicate that AoI can be tackled under relatively sophisticated communication models, and 2) a lower AoI in an NCS may result in a lower estimation/control cost. We study a joint optimization of sampling and scheduling for a single-loop stochastic LTI networked system with the objective of minimizing the time-average squared norm of the estimation error. We first show that under mild assumptions on the information structure the optimal control policy can be designed independently from the sampling and scheduling policies. We then derive a key result that minimizing the estimation error is equivalent to minimizing a function of AoI when the sampling decisions are independent of the state of the LTI system. Noting that minimizing the function of AoI is a stochastic combinatorial optimization problem and is hard to solve, we resort to heuristic algorithms obtained by extending existing algorithms in the AoI literature. We also identify a class of LTI system dynamics for which minimizing the estimation error is equivalent to minimizing the expected AoI.
[ { "created": "Sun, 20 Jan 2019 16:16:27 GMT", "version": "v1" }, { "created": "Mon, 18 Mar 2019 15:39:01 GMT", "version": "v2" }, { "created": "Wed, 3 Jul 2019 16:26:23 GMT", "version": "v3" }, { "created": "Fri, 5 Jul 2019 13:43:26 GMT", "version": "v4" } ]
2019-07-08
[ [ "Champati", "Jaya Prakash", "" ], [ "Mamduhi", "Mohammad H.", "" ], [ "Johansson", "Karl H.", "" ], [ "Gross", "James", "" ] ]
The joint design of control and communication scheduling in a Networked Control System (NCS) is known to be a hard problem. Several research works have successfully designed optimal sampling and/or control strategies under simplified communication models, where transmission delays/times are negligible or fixed. However, considering sophisticated communication models, with random transmission times, results in highly coupled and difficult-to-solve optimal design problems due to the parameter inter-dependencies between the estimation/control and communication layers. To tackle this problem, in this work, we investigate the applicability of Age-of-Information (AoI) for solving control/estimation problems in an NCS under i.i.d. transmission times. Our motivation for this investigation stems from the following facts: 1) recent results indicate that AoI can be tackled under relatively sophisticated communication models, and 2) a lower AoI in an NCS may result in a lower estimation/control cost. We study a joint optimization of sampling and scheduling for a single-loop stochastic LTI networked system with the objective of minimizing the time-average squared norm of the estimation error. We first show that under mild assumptions on the information structure the optimal control policy can be designed independently from the sampling and scheduling policies. We then derive a key result that minimizing the estimation error is equivalent to minimizing a function of AoI when the sampling decisions are independent of the state of the LTI system. Noting that minimizing the function of AoI is a stochastic combinatorial optimization problem and is hard to solve, we resort to heuristic algorithms obtained by extending existing algorithms in the AoI literature. We also identify a class of LTI system dynamics for which minimizing the estimation error is equivalent to minimizing the expected AoI.
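For readers new to AoI, the following Python sketch traces an AoI sample path under i.i.d. transmission times with a zero-wait sampling policy: the age grows linearly during each transmission and resets to that transmission's duration on delivery. The exponential service distribution and the zero-wait policy are assumptions for illustration, not the paper's optimized policy.

```python
import random

random.seed(0)

def aoi_trajectory(n_updates=5, rate=1.0):
    """Piecewise-linear AoI path: grow during service, reset on delivery."""
    t, age, points = 0.0, 0.0, []
    for _ in range(n_updates):
        service = random.expovariate(rate)   # i.i.d. transmission time
        points.append((t, age))              # age just before this service
        t += service
        age += service                       # age at the delivery instant
        points.append((t, age))
        age = service                        # reset: freshest sample is `service` old
    return points

for t, a in aoi_trajectory():
    print(f"t={t:.2f}  AoI={a:.2f}")
```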
2308.03269
Haodi Ma
Haodi Ma, Anthony Colas, Yuejie Wang, Ali Sadeghian, Daisy Zhe Wang
Simple Rule Injection for ComplEx Embeddings
null
null
null
null
cs.CL cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent works in neural knowledge graph inference attempt to combine logic rules with knowledge graph embeddings to benefit from prior knowledge. However, they usually cannot avoid rule grounding, and injecting a diverse set of rules has still not been thoroughly explored. In this work, we propose InjEx, a mechanism to inject multiple types of rules through simple constraints, which capture definite Horn rules. To start, we theoretically prove that InjEx can inject such rules. Next, to demonstrate that InjEx infuses interpretable prior knowledge into the embedding space, we evaluate InjEx on both the knowledge graph completion (KGC) and few-shot knowledge graph completion (FKGC) settings. Our experimental results reveal that InjEx outperforms both baseline KGC models as well as specialized few-shot models while maintaining its scalability and efficiency.
[ { "created": "Mon, 7 Aug 2023 03:19:59 GMT", "version": "v1" } ]
2023-08-08
[ [ "Ma", "Haodi", "" ], [ "Colas", "Anthony", "" ], [ "Wang", "Yuejie", "" ], [ "Sadeghian", "Ali", "" ], [ "Wang", "Daisy Zhe", "" ] ]
Recent works in neural knowledge graph inference attempt to combine logic rules with knowledge graph embeddings to benefit from prior knowledge. However, they usually cannot avoid rule grounding, and injecting a diverse set of rules has still not been thoroughly explored. In this work, we propose InjEx, a mechanism to inject multiple types of rules through simple constraints, which capture definite Horn rules. To start, we theoretically prove that InjEx can inject such rules. Next, to demonstrate that InjEx infuses interpretable prior knowledge into the embedding space, we evaluate InjEx on both the knowledge graph completion (KGC) and few-shot knowledge graph completion (FKGC) settings. Our experimental results reveal that InjEx outperforms both baseline KGC models as well as specialized few-shot models while maintaining its scalability and efficiency.
1001.2421
Mari Kobayashi
Mari Kobayashi, Sheng Yang, Merouane Debbah, Jean-Claude Belfiore
Outage Efficient Strategies for Network MIMO with Partial CSIT
26 pages, 8 figures, submitted to IEEE Trans. on Signal Processing
null
10.1109/ISIT.2009.5206071
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider a multi-cell MIMO downlink (network MIMO) where $B$ base-stations (BS) with $M$ antennas connected to a central station (CS) serve $K$ single-antenna user terminals (UT). Although many works have shown the potential benefits of network MIMO, the conclusion critically depends on the underlying assumptions such as channel state information at transmitters (CSIT) and backhaul links. In this paper, by focusing on the impact of partial CSIT, we propose an outage-efficient strategy. Namely, with side information of all UT's messages and local CSIT, each BS applies zero-forcing (ZF) beamforming in a distributed manner. For a small number of UTs ($K\leq M$), the ZF beamforming creates $K$ parallel MISO channels. Based on the statistical knowledge of these parallel channels, the CS performs a robust power allocation that simultaneously minimizes the outage probability of all UTs and achieves a diversity gain of $B(M-K+1)$ per UT. With a large number of UTs ($K \geq M$), we propose a so-called distributed diversity scheduling (DDS) scheme to select a subset of $K_s$ UTs with limited backhaul communication. It is proved that DDS achieves a diversity gain of $B\frac{K}{K_s}(M-K_s+1)$, which scales optimally with the number of cooperative BSs $B$ as well as UTs. Numerical results confirm that even under realistic assumptions such as partial CSIT and limited backhaul communications, network MIMO can offer high data rates with a sufficient reliability to individual UTs.
[ { "created": "Thu, 14 Jan 2010 11:10:59 GMT", "version": "v1" } ]
2016-11-17
[ [ "Kobayashi", "Mari", "" ], [ "Yang", "Sheng", "" ], [ "Debbah", "Merouane", "" ], [ "Belfiore", "Jean-Claude", "" ] ]
We consider a multi-cell MIMO downlink (network MIMO) where $B$ base-stations (BS) with $M$ antennas connected to a central station (CS) serve $K$ single-antenna user terminals (UT). Although many works have shown the potential benefits of network MIMO, the conclusion critically depends on the underlying assumptions such as channel state information at transmitters (CSIT) and backhaul links. In this paper, by focusing on the impact of partial CSIT, we propose an outage-efficient strategy. Namely, with side information of all UT's messages and local CSIT, each BS applies zero-forcing (ZF) beamforming in a distributed manner. For a small number of UTs ($K\leq M$), the ZF beamforming creates $K$ parallel MISO channels. Based on the statistical knowledge of these parallel channels, the CS performs a robust power allocation that simultaneously minimizes the outage probability of all UTs and achieves a diversity gain of $B(M-K+1)$ per UT. With a large number of UTs ($K \geq M$), we propose a so-called distributed diversity scheduling (DDS) scheme to select a subset of $K_s$ UTs with limited backhaul communication. It is proved that DDS achieves a diversity gain of $B\frac{K}{K_s}(M-K_s+1)$, which scales optimally with the number of cooperative BSs $B$ as well as UTs. Numerical results confirm that even under realistic assumptions such as partial CSIT and limited backhaul communications, network MIMO can offer high data rates with a sufficient reliability to individual UTs.
2209.12499
Hyun Jae Lee
HyunJae Lee, Gihyeon Lee, Junhwan Kim, Sungjun Cho, Dohyun Kim, Donggeun Yoo
Improving Multi-fidelity Optimization with a Recurring Learning Rate for Hyperparameter Tuning
null
null
null
null
cs.CV cs.LG math.OC
http://creativecommons.org/licenses/by/4.0/
Despite the evolution of Convolutional Neural Networks (CNNs), their performance is surprisingly dependent on the choice of hyperparameters. However, it remains challenging to efficiently explore a large hyperparameter search space due to the long training times of modern CNNs. Multi-fidelity optimization enables the exploration of more hyperparameter configurations within a given budget by early termination of unpromising configurations. However, it often results in selecting a sub-optimal configuration, as training with a high-performing configuration typically converges slowly in the early phase. In this paper, we propose Multi-fidelity Optimization with a Recurring Learning rate (MORL), which incorporates CNNs' optimization process into multi-fidelity optimization. MORL alleviates the slow-starter problem and achieves a more precise low-fidelity approximation. Our comprehensive experiments on general image classification, transfer learning, and semi-supervised learning demonstrate the effectiveness of MORL over other multi-fidelity optimization methods such as the Successive Halving Algorithm (SHA) and Hyperband. Furthermore, it achieves significant performance improvements over a hand-tuned hyperparameter configuration within a practical budget.
[ { "created": "Mon, 26 Sep 2022 08:16:31 GMT", "version": "v1" } ]
2022-09-27
[ [ "Lee", "HyunJae", "" ], [ "Lee", "Gihyeon", "" ], [ "Kim", "Junhwan", "" ], [ "Cho", "Sungjun", "" ], [ "Kim", "Dohyun", "" ], [ "Yoo", "Donggeun", "" ] ]
Despite the evolution of Convolutional Neural Networks (CNNs), their performance is surprisingly dependent on the choice of hyperparameters. However, it remains challenging to efficiently explore a large hyperparameter search space due to the long training times of modern CNNs. Multi-fidelity optimization enables the exploration of more hyperparameter configurations within a given budget by early termination of unpromising configurations. However, it often results in selecting a sub-optimal configuration, as training with a high-performing configuration typically converges slowly in the early phase. In this paper, we propose Multi-fidelity Optimization with a Recurring Learning rate (MORL), which incorporates CNNs' optimization process into multi-fidelity optimization. MORL alleviates the slow-starter problem and achieves a more precise low-fidelity approximation. Our comprehensive experiments on general image classification, transfer learning, and semi-supervised learning demonstrate the effectiveness of MORL over other multi-fidelity optimization methods such as the Successive Halving Algorithm (SHA) and Hyperband. Furthermore, it achieves significant performance improvements over a hand-tuned hyperparameter configuration within a practical budget.
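The name suggests a learning rate that restarts each time a configuration is resumed at a higher fidelity. As a hedged illustration, the sketch below implements a cosine-annealed cyclic schedule of that flavor; the cycle length and learning-rate bounds are assumptions, not MORL's prescribed values.

```python
import math

def recurring_lr(step, cycle_len=100, lr_max=0.1, lr_min=1e-4):
    """Cosine-annealed learning rate that restarts at the top of each cycle,
    so every fidelity window ends near convergence instead of mid-descent."""
    pos = (step % cycle_len) / cycle_len          # position within the cycle
    return lr_min + 0.5 * (lr_max - lr_min) * (1 + math.cos(math.pi * pos))

for s in (0, 50, 99, 100, 150):
    print(s, round(recurring_lr(s), 5))
```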
2001.05197
Xin Jin
Xin Jin, Cuiling Lan, Wenjun Zeng, Zhibo Chen
Uncertainty-Aware Multi-Shot Knowledge Distillation for Image-Based Object Re-Identification
Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI-20)
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Object re-identification (re-id) aims to identify a specific object across times or camera views, with the person re-id and vehicle re-id as the most widely studied applications. Re-id is challenging because of the variations in viewpoints, (human) poses, and occlusions. Multi-shots of the same object can cover diverse viewpoints/poses and thus provide more comprehensive information. In this paper, we propose exploiting the multi-shots of the same identity to guide the feature learning of each individual image. Specifically, we design an Uncertainty-aware Multi-shot Teacher-Student (UMTS) Network. It consists of a teacher network (T-net) that learns the comprehensive features from multiple images of the same object, and a student network (S-net) that takes a single image as input. In particular, we take into account the data dependent heteroscedastic uncertainty for effectively transferring the knowledge from the T-net to S-net. To the best of our knowledge, we are the first to make use of multi-shots of an object in a teacher-student learning manner for effectively boosting the single image based re-id. We validate the effectiveness of our approach on the popular vehicle re-id and person re-id datasets. In inference, the S-net alone significantly outperforms the baselines and achieves the state-of-the-art performance.
[ { "created": "Wed, 15 Jan 2020 09:39:05 GMT", "version": "v1" }, { "created": "Tue, 21 Jan 2020 17:21:07 GMT", "version": "v2" } ]
2020-01-22
[ [ "Jin", "Xin", "" ], [ "Lan", "Cuiling", "" ], [ "Zeng", "Wenjun", "" ], [ "Chen", "Zhibo", "" ] ]
Object re-identification (re-id) aims to identify a specific object across times or camera views, with the person re-id and vehicle re-id as the most widely studied applications. Re-id is challenging because of the variations in viewpoints, (human) poses, and occlusions. Multi-shots of the same object can cover diverse viewpoints/poses and thus provide more comprehensive information. In this paper, we propose exploiting the multi-shots of the same identity to guide the feature learning of each individual image. Specifically, we design an Uncertainty-aware Multi-shot Teacher-Student (UMTS) Network. It consists of a teacher network (T-net) that learns the comprehensive features from multiple images of the same object, and a student network (S-net) that takes a single image as input. In particular, we take into account the data dependent heteroscedastic uncertainty for effectively transferring the knowledge from the T-net to S-net. To the best of our knowledge, we are the first to make use of multi-shots of an object in a teacher-student learning manner for effectively boosting the single image based re-id. We validate the effectiveness of our approach on the popular vehicle re-id and person re-id datasets. In inference, the S-net alone significantly outperforms the baselines and achieves the state-of-the-art performance.
1101.1608
Jasni Mohamad Zain
Jasni Mohamad Zain, Mengkar Tey, Yingsoon Goh
Does Aesthetics of Web Page Interface Matters to Mandarin Learning?
9 pages, 1 figure, 9 tables
International Journal of Computer Science and Network Security, Vol. 7 No. 8 pp. 43-51 August 2007, ISSN 1738-7906
null
null
cs.HC cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Aesthetics of a web page refers to how attractive the page is and how well it catches the user's attention to read through the information. The visual appearance is important in capturing users' attention. Moreover, it has been found that screens perceived as aesthetically pleasing have better usability. Usability can be a strong basis for judging applicability to learning, which in this study pertains to Mandarin learning. It was also found that aesthetically pleasing web page layouts would motivate students in Mandarin learning. The Mandarin learning web pages were manipulated according to the desired aesthetic measurements. A GUI aesthetic measuring method was used for this purpose. The Aesthetics-Measurement Application (AMA), equipped with six aesthetic measures, was developed and used. On top of that, questionnaires were distributed to gather information on the students' perceptions of the aesthetic and learning aspects. Respondents for this study were students taking the Mandarin course level I at UiTM Terengganu. A significant correlation was found between the aesthetic aspect and its relevance to Mandarin learning. In summary, aesthetics should not be ignored or overlooked in designing effective learning interfaces for educational purposes.
[ { "created": "Sat, 8 Jan 2011 16:58:47 GMT", "version": "v1" } ]
2011-01-11
[ [ "Zain", "Jasni Mohamad", "" ], [ "Tey", "Mengkar", "" ], [ "Goh", "Yingsoon", "" ] ]
Aesthetics of a web page refers to how attractive the page is and how well it catches the user's attention to read through the information. The visual appearance is important in capturing users' attention. Moreover, it has been found that screens perceived as aesthetically pleasing have better usability. Usability can be a strong basis for judging applicability to learning, which in this study pertains to Mandarin learning. It was also found that aesthetically pleasing web page layouts would motivate students in Mandarin learning. The Mandarin learning web pages were manipulated according to the desired aesthetic measurements. A GUI aesthetic measuring method was used for this purpose. The Aesthetics-Measurement Application (AMA), equipped with six aesthetic measures, was developed and used. On top of that, questionnaires were distributed to gather information on the students' perceptions of the aesthetic and learning aspects. Respondents for this study were students taking the Mandarin course level I at UiTM Terengganu. A significant correlation was found between the aesthetic aspect and its relevance to Mandarin learning. In summary, aesthetics should not be ignored or overlooked in designing effective learning interfaces for educational purposes.
1008.3725
Grenville Croll
Grenville J. Croll, David F. Baker, Ola Lawal
Evaluating Financial Model Performance: An Empirical Analysis of Some North Sea Investments
11 Pages, 1 Table, 5 Figures
Proc. European Spreadsheet Risks Int. Grp. (EuSpRIG) 2010 87-98 ISBN 978-1-905404-50-6
null
null
cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Fifty North Sea oil & gas investment transactions were analysed using traditional spreadsheet based financial modelling methods. The purpose of the analysis was to determine if there was a statistically significant relationship between the price paid for an oil & gas asset and the actual or expected financial return over the asset's economically useful life. Several interesting and statistically significant relationships were found which reveal useful information about financial modelling performance, the premia paid to acquire North Sea assets, the contribution oil and gas price uncertainty has on estimates of future financial returns and the median financial return of these North Sea Investments.
[ { "created": "Sun, 22 Aug 2010 21:26:01 GMT", "version": "v1" } ]
2010-08-24
[ [ "Croll", "Grenville J.", "" ], [ "Baker", "David F.", "" ], [ "Lawal", "Ola", "" ] ]
Fifty North Sea oil & gas investment transactions were analysed using traditional spreadsheet based financial modelling methods. The purpose of the analysis was to determine if there was a statistically significant relationship between the price paid for an oil & gas asset and the actual or expected financial return over the asset's economically useful life. Several interesting and statistically significant relationships were found which reveal useful information about financial modelling performance, the premia paid to acquire North Sea assets, the contribution oil and gas price uncertainty has on estimates of future financial returns and the median financial return of these North Sea Investments.
1812.07640
Alexander Veretennikov Borisovich
Alexander B. Veretennikov
Proximity Full-Text Search by Means of Additional Indexes with Multi-component Keys: In Pursuit of Optimal Performance
Revised paper of "Veretennikov A.B. Proximity full-text search with a response time guarantee by means of additional indexes with multi-component keys", Selected Papers of the XX International Conference on Data Analytics and Management in Data Intensive Domains (DAMDID/RCDL 2018), Moscow, Russia, October 9-12, 2018, http://ceur-ws.org/Vol-2277, http://ceur-ws.org/Vol-2277/paper23.pdf
Data Analytics and Management in Data Intensive Domains. DAMDID/RCDL 2018. Communications in Computer and Information Science, vol 1003. Springer, Cham
10.1007/978-3-030-23584-0_7
null
cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Full-text search engines are important tools for information retrieval. In a proximity full-text search, a document is relevant if it contains query terms near each other, especially if the query terms are frequently occurring words. For each word in a text, we use additional indexes to store information about nearby words that are at distances from the given word of less than or equal to the MaxDistance parameter. We showed that additional indexes with three-component keys can be used to improve the average query execution time by up to 94.7 times if the queries consist of high-frequency occurring words. In this paper, we present a new search algorithm with even more performance gains. We consider several strategies for selecting multi-component key indexes for a specific query and compare these strategies with the optimal strategy. We also present the results of search experiments, which show that three-component key indexes enable much faster searches in comparison with two-component key indexes. This is a pre-print of a contribution "Veretennikov A.B. (2019) Proximity Full-Text Search by Means of Additional Indexes with Multi-component Keys: In Pursuit of Optimal Performance." published in "Manolopoulos Y., Stupnikov S. (eds) Data Analytics and Management in Data Intensive Domains. DAMDID/RCDL 2018. Communications in Computer and Information Science, vol 1003" published by Springer, Cham. This book constitutes the refereed proceedings of the 20th International Conference on Data Analytics and Management in Data Intensive Domains, DAMDID/RCDL 2018, held in Moscow, Russia, in October 2018. The 9 revised full papers presented together with three invited papers were carefully reviewed and selected from 54 submissions. The final authenticated version is available online at https://doi.org/10.1007/978-3-030-23584-0_7.
[ { "created": "Tue, 18 Dec 2018 20:57:15 GMT", "version": "v1" }, { "created": "Tue, 9 Jul 2019 18:33:56 GMT", "version": "v2" } ]
2019-07-11
[ [ "Veretennikov", "Alexander B.", "" ] ]
Full-text search engines are important tools for information retrieval. In a proximity full-text search, a document is relevant if it contains query terms near each other, especially if the query terms are frequently occurring words. For each word in a text, we use additional indexes to store information about nearby words that are at distances from the given word of less than or equal to the MaxDistance parameter. We showed that additional indexes with three-component keys can be used to improve the average query execution time by up to 94.7 times if the queries consist of high-frequency occurring words. In this paper, we present a new search algorithm with even more performance gains. We consider several strategies for selecting multi-component key indexes for a specific query and compare these strategies with the optimal strategy. We also present the results of search experiments, which show that three-component key indexes enable much faster searches in comparison with two-component key indexes. This is a pre-print of a contribution "Veretennikov A.B. (2019) Proximity Full-Text Search by Means of Additional Indexes with Multi-component Keys: In Pursuit of Optimal Performance." published in "Manolopoulos Y., Stupnikov S. (eds) Data Analytics and Management in Data Intensive Domains. DAMDID/RCDL 2018. Communications in Computer and Information Science, vol 1003" published by Springer, Cham. This book constitutes the refereed proceedings of the 20th International Conference on Data Analytics and Management in Data Intensive Domains, DAMDID/RCDL 2018, held in Moscow, Russia, in October 2018. The 9 revised full papers presented together with three invited papers were carefully reviewed and selected from 54 submissions. The final authenticated version is available online at https://doi.org/10.1007/978-3-030-23584-0_7.
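To make the idea of multi-component keys concrete, here is a hedged, in-memory Python sketch that indexes every word pair and triple co-occurring within MaxDistance, so a proximity query over frequent terms reads a single posting list instead of intersecting per-word lists. The data structures are deliberately simplified relative to the paper's on-disk indexes.

```python
from collections import defaultdict

MAX_DISTANCE = 3  # assumed value of the MaxDistance parameter

def build_indexes(tokens):
    """Additional indexes with two- and three-component keys."""
    pair_index = defaultdict(list)    # (w1, w2) -> [(pos1, pos2), ...]
    triple_index = defaultdict(list)  # (w1, w2, w3) -> [(p1, p2, p3), ...]
    for i, w1 in enumerate(tokens):
        for j, w2 in enumerate(tokens[i + 1 : i + 1 + MAX_DISTANCE],
                               start=i + 1):
            pair_index[(w1, w2)].append((i, j))
            for k, w3 in enumerate(tokens[j + 1 : j + 1 + MAX_DISTANCE],
                                   start=j + 1):
                triple_index[(w1, w2, w3)].append((i, j, k))
    return pair_index, triple_index

pairs, triples = build_indexes("to be or not to be that is the question".split())
print(pairs[("to", "be")])         # positions where "be" closely follows "to"
print(triples[("to", "be", "or")])
```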
2311.12223
Shisheng Hu
Shisheng Hu, Jie Gao, Xinyu Huang, Mushu Li, Kaige Qu, Conghao Zhou, and Xuemin (Sherman) Shen
Digital Twin-Based User-Centric Edge Continual Learning in Integrated Sensing and Communication
submitted to IEEE ICC 2024
null
null
null
cs.NI cs.AI eess.SP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we propose a digital twin (DT)-based user-centric approach for processing sensing data in an integrated sensing and communication (ISAC) system with high accuracy and efficient resource utilization. The considered scenario involves an ISAC device with a lightweight deep neural network (DNN) and a mobile edge computing (MEC) server with a large DNN. After collecting sensing data, the ISAC device either processes the data locally or uploads them to the server for higher-accuracy data processing. To cope with data drifts, the server updates the lightweight DNN when necessary, referred to as continual learning. Our objective is to minimize the long-term average computation cost of the MEC server by optimizing two decisions, i.e., sensing data offloading and sensing data selection for the DNN update. A DT of the ISAC device is constructed to predict the impact of potential decisions on the long-term computation cost of the server, based on which the decisions are made with closed-form formulas. Experiments on executing DNN-based human motion recognition tasks are conducted to demonstrate the outstanding performance of the proposed DT-based approach in computation cost minimization.
[ { "created": "Mon, 20 Nov 2023 22:27:14 GMT", "version": "v1" } ]
2023-11-22
[ [ "Hu", "Shisheng", "", "Sherman" ], [ "Gao", "Jie", "", "Sherman" ], [ "Huang", "Xinyu", "", "Sherman" ], [ "Li", "Mushu", "", "Sherman" ], [ "Qu", "Kaige", "", "Sherman" ], [ "Zhou", "Conghao", "", "Sherman" ], [ "Xuemin", "", "", "Sherman" ], [ "Shen", "", "" ] ]
In this paper, we propose a digital twin (DT)-based user-centric approach for processing sensing data in an integrated sensing and communication (ISAC) system with high accuracy and efficient resource utilization. The considered scenario involves an ISAC device with a lightweight deep neural network (DNN) and a mobile edge computing (MEC) server with a large DNN. After collecting sensing data, the ISAC device either processes the data locally or uploads them to the server for higher-accuracy data processing. To cope with data drifts, the server updates the lightweight DNN when necessary, referred to as continual learning. Our objective is to minimize the long-term average computation cost of the MEC server by optimizing two decisions, i.e., sensing data offloading and sensing data selection for the DNN update. A DT of the ISAC device is constructed to predict the impact of potential decisions on the long-term computation cost of the server, based on which the decisions are made with closed-form formulas. Experiments on executing DNN-based human motion recognition tasks are conducted to demonstrate the outstanding performance of the proposed DT-based approach in computation cost minimization.
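As a loose illustration of the decision the digital twin informs, the sketch below compares predicted long-term costs of processing locally versus offloading under a given drift level; the linear cost proxies, accuracy values, and thresholds are all assumptions standing in for the paper's closed-form formulas.

```python
def offload_decision(drift_level, local_acc=0.85, server_acc=0.97,
                     offload_cost=1.0, update_cost=20.0, horizon=100):
    """Pick the cheaper option under assumed linear cost proxies."""
    # Data drift degrades the lightweight on-device DNN's accuracy.
    eff_local_acc = max(0.0, local_acc - 0.5 * drift_level)
    # Cost proxies: misclassification penalty plus compute/communication cost.
    cost_local = horizon * (1 - eff_local_acc) * 10.0
    if drift_level > 0.1:                     # continual-learning DNN update
        cost_local += update_cost
    cost_offload = horizon * ((1 - server_acc) * 10.0 + offload_cost)
    choice = "offload" if cost_offload < cost_local else "local"
    return choice, cost_local, cost_offload

for drift in (0.0, 0.05, 0.2):
    print(drift, offload_decision(drift))
```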