Dataset schema (column: type, observed size range):

id: string, 9 to 10 chars
submitter: string, 1 to 64 chars
authors: string, 4 to 20.7k chars
title: string, 4 to 246 chars
comments: string, 1 to 523 chars
journal-ref: string, 4 to 404 chars
doi: string, 11 to 153 chars
report-no: string, 2 to 254 chars
categories: string, 5 to 98 chars
license: string, 9 distinct classes
orig_abstract: string, 14 to 3.35k chars
versions: list, 1 to 60 entries
update_date: string, 10 chars (fixed)
authors_parsed: list, 1 to 1.35k entries
abstract: string, 11 to 3.34k chars
2311.11230
Herve Kabamba
Herve Mbikayi Kabamba, Matthew Khouzam, Michel Dagenais
Advanced Strategies for Precise and Transparent Debugging of Performance Issues in In-Memory Data Store-Based Microservices
null
null
null
null
cs.DC
http://creativecommons.org/licenses/by-nc-nd/4.0/
The rise of microservice architectures has revolutionized application design, fostering adaptability and resilience. These architectures facilitate scaling and encourage collaborative efforts among specialized teams, streamlining deployment and maintenance. Critical to this ecosystem is the demand for low latency, prompting the adoption of cloud-based structures and in-memory data storage. This shift optimizes data access times, supplanting direct disk access and driving the adoption of non-relational databases. Despite their benefits, microservice architectures present challenges in system performance and debugging, particularly as complexity grows. Performance issues can readily cascade through components, jeopardizing user satisfaction and service quality. Existing monitoring approaches often require code instrumentation, demanding extensive developer involvement. Recent strategies like proxies and service meshes aim to enhance tracing transparency, but introduce added configuration complexities. We introduce a framework that transparently integrates heterogeneous microservices, enabling the creation of tailored tools for fine-grained performance debugging, especially for in-memory data store-based microservices. This approach leverages transparent user-level tracing and employs a two-level abstraction analysis model to pinpoint key performance influencers. It harnesses system tracing and advanced analysis to provide visualization tools for identifying intricate performance issues. In a performance-centric landscape, this approach offers a promising solution for ensuring peak efficiency and reliability in in-memory data store-based cloud applications.
[ { "created": "Sun, 19 Nov 2023 05:10:22 GMT", "version": "v1" } ]
2023-11-21
[ [ "Kabamba", "Herve Mbikayi", "" ], [ "Khouzam", "Matthew", "" ], [ "Dagenais", "Michel", "" ] ]
The rise of microservice architectures has revolutionized application design, fostering adaptability and resilience. These architectures facilitate scaling and encourage collaborative efforts among specialized teams, streamlining deployment and maintenance. Critical to this ecosystem is the demand for low latency, prompting the adoption of cloud-based structures and in-memory data storage. This shift optimizes data access times, supplanting direct disk access and driving the adoption of non-relational databases. Despite their benefits, microservice architectures present challenges in system performance and debugging, particularly as complexity grows. Performance issues can readily cascade through components, jeopardizing user satisfaction and service quality. Existing monitoring approaches often require code instrumentation, demanding extensive developer involvement. Recent strategies like proxies and service meshes aim to enhance tracing transparency, but introduce added configuration complexities. We introduce a framework that transparently integrates heterogeneous microservices, enabling the creation of tailored tools for fine-grained performance debugging, especially for in-memory data store-based microservices. This approach leverages transparent user-level tracing and employs a two-level abstraction analysis model to pinpoint key performance influencers. It harnesses system tracing and advanced analysis to provide visualization tools for identifying intricate performance issues. In a performance-centric landscape, this approach offers a promising solution for ensuring peak efficiency and reliability in in-memory data store-based cloud applications.
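As a toy illustration of the two-level analysis described above (raw trace events at the bottom, per-request latency at the top), here is a minimal span-aggregation sketch; the event schema is a made-up placeholder, not the paper's trace format:

```python
# Minimal sketch: aggregate raw trace events (low level) into per-request
# latencies (high level). The event schema here is hypothetical.
from collections import defaultdict

events = [  # (timestamp_us, request_id, service, phase)
    (10, "r1", "gateway", "start"), (15, "r1", "redis", "start"),
    (40, "r1", "redis", "end"),     (45, "r1", "gateway", "end"),
]

open_spans, spans = {}, defaultdict(list)
for ts, req, svc, phase in sorted(events):
    if phase == "start":
        open_spans[(req, svc)] = ts
    else:  # close the span and record its duration
        spans[req].append((svc, ts - open_spans.pop((req, svc))))

for req, parts in spans.items():
    total = max(d for _, d in parts)  # the outermost span dominates
    print(req, "total:", total, "us, breakdown:", parts)
```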
2402.02933
Vinitra Swamy
Vinitra Swamy, Syrielle Montariol, Julian Blackwell, Jibril Frej, Martin Jaggi, Tanja K\"aser
InterpretCC: Intrinsic User-Centric Interpretability through Global Mixture of Experts
null
null
null
null
cs.LG cs.CY cs.HC
http://creativecommons.org/licenses/by/4.0/
Interpretability for neural networks is a trade-off between three key requirements: 1) faithfulness of the explanation (i.e., how perfectly it explains the prediction), 2) understandability of the explanation by humans, and 3) model performance. Most existing methods compromise one or more of these requirements; e.g., post-hoc approaches provide limited faithfulness, automatically identified feature masks compromise understandability, and intrinsically interpretable methods such as decision trees limit model performance. These shortcomings are unacceptable for sensitive applications such as education and healthcare, which require trustworthy explanations, actionable interpretations, and accurate predictions. In this work, we present InterpretCC (interpretable conditional computation), a family of interpretable-by-design neural networks that guarantee human-centric interpretability, while maintaining comparable performance to state-of-the-art models by adaptively and sparsely activating features before prediction. We extend this idea into an interpretable, global mixture-of-experts (MoE) model that allows humans to specify topics of interest, discretely separates the feature space for each data point into topical subnetworks, and adaptively and sparsely activates these topical subnetworks for prediction. We apply variations of the InterpretCC architecture for text, time series and tabular data across several real-world benchmarks, demonstrating comparable performance with non-interpretable baselines, outperforming interpretable-by-design baselines, and showing higher actionability and usefulness according to a user study.
[ { "created": "Mon, 5 Feb 2024 11:55:50 GMT", "version": "v1" }, { "created": "Tue, 28 May 2024 14:58:26 GMT", "version": "v2" }, { "created": "Wed, 29 May 2024 12:03:40 GMT", "version": "v3" } ]
2024-05-30
[ [ "Swamy", "Vinitra", "" ], [ "Montariol", "Syrielle", "" ], [ "Blackwell", "Julian", "" ], [ "Frej", "Jibril", "" ], [ "Jaggi", "Martin", "" ], [ "Käser", "Tanja", "" ] ]
Interpretability for neural networks is a trade-off between three key requirements: 1) faithfulness of the explanation (i.e., how perfectly it explains the prediction), 2) understandability of the explanation by humans, and 3) model performance. Most existing methods compromise one or more of these requirements; e.g., post-hoc approaches provide limited faithfulness, automatically identified feature masks compromise understandability, and intrinsically interpretable methods such as decision trees limit model performance. These shortcomings are unacceptable for sensitive applications such as education and healthcare, which require trustworthy explanations, actionable interpretations, and accurate predictions. In this work, we present InterpretCC (interpretable conditional computation), a family of interpretable-by-design neural networks that guarantee human-centric interpretability, while maintaining comparable performance to state-of-the-art models by adaptively and sparsely activating features before prediction. We extend this idea into an interpretable, global mixture-of-experts (MoE) model that allows humans to specify topics of interest, discretely separates the feature space for each data point into topical subnetworks, and adaptively and sparsely activates these topical subnetworks for prediction. We apply variations of the InterpretCC architecture for text, time series and tabular data across several real-world benchmarks, demonstrating comparable performance with non-interpretable baselines, outperforming interpretable-by-design baselines, and showing higher actionability and usefulness according to a user study.
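A minimal sketch of the sparse topical gating described above, assuming a linear gate and single-weight-vector "subnetworks" for brevity (an illustration of the routing idea, not the paper's architecture):

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_experts, k = 16, 4, 2          # feature dim, topical subnetworks, sparsity

W_gate = rng.normal(size=(d, n_experts))                       # gating network
experts = [rng.normal(size=(d, 1)) for _ in range(n_experts)]  # tiny "subnetworks"

def interpret_cc(x):
    scores = x @ W_gate                              # per-topic relevance
    top = np.argsort(scores)[-k:]                    # keep only k topical experts
    mask = np.zeros(n_experts); mask[top] = 1.0
    gates = mask * np.exp(scores) / np.exp(scores[top]).sum()  # sparse softmax
    # prediction is a gate-weighted sum over the activated subnetworks only
    return sum(gates[e] * float(x @ experts[e]) for e in top), top

x = rng.normal(size=d)
y, active = interpret_cc(x)
print("prediction:", y, "active topics:", active)
```

Because only the `active` topics contribute, the routing decision itself serves as the human-readable explanation.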
2207.01076
Mingzhe Guo
Mingzhe Guo, Zhipeng Zhang, Heng Fan, Liping Jing
Divert More Attention to Vision-Language Tracking
18 pages, 7 figures
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Relying on Transformers for complex visual feature learning, object tracking has witnessed a new standard for state-of-the-art (SOTA) performance. However, this advancement is accompanied by larger training data requirements and longer training periods, making tracking increasingly expensive. In this paper, we demonstrate that reliance on Transformers is not necessary: pure ConvNets remain competitive, and are even better as well as more economical and practical, in achieving SOTA tracking. Our solution is to unleash the power of multimodal vision-language (VL) tracking, simply using ConvNets. The essence lies in learning novel unified-adaptive VL representations with our modality mixer (ModaMixer) and asymmetrical ConvNet search. We show that our unified-adaptive VL representation, learned purely with ConvNets, is a simple yet strong alternative to Transformer visual features, remarkably improving a CNN-based Siamese tracker by 14.5% in SUC on the challenging LaSOT benchmark (from 50.7% to 65.2%) and even outperforming several Transformer-based SOTA trackers. Beyond empirical results, we theoretically analyze our approach to evidence its effectiveness. By revealing the potential of VL representations, we expect the community to divert more attention to VL tracking and hope to open more possibilities for future tracking beyond Transformers. Code and models will be released at https://github.com/JudasDie/SOTS.
[ { "created": "Sun, 3 Jul 2022 16:38:24 GMT", "version": "v1" } ]
2022-07-05
[ [ "Guo", "Mingzhe", "" ], [ "Zhang", "Zhipeng", "" ], [ "Fan", "Heng", "" ], [ "Jing", "Liping", "" ] ]
Relying on Transformers for complex visual feature learning, object tracking has witnessed a new standard for state-of-the-art (SOTA) performance. However, this advancement is accompanied by larger training data requirements and longer training periods, making tracking increasingly expensive. In this paper, we demonstrate that reliance on Transformers is not necessary: pure ConvNets remain competitive, and are even better as well as more economical and practical, in achieving SOTA tracking. Our solution is to unleash the power of multimodal vision-language (VL) tracking, simply using ConvNets. The essence lies in learning novel unified-adaptive VL representations with our modality mixer (ModaMixer) and asymmetrical ConvNet search. We show that our unified-adaptive VL representation, learned purely with ConvNets, is a simple yet strong alternative to Transformer visual features, remarkably improving a CNN-based Siamese tracker by 14.5% in SUC on the challenging LaSOT benchmark (from 50.7% to 65.2%) and even outperforming several Transformer-based SOTA trackers. Beyond empirical results, we theoretically analyze our approach to evidence its effectiveness. By revealing the potential of VL representations, we expect the community to divert more attention to VL tracking and hope to open more possibilities for future tracking beyond Transformers. Code and models will be released at https://github.com/JudasDie/SOTS.
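One plausible reading of the modality mixer is channel-wise gating of visual features by a pooled language embedding. The sketch below assumes sigmoid gates and a residual connection; the actual ModaMixer design may differ:

```python
import numpy as np

rng = np.random.default_rng(1)
C, H, W = 8, 4, 4                       # channels, spatial dims (illustrative)
visual = rng.normal(size=(C, H, W))     # ConvNet feature map
language = rng.normal(size=(C,))        # pooled text embedding, projected to C dims

def moda_mix(visual, language):
    """Channel-wise mixing: treat the language vector as per-channel gates."""
    gates = 1.0 / (1.0 + np.exp(-language))      # sigmoid -> (0, 1) weights
    mixed = visual * gates[:, None, None]        # reweight each channel
    return mixed + visual                        # residual keeps the pure-vision path

print(moda_mix(visual, language).shape)          # (8, 4, 4)
```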
2305.07988
Haochen Tan
Haochen Tan, Han Wu, Wei Shao, Xinyun Zhang, Mingjie Zhan, Zhaohui Hou, Ding Liang, Linqi Song
Reconstruct Before Summarize: An Efficient Two-Step Framework for Condensing and Summarizing Meeting Transcripts
Accepted to EMNLP 2023 main conference
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Meetings typically involve multiple participants and lengthy conversations, resulting in redundant and trivial content. To overcome these challenges, we propose a two-step framework, Reconstruct before Summarize (RbS), for effective and efficient meeting summarization. RbS first leverages a self-supervised paradigm to annotate essential content by reconstructing the meeting transcripts. Second, we propose a relative positional bucketing (RPB) algorithm to equip (conventional) summarization models to generate the summary. Despite the additional reconstruction process, our proposed RPB significantly compresses the input, leading to faster processing and reduced memory consumption compared to traditional summarization methods. We validate the effectiveness and efficiency of our method through extensive evaluations and analysis. On two meeting summarization datasets, AMI and ICSI, our approach outperforms previous state-of-the-art approaches without relying on large-scale pre-training or expert-grade annotation tools.
[ { "created": "Sat, 13 May 2023 19:54:46 GMT", "version": "v1" }, { "created": "Sun, 22 Oct 2023 17:42:44 GMT", "version": "v2" } ]
2023-10-24
[ [ "Tan", "Haochen", "" ], [ "Wu", "Han", "" ], [ "Shao", "Wei", "" ], [ "Zhang", "Xinyun", "" ], [ "Zhan", "Mingjie", "" ], [ "Hou", "Zhaohui", "" ], [ "Liang", "Ding", "" ], [ "Song", "Linqi", "" ] ]
Meetings typically involve multiple participants and lengthy conversations, resulting in redundant and trivial content. To overcome these challenges, we propose a two-step framework, Reconstruct before Summarize (RbS), for effective and efficient meeting summarization. RbS first leverages a self-supervised paradigm to annotate essential content by reconstructing the meeting transcripts. Second, we propose a relative positional bucketing (RPB) algorithm to equip (conventional) summarization models to generate the summary. Despite the additional reconstruction process, our proposed RPB significantly compresses the input, leading to faster processing and reduced memory consumption compared to traditional summarization methods. We validate the effectiveness and efficiency of our method through extensive evaluations and analysis. On two meeting summarization datasets, AMI and ICSI, our approach outperforms previous state-of-the-art approaches without relying on large-scale pre-training or expert-grade annotation tools.
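The compression idea can be sketched as saliency-based selection plus log-scale bucketing of the gaps between kept tokens. The scheme, the keep ratio, and the toy scores below are assumptions for illustration, not the paper's exact RPB algorithm:

```python
import math

def compress_with_buckets(tokens, saliency, keep=0.5, n_buckets=8):
    """Keep the most salient tokens; encode the original-position gap between
    consecutive kept tokens as a log-scale bucket id (illustrative scheme)."""
    k = max(1, int(len(tokens) * keep))
    kept = sorted(sorted(range(len(tokens)), key=lambda i: -saliency[i])[:k])
    out, prev = [], 0
    for i in kept:
        gap = i - prev
        bucket = min(n_buckets - 1, int(math.log2(gap + 1)))  # coarse distance id
        out.append((tokens[i], bucket))
        prev = i
    return out

tokens = "so um yeah we should ship the fix on friday".split()
saliency = [0.1, 0.0, 0.1, 0.6, 0.7, 0.9, 0.4, 0.9, 0.3, 0.8]
print(compress_with_buckets(tokens, saliency))
```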
1909.00392
Tae Ha Park
Tae Ha Park, Sumant Sharma, Simone D'Amico
Towards Robust Learning-Based Pose Estimation of Noncooperative Spacecraft
Presented at 2019 AAS/AIAA Astrodynamics Specialist Conference
null
null
AAS 19-840
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This work presents a novel Convolutional Neural Network (CNN) architecture and a training procedure to enable robust and accurate pose estimation of a noncooperative spacecraft. First, a new CNN architecture is introduced that scored fourth place in the recent Pose Estimation Challenge hosted by Stanford's Space Rendezvous Laboratory (SLAB) and the Advanced Concepts Team (ACT) of the European Space Agency (ESA). The proposed architecture first detects the object by regressing a 2D bounding box; a separate network then regresses the 2D locations of the known surface keypoints from an image of the target cropped around the detected Region-of-Interest (RoI). In a single-image pose estimation problem, the extracted 2D keypoints can be used in conjunction with corresponding 3D model coordinates to compute the relative pose via the Perspective-n-Point (PnP) problem. These keypoint locations have known correspondences to those in the 3D model, since the CNN is trained to predict the corners in a pre-defined order, allowing the computationally expensive feature matching process to be bypassed. This work also introduces and explores texture randomization to train a CNN for spaceborne applications. Specifically, Neural Style Transfer (NST) is applied to randomize the texture of the spacecraft in synthetically rendered images. It is shown that using the texture-randomized images of spacecraft for training improves the network's performance on spaceborne images without exposure to them during training. It is also shown that when using the texture-randomized spacecraft images during training, regressing 3D bounding box corners leads to better performance on spaceborne images than regressing surface keypoints, as NST inevitably distorts the spacecraft's geometric features, to which the surface keypoints are more closely related.
[ { "created": "Sun, 1 Sep 2019 13:22:19 GMT", "version": "v1" } ]
2019-09-04
[ [ "Park", "Tae Ha", "" ], [ "Sharma", "Sumant", "" ], [ "D'Amico", "Simone", "" ] ]
This work presents a novel Convolutional Neural Network (CNN) architecture and a training procedure to enable robust and accurate pose estimation of a noncooperative spacecraft. First, a new CNN architecture is introduced that scored fourth place in the recent Pose Estimation Challenge hosted by Stanford's Space Rendezvous Laboratory (SLAB) and the Advanced Concepts Team (ACT) of the European Space Agency (ESA). The proposed architecture first detects the object by regressing a 2D bounding box; a separate network then regresses the 2D locations of the known surface keypoints from an image of the target cropped around the detected Region-of-Interest (RoI). In a single-image pose estimation problem, the extracted 2D keypoints can be used in conjunction with corresponding 3D model coordinates to compute the relative pose via the Perspective-n-Point (PnP) problem. These keypoint locations have known correspondences to those in the 3D model, since the CNN is trained to predict the corners in a pre-defined order, allowing the computationally expensive feature matching process to be bypassed. This work also introduces and explores texture randomization to train a CNN for spaceborne applications. Specifically, Neural Style Transfer (NST) is applied to randomize the texture of the spacecraft in synthetically rendered images. It is shown that using the texture-randomized images of spacecraft for training improves the network's performance on spaceborne images without exposure to them during training. It is also shown that when using the texture-randomized spacecraft images during training, regressing 3D bounding box corners leads to better performance on spaceborne images than regressing surface keypoints, as NST inevitably distorts the spacecraft's geometric features, to which the surface keypoints are more closely related.
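The keypoints-to-pose step can be reproduced with OpenCV's standard PnP solver. The 3D model points, 2D detections, and camera intrinsics below are made-up placeholders, not the paper's data:

```python
import cv2
import numpy as np

# Known 3D keypoints on the spacecraft model (metres, body frame) -- placeholders.
object_pts = np.array([[0.3, 0.3, 0.0], [-0.3, 0.3, 0.0],
                       [-0.3, -0.3, 0.0], [0.3, -0.3, 0.0],
                       [0.0, 0.0, 0.5], [0.1, 0.0, 0.2]], dtype=np.float32)
# Matching 2D detections from the keypoint CNN (pixels) -- placeholders.
image_pts = np.array([[420.0, 310.0], [380.0, 308.0], [378.0, 352.0],
                      [422.0, 355.0], [400.0, 290.0], [405.0, 330.0]],
                     dtype=np.float32)
K = np.array([[800.0, 0.0, 320.0],    # assumed pinhole intrinsics
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)                    # assume no lens distortion

# Correspondences are known by construction (keypoints are predicted in a fixed
# order), so no feature matching is needed before solving the PnP problem.
ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, dist)
print("pose found:", ok)
print("rotation (Rodrigues):", rvec.ravel(), "translation:", tvec.ravel())
```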
2402.07945
Runliang Niu
Runliang Niu, Jindong Li, Shiqi Wang, Yali Fu, Xiyu Hu, Xueyuan Leng, He Kong, Yi Chang, Qi Wang
ScreenAgent: A Vision Language Model-driven Computer Control Agent
null
null
null
null
cs.HC cs.AI cs.CV
http://creativecommons.org/licenses/by/4.0/
Existing Large Language Models (LLMs) can invoke a variety of tools and APIs to complete complex tasks. The computer, as the most powerful and universal tool, could potentially be controlled directly by a trained LLM agent. Powered by the computer, we can hopefully build a more generalized agent to assist humans in various daily digital work. In this paper, we construct an environment for a Vision Language Model (VLM) agent to interact with a real computer screen. Within this environment, the agent can observe screenshots and manipulate the Graphical User Interface (GUI) by outputting mouse and keyboard actions. We also design an automated control pipeline that includes planning, acting, and reflecting phases, guiding the agent to continuously interact with the environment and complete multi-step tasks. Additionally, we construct the ScreenAgent Dataset, which collects screenshots and action sequences covering a variety of daily computer tasks. Finally, we train a model, ScreenAgent, which achieves computer control capabilities comparable to GPT-4V and demonstrates more precise UI positioning capabilities. Our attempts could inspire further research on building a generalist LLM agent. The code is available at \url{https://github.com/niuzaisheng/ScreenAgent}.
[ { "created": "Fri, 9 Feb 2024 02:33:45 GMT", "version": "v1" } ]
2024-02-14
[ [ "Niu", "Runliang", "" ], [ "Li", "Jindong", "" ], [ "Wang", "Shiqi", "" ], [ "Fu", "Yali", "" ], [ "Hu", "Xiyu", "" ], [ "Leng", "Xueyuan", "" ], [ "Kong", "He", "" ], [ "Chang", "Yi", "" ], [ "Wang", "Qi", "" ] ]
Existing Large Language Models (LLMs) can invoke a variety of tools and APIs to complete complex tasks. The computer, as the most powerful and universal tool, could potentially be controlled directly by a trained LLM agent. Powered by the computer, we can hopefully build a more generalized agent to assist humans in various daily digital work. In this paper, we construct an environment for a Vision Language Model (VLM) agent to interact with a real computer screen. Within this environment, the agent can observe screenshots and manipulate the Graphical User Interface (GUI) by outputting mouse and keyboard actions. We also design an automated control pipeline that includes planning, acting, and reflecting phases, guiding the agent to continuously interact with the environment and complete multi-step tasks. Additionally, we construct the ScreenAgent Dataset, which collects screenshots and action sequences covering a variety of daily computer tasks. Finally, we train a model, ScreenAgent, which achieves computer control capabilities comparable to GPT-4V and demonstrates more precise UI positioning capabilities. Our attempts could inspire further research on building a generalist LLM agent. The code is available at \url{https://github.com/niuzaisheng/ScreenAgent}.
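The planning/acting/reflecting pipeline described above suggests a control-loop skeleton like the following. Here `vlm.ask`, `capture_screenshot`, and `execute` are hypothetical placeholders, not the ScreenAgent API:

```python
# Skeleton of a planning/acting/reflecting control loop, as described above.
# `vlm`, `capture_screenshot`, and `execute` are hypothetical placeholders.

def run_task(vlm, task, max_steps=20):
    plan = vlm.ask(f"Break this task into GUI sub-tasks: {task}")   # plan phase
    for step in range(max_steps):
        screen = capture_screenshot()                               # observe
        action = vlm.ask("Next mouse/keyboard action?", image=screen, plan=plan)
        execute(action)                                             # act phase
        verdict = vlm.ask("Did that sub-task succeed?",             # reflect phase
                          image=capture_screenshot())
        if verdict == "retry":
            continue                       # reflection rejected the last action
        if verdict == "done":
            return True
    return False
```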
1706.09927
Cedomir Stefanovic
Federico Clazzer, Enrico Paolini, Iacopo Mambelli, Cedomir Stefanovic
Irregular Repetition Slotted ALOHA over the Rayleigh Block Fading Channel with Capture
Presented at ICC 2017
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Random access protocols relying on the transmission of packet replicas in multiple slots and exploiting interference cancellation at the receiver have been shown to achieve performance competitive with that of orthogonal schemes. So far the optimization of the repetition degree profile, defining the probability for a user to transmit a given number of replicas, has mainly been performed targeting the collision channel model. In this paper the analysis is extended to a block fading channel model, also assuming capture effect at the receiver. Density evolution equations are developed for the new setting and, based on them, some repetition degree profiles are optimized and analyzed via Monte Carlo simulation in a finite frame length setting. The derived distributions are shown to achieve throughputs largely exceeding 1 [packet/slot].
[ { "created": "Thu, 29 Jun 2017 19:05:01 GMT", "version": "v1" } ]
2017-07-03
[ [ "Clazzer", "Federico", "" ], [ "Paolini", "Enrico", "" ], [ "Mambelli", "Iacopo", "" ], [ "Stefanovic", "Cedomir", "" ] ]
Random access protocols relying on the transmission of packet replicas in multiple slots and exploiting interference cancellation at the receiver have been shown to achieve performance competitive with that of orthogonal schemes. So far the optimization of the repetition degree profile, defining the probability for a user to transmit a given number of replicas, has mainly been performed targeting the collision channel model. In this paper the analysis is extended to a block fading channel model, also assuming capture effect at the receiver. Density evolution equations are developed for the new setting and, based on them, some repetition degree profiles are optimized and analyzed via Monte Carlo simulation in a finite frame length setting. The derived distributions are shown to achieve throughputs largely exceeding 1 [packet/slot].
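A quick Monte Carlo sketch of this class of protocols, with successive interference cancellation on the plain collision channel (the paper's Rayleigh-fading and capture model is more general); the degree profile below is illustrative, not one of the paper's optimized distributions:

```python
import random
from collections import defaultdict

def irsa_frame(n_users, n_slots, degree_dist):
    """One IRSA frame with iterative interference cancellation (collision
    channel only; no fading or capture)."""
    slots = defaultdict(set)
    for u in range(n_users):
        d = random.choices(*zip(*degree_dist.items()))[0]  # number of replicas
        for s in random.sample(range(n_slots), d):
            slots[s].add(u)
    resolved, progress = set(), True
    while progress:                     # SIC: peel singleton slots repeatedly
        progress = False
        for s, users in slots.items():
            pending = users - resolved
            if len(pending) == 1:       # lone replica decodes; cancel it elsewhere
                resolved |= pending
                progress = True
    return len(resolved)

dist = {2: 0.5, 3: 0.28, 8: 0.22}       # example repetition degree profile
done = sum(irsa_frame(80, 100, dist) for _ in range(200)) / 200
print("avg throughput:", done / 100, "packets/slot")
```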
0911.1972
Neal Patwari
Neal Patwari and Joey Wilson
People-Sensing Spatial Characteristics of RF Sensor Networks
null
null
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
An "RF sensor" network can monitor RSS values on links in the network and perform device-free localization, i.e., locating a person or object moving in the area in which the network is deployed. This paper provides a statistical model for the RSS variance as a function of the person's position w.r.t. the transmitter (TX) and receiver (RX). We show that the ensemble mean of the RSS variance has an approximately linear relationship with the expected total affected power (ETAP). We then use analysis to derive approximate expressions for the ETAP as a function of the person's position, for both scattering and reflection. Counterintuitively, we show that reflection, not scattering, causes the RSS variance contours to be shaped like Cassini ovals. Experimental tests reported here and in past literature are shown to validate the analysis.
[ { "created": "Tue, 10 Nov 2009 19:31:07 GMT", "version": "v1" } ]
2009-11-11
[ [ "Patwari", "Neal", "" ], [ "Wilson", "Joey", "" ] ]
An "RF sensor" network can monitor RSS values on links in the network and perform device-free localization, i.e., locating a person or object moving in the area in which the network is deployed. This paper provides a statistical model for the RSS variance as a function of the person's position w.r.t. the transmitter (TX) and receiver (RX). We show that the ensemble mean of the RSS variance has an approximately linear relationship with the expected total affected power (ETAP). We then use analysis to derive approximate expressions for the ETAP as a function of the person's position, for both scattering and reflection. Counterintuitively, we show that reflection, not scattering, causes the RSS variance contours to be shaped like Cassini ovals. Experimental tests reported here and in past literature are shown to validate the analysis.
1309.6036
Shamgar Gurevich
Alexander Fish and Shamgar Gurevich
Almost Linear Complexity Methods for Delay-Doppler Channel Estimation
4 double column pages. arXiv admin note: substantial text overlap with arXiv:1309.3720
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A fundamental task in wireless communication is channel estimation: compute the channel parameters a signal undergoes while traveling from a transmitter to a receiver. In the case of a delay-Doppler channel, i.e., a signal undergoing only delay and Doppler shifts, a widely used method to compute the delay-Doppler parameters is the pseudo-random method. It uses a pseudo-random sequence of length N, and, in the case of non-trivial relative velocity between transmitter and receiver, its computational complexity is O(N^2 log N) arithmetic operations. In [1] the flag method was introduced to provide a faster algorithm for delay-Doppler channel estimation. It uses specially designed flag sequences, and its complexity is O(rN log N) for channels of sparsity r. In these notes, we introduce the incidence and cross methods for channel estimation. They use triple-chirp and double-chirp sequences of length N, respectively. These sequences are closely related to the chirp sequences widely used in radar systems. The arithmetic complexity of the incidence and cross methods is O(N log N + r^3) and O(N log N + r^2), respectively.
[ { "created": "Tue, 24 Sep 2013 03:30:27 GMT", "version": "v1" } ]
2013-09-25
[ [ "Fish", "Alexander", "" ], [ "Gurevich", "Shamgar", "" ] ]
A fundamental task in wireless communication is channel estimation: compute the channel parameters a signal undergoes while traveling from a transmitter to a receiver. In the case of a delay-Doppler channel, i.e., a signal undergoing only delay and Doppler shifts, a widely used method to compute the delay-Doppler parameters is the pseudo-random method. It uses a pseudo-random sequence of length N, and, in the case of non-trivial relative velocity between transmitter and receiver, its computational complexity is O(N^2 log N) arithmetic operations. In [1] the flag method was introduced to provide a faster algorithm for delay-Doppler channel estimation. It uses specially designed flag sequences, and its complexity is O(rN log N) for channels of sparsity r. In these notes, we introduce the incidence and cross methods for channel estimation. They use triple-chirp and double-chirp sequences of length N, respectively. These sequences are closely related to the chirp sequences widely used in radar systems. The arithmetic complexity of the incidence and cross methods is O(N log N + r^3) and O(N log N + r^2), respectively.
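The O(N log N) factor common to these complexity bounds comes from FFT-based correlation. As a simplified illustration (pure delay, no Doppler shift), the delay can be estimated with one circular cross-correlation computed via FFTs:

```python
import numpy as np

rng = np.random.default_rng(2)
N, true_delay = 1024, 37
s = np.exp(2j * np.pi * rng.random(N))        # unit-modulus probe sequence
r = np.roll(s, true_delay) + 0.1 * (rng.normal(size=N) + 1j * rng.normal(size=N))

# Circular cross-correlation via FFT: O(N log N) instead of O(N^2).
corr = np.fft.ifft(np.fft.fft(r) * np.conj(np.fft.fft(s)))
print("estimated delay:", int(np.argmax(np.abs(corr))))   # -> 37
```

With an unknown Doppler shift, the pseudo-random method repeats such a correlation for each of the N candidate frequency shifts, which is where the O(N^2 log N) cost arises.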
2110.09170
Jordan J. Bird
Jordan J. Bird
Continuation of Famous Art with AI: A Conditional Adversarial Network Inpainting Approach
null
null
null
null
cs.CV cs.GR
http://creativecommons.org/licenses/by-nc-sa/4.0/
Much of the state-of-the-art in image synthesis inspired by real artwork is either entirely generative from filtered random noise or based on style transfer. This work explores the application of image inpainting to continue famous artworks and produce generative art with a Conditional GAN. During the training stage of the process, the borders of images are cropped, leaving only the centre. An inpainting GAN is then tasked with learning to reconstruct the original image from the centre crop by way of minimising both adversarial and absolute difference losses, which are analysed by both their Fr\'echet Inception Distances and the manual observations that are presented. Once the network is trained, images are then resized rather than cropped and presented as input to the generator. Following the learning process, the generator then creates new images by continuing from the edges of the original piece. Three experiments are performed with datasets of 4766 landscape paintings (impressionism and romanticism), 1167 Ukiyo-e works from the Japanese Edo period, and 4968 abstract artworks. Results show that geometry and texture (including canvas and paint) as well as scenery such as sky, clouds, water, land (including hills and mountains), grass, and flowers are implemented by the generator when extending real artworks. In the Ukiyo-e experiments, it was observed that features such as written text were generated even in cases where the original image did not have any, due to the presence of an unpainted border within the input image.
[ { "created": "Mon, 18 Oct 2021 10:39:32 GMT", "version": "v1" }, { "created": "Tue, 26 Oct 2021 18:23:51 GMT", "version": "v2" }, { "created": "Tue, 1 Feb 2022 14:13:18 GMT", "version": "v3" } ]
2022-02-02
[ [ "Bird", "Jordan J.", "" ] ]
Much of the state-of-the-art in image synthesis inspired by real artwork is either entirely generative from filtered random noise or based on style transfer. This work explores the application of image inpainting to continue famous artworks and produce generative art with a Conditional GAN. During the training stage of the process, the borders of images are cropped, leaving only the centre. An inpainting GAN is then tasked with learning to reconstruct the original image from the centre crop by way of minimising both adversarial and absolute difference losses, which are analysed by both their Fr\'echet Inception Distances and the manual observations that are presented. Once the network is trained, images are then resized rather than cropped and presented as input to the generator. Following the learning process, the generator then creates new images by continuing from the edges of the original piece. Three experiments are performed with datasets of 4766 landscape paintings (impressionism and romanticism), 1167 Ukiyo-e works from the Japanese Edo period, and 4968 abstract artworks. Results show that geometry and texture (including canvas and paint) as well as scenery such as sky, clouds, water, land (including hills and mountains), grass, and flowers are implemented by the generator when extending real artworks. In the Ukiyo-e experiments, it was observed that features such as written text were generated even in cases where the original image did not have any, due to the presence of an unpainted border within the input image.
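The crop-for-training / resize-for-inference setup can be sketched as below; the border fraction and nearest-neighbour resizing are illustrative choices, not the paper's exact preprocessing:

```python
import numpy as np

def make_training_pair(img, border=0.25):
    """Target is the full image; the generator only sees the centre crop
    (borders removed), so it must learn to paint the missing borders back."""
    h, w = img.shape[:2]
    dh, dw = int(h * border), int(w * border)
    centre = img[dh:h - dh, dw:w - dw]
    return centre, img

def make_inference_input(img, border=0.25):
    """At inference the whole artwork is resized down to the centre-crop size,
    so the generator 'continues' the piece outward past its real edges."""
    centre, _ = make_training_pair(img, border)
    th, tw = centre.shape[:2]
    ys = (np.arange(th) * img.shape[0] / th).astype(int)   # nearest-neighbour
    xs = (np.arange(tw) * img.shape[1] / tw).astype(int)
    return img[np.ix_(ys, xs)]

art = np.random.rand(256, 256, 3)
centre, target = make_training_pair(art)
print(centre.shape, target.shape, make_inference_input(art).shape)
```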
2211.13737
Dongdong Lin
Dongdong Lin, Benedetta Tondi, Bin Li, Mauro Barni
CycleGANWM: A CycleGAN watermarking method for ownership verification
There is a crucial error in Figure 1, where the "watermark" should be modified
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Due to the proliferation and widespread use of deep neural networks (DNNs), their Intellectual Property Rights (IPR) protection has become increasingly important. This paper presents a novel model watermarking method for unsupervised image-to-image translation (I2IT) networks, named CycleGAN, which balances image translation visual quality and watermark embedding. In this method, a watermark decoder is trained first. The decoder is then frozen and used to extract the watermark bits while training the CycleGAN watermarking model. The CycleGAN watermarking model (CycleGANWM) is trained with specific loss functions and optimized to perform well on both the I2IT task and watermark embedding. For watermark verification, this work uses a statistical significance test to identify the ownership of the model from the extracted watermark bits. We evaluate the robustness of the model against image post-processing and improve it by fine-tuning the model with data augmentation applied to the output images before extracting the watermark bits. We also carry out a surrogate model attack under black-box access to the model. The experimental results show that the proposed method is effective, is robust to some image post-processing operations, and is able to resist surrogate model attacks.
[ { "created": "Thu, 24 Nov 2022 17:56:45 GMT", "version": "v1" }, { "created": "Fri, 9 Dec 2022 15:27:56 GMT", "version": "v2" } ]
2022-12-12
[ [ "Lin", "Dongdong", "" ], [ "Tondi", "Benedetta", "" ], [ "Li", "Bin", "" ], [ "Barni", "Mauro", "" ] ]
Due to the proliferation and widespread use of deep neural networks (DNNs), their Intellectual Property Rights (IPR) protection has become increasingly important. This paper presents a novel model watermarking method for unsupervised image-to-image translation (I2IT) networks, named CycleGAN, which balances image translation visual quality and watermark embedding. In this method, a watermark decoder is trained first. The decoder is then frozen and used to extract the watermark bits while training the CycleGAN watermarking model. The CycleGAN watermarking model (CycleGANWM) is trained with specific loss functions and optimized to perform well on both the I2IT task and watermark embedding. For watermark verification, this work uses a statistical significance test to identify the ownership of the model from the extracted watermark bits. We evaluate the robustness of the model against image post-processing and improve it by fine-tuning the model with data augmentation applied to the output images before extracting the watermark bits. We also carry out a surrogate model attack under black-box access to the model. The experimental results show that the proposed method is effective, is robust to some image post-processing operations, and is able to resist surrogate model attacks.
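A statistical significance test for ownership can be realized as a one-sided binomial test on the number of matching extracted bits, under the null hypothesis that a non-watermarked model matches each bit with probability 1/2. A sketch (the bit counts are made up; the paper's exact test may differ):

```python
from math import comb

def watermark_p_value(matched_bits, total_bits):
    """P(observing >= matched_bits matches by chance), under the null
    hypothesis that an unrelated model matches each bit with prob 1/2."""
    tail = sum(comb(total_bits, k) for k in range(matched_bits, total_bits + 1))
    return tail / 2 ** total_bits

# Suppose 240 of 256 extracted bits match the owner's watermark:
p = watermark_p_value(240, 256)
print(f"p-value = {p:.3e}")          # vanishingly small -> claim ownership
assert p < 1e-6
```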
2104.10357
Qian Chen
Qian Chen, Wen Wang, Qinglin Zhang
Pre-training for Spoken Language Understanding with Joint Textual and Phonetic Representation Learning
Accepted by INTERSPEECH 2021
Proc. Interspeech 2021
10.21437/Interspeech.2021-234
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the traditional cascading architecture for spoken language understanding (SLU), it has been observed that automatic speech recognition errors can be detrimental to the performance of natural language understanding. End-to-end (E2E) SLU models have been proposed to directly map speech input to the desired semantic frame with a single model, hence mitigating ASR error propagation. Recently, pre-training technologies have been explored for these E2E models. In this paper, we propose a novel joint textual-phonetic pre-training approach for learning spoken language representations, aiming to exploit the full potential of phonetic information to improve SLU robustness to ASR errors. We explore phoneme labels as high-level speech features, and design and compare pre-training tasks based on conditional masked language model objectives and inter-sentence relation objectives. We also investigate the efficacy of combining textual and phonetic information during fine-tuning. Experimental results on spoken language understanding benchmarks, Fluent Speech Commands and SNIPS, show that the proposed approach significantly outperforms strong baseline models and improves the robustness of spoken language understanding to ASR errors.
[ { "created": "Wed, 21 Apr 2021 05:19:13 GMT", "version": "v1" }, { "created": "Fri, 18 Jun 2021 07:45:52 GMT", "version": "v2" }, { "created": "Wed, 1 Sep 2021 05:55:00 GMT", "version": "v3" } ]
2021-09-02
[ [ "Chen", "Qian", "" ], [ "Wang", "Wen", "" ], [ "Zhang", "Qinglin", "" ] ]
In the traditional cascading architecture for spoken language understanding (SLU), it has been observed that automatic speech recognition errors can be detrimental to the performance of natural language understanding. End-to-end (E2E) SLU models have been proposed to directly map speech input to the desired semantic frame with a single model, hence mitigating ASR error propagation. Recently, pre-training technologies have been explored for these E2E models. In this paper, we propose a novel joint textual-phonetic pre-training approach for learning spoken language representations, aiming to exploit the full potential of phonetic information to improve SLU robustness to ASR errors. We explore phoneme labels as high-level speech features, and design and compare pre-training tasks based on conditional masked language model objectives and inter-sentence relation objectives. We also investigate the efficacy of combining textual and phonetic information during fine-tuning. Experimental results on spoken language understanding benchmarks, Fluent Speech Commands and SNIPS, show that the proposed approach significantly outperforms strong baseline models and improves the robustness of spoken language understanding to ASR errors.
2204.10945
Jaskaran Grover
Jaskaran Grover, Nishant Mohanty, Wenhao Luo, Changliu Liu, Katia Sycara
Noncooperative Herding With Control Barrier Functions: Theory and Experiments
null
null
null
null
cs.RO math.OC
http://creativecommons.org/licenses/by/4.0/
In this paper, we consider the problem of protecting a high-value unit from inadvertent attack by a group of agents using defending robots. Specifically, we develop a control strategy for the defending agents, which we call "dog robots", to prevent a flock of "sheep agents" from breaching a protected zone. We take recourse to control barrier functions to pose this problem and exploit the interaction dynamics between the sheep and dogs to find the dogs' velocities that result in the sheep being repelled from the zone. We reactively solve a QP that incorporates the defending constraints to compute the desired velocities for all dogs. Owing to this, our proposed framework is composable, \textit{i.e.}, it allows for the simultaneous inclusion of multiple protected zones in the constraints on the dog robots' velocities. We provide a theoretical proof of the feasibility of our strategy for the one dog/one sheep case. Additionally, we provide empirical results of two dogs defending the protected zone from up to ten sheep, averaged over a hundred simulations, and report high success rates. We also demonstrate this algorithm experimentally on non-holonomic robots. Videos of these results are available at https://tinyurl.com/4dj2kjwx.
[ { "created": "Fri, 22 Apr 2022 22:14:03 GMT", "version": "v1" } ]
2022-04-26
[ [ "Grover", "Jaskaran", "" ], [ "Mohanty", "Nishant", "" ], [ "Luo", "Wenhao", "" ], [ "Liu", "Changliu", "" ], [ "Sycara", "Katia", "" ] ]
In this paper, we consider the problem of protecting a high-value unit from inadvertent attack by a group of agents using defending robots. Specifically, we develop a control strategy for the defending agents, which we call "dog robots", to prevent a flock of "sheep agents" from breaching a protected zone. We take recourse to control barrier functions to pose this problem and exploit the interaction dynamics between the sheep and dogs to find the dogs' velocities that result in the sheep being repelled from the zone. We reactively solve a QP that incorporates the defending constraints to compute the desired velocities for all dogs. Owing to this, our proposed framework is composable, \textit{i.e.}, it allows for the simultaneous inclusion of multiple protected zones in the constraints on the dog robots' velocities. We provide a theoretical proof of the feasibility of our strategy for the one dog/one sheep case. Additionally, we provide empirical results of two dogs defending the protected zone from up to ten sheep, averaged over a hundred simulations, and report high success rates. We also demonstrate this algorithm experimentally on non-holonomic robots. Videos of these results are available at https://tinyurl.com/4dj2kjwx.
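Each control barrier function constraint is linear in the dog's velocity, so with a single active constraint the QP reduces to a closed-form half-space projection. A sketch with made-up constraint values (in the paper, `a` and `b` would come from the sheep-dog interaction dynamics and the barrier function):

```python
import numpy as np

def cbf_filter(u_des, a, b):
    """Solve  min ||u - u_des||^2  s.t.  a @ u >= b  (one linear CBF constraint).
    With a single active constraint the QP is a half-space projection."""
    slack = a @ u_des - b
    if slack >= 0:                      # desired velocity is already safe
        return u_des
    return u_des - (slack / (a @ a)) * a

# Hypothetical numbers purely for illustration:
u_des = np.array([1.0, 0.0])            # dog wants to run straight
a, b = np.array([0.0, 1.0]), 0.5        # constraint: u_y >= 0.5
print(cbf_filter(u_des, a, b))          # -> [1.0, 0.5]
```

Composability follows because additional protected zones just add more linear rows to the constraint set of the same QP (the closed form above then gives way to a generic QP solver).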
2408.00981
Junhao Zheng
Junhao Zheng, Haibin Chen, Qianli Ma
Cross-domain Named Entity Recognition via Graph Matching
Findings of ACL 2022; available at https://aclanthology.org/2022.findings-acl.210/; improved presentation
null
10.18653/v1/2022.findings-acl.210
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Cross-domain NER is a practical yet challenging problem, owing to data scarcity in real-world scenarios. A common practice is to first learn an NER model in a rich-resource general domain and then adapt the model to specific domains. Due to the mismatch between entity types across domains, the wide knowledge of the general domain cannot be effectively transferred to the target-domain NER model. To this end, we model the label relationship as a probability distribution and construct label graphs in both the source and target label spaces. To enhance the contextual representation with label structures, we fuse the label graph into the word embeddings output by BERT. By representing label relationships as graphs, we formulate cross-domain NER as a graph matching problem. Furthermore, the proposed method combines well with pre-training methods and is potentially applicable to other cross-domain prediction tasks. Empirical results on four datasets show that our method outperforms a series of transfer learning, multi-task learning, and few-shot learning methods.
[ { "created": "Fri, 2 Aug 2024 02:31:54 GMT", "version": "v1" }, { "created": "Thu, 8 Aug 2024 02:15:53 GMT", "version": "v2" } ]
2024-08-09
[ [ "Zheng", "Junhao", "" ], [ "Chen", "Haibin", "" ], [ "Ma", "Qianli", "" ] ]
Cross-domain NER is a practical yet challenging problem, owing to data scarcity in real-world scenarios. A common practice is to first learn an NER model in a rich-resource general domain and then adapt the model to specific domains. Due to the mismatch between entity types across domains, the wide knowledge of the general domain cannot be effectively transferred to the target-domain NER model. To this end, we model the label relationship as a probability distribution and construct label graphs in both the source and target label spaces. To enhance the contextual representation with label structures, we fuse the label graph into the word embeddings output by BERT. By representing label relationships as graphs, we formulate cross-domain NER as a graph matching problem. Furthermore, the proposed method combines well with pre-training methods and is potentially applicable to other cross-domain prediction tasks. Empirical results on four datasets show that our method outperforms a series of transfer learning, multi-task learning, and few-shot learning methods.
0802.0251
Fabrice Rossi
Fabrice Rossi (INRIA Rocquencourt / INRIA Sophia Antipolis, CEREMADE), Brieuc Conan-Guez (INRIA Rocquencourt / INRIA Sophia Antipolis, LITA)
Multi-Layer Perceptrons and Symbolic Data
null
Symbolic Data Analysis and the SODAS Software Wiley (Ed.) (2008) 373-391
null
null
cs.NE
null
In some real-world situations, linear models are not sufficient to accurately represent complex relations between the input variables and output variables of a studied system. Multilayer Perceptrons are one of the most successful non-linear regression tools, but they are unfortunately restricted to inputs and outputs that belong to a normed vector space. In this chapter, we propose a general recoding method that allows symbolic data to be used both as inputs and outputs to Multilayer Perceptrons. The recoding is quite simple to implement and yet provides a flexible framework that handles almost all practical cases. The proposed method is illustrated on a real-world data set.
[ { "created": "Sat, 2 Feb 2008 15:09:42 GMT", "version": "v1" } ]
2008-02-05
[ [ "Rossi", "Fabrice", "", "INRIA Rocquencourt / INRIA Sophia Antipolis, CEREMADE" ], [ "Conan-Guez", "Brieuc", "", "INRIA Rocquencourt / INRIA Sophia Antipolis, LITA" ] ]
In some real-world situations, linear models are not sufficient to accurately represent complex relations between the input variables and output variables of a studied system. Multilayer Perceptrons are one of the most successful non-linear regression tools, but they are unfortunately restricted to inputs and outputs that belong to a normed vector space. In this chapter, we propose a general recoding method that allows symbolic data to be used both as inputs and outputs to Multilayer Perceptrons. The recoding is quite simple to implement and yet provides a flexible framework that handles almost all practical cases. The proposed method is illustrated on a real-world data set.
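The recoding idea can be sketched as mapping each kind of symbolic variable to a fixed-length numeric vector. The variable kinds and encodings below are common conventions in symbolic data analysis, assumed here for illustration rather than taken from the chapter:

```python
def recode(value, spec):
    """Recode one symbolic variable into a plain numeric vector (sketch):
    intervals -> [lower, upper]; categories -> one-hot; modal variables
    (category/probability pairs) pass through as a distribution vector."""
    kind, domain = spec
    if kind == "interval":                 # e.g. a daily temperature range
        lo, hi = value
        return [float(lo), float(hi)]
    if kind == "categorical":
        return [1.0 if c == value else 0.0 for c in domain]
    if kind == "modal":                    # histogram over the domain
        return [float(value.get(c, 0.0)) for c in domain]
    raise ValueError(kind)

row = recode((12.0, 19.5), ("interval", None)) \
    + recode("urban", ("categorical", ["rural", "urban", "mixed"])) \
    + recode({"red": 0.7, "blue": 0.3}, ("modal", ["red", "green", "blue"]))
print(row)   # MLP-ready input: [12.0, 19.5, 0.0, 1.0, 0.0, 0.7, 0.0, 0.3]
```

The same mapping, run in reverse on the network's outputs, lets symbolic variables appear on the output side as well.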
2303.17508
Tyler Malloy
Tyler Malloy, Miao Liu, Matthew D. Riemer, Tim Klinger, Gerald Tesauro, Chris R. Sims
Learning in Factored Domains with Information-Constrained Visual Representations
null
null
null
null
cs.AI cs.CV cs.HC q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Humans learn quickly even in tasks that contain complex visual information. This is due in part to the efficient formation of compressed representations of visual information, allowing for better generalization and robustness. However, compressed representations alone are insufficient for explaining the high speed of human learning. Reinforcement learning (RL) models that seek to replicate this impressive efficiency may do so through the use of factored representations of tasks. These informationally simple task representations are motivated similarly to the use of compressed representations of visual information. Recent studies have connected biological visual perception to disentangled and compressed representations. This raises the question of how humans learn to efficiently represent visual information in a manner useful for learning tasks. In this paper we present a model of human factored representation learning based on an altered form of a $\beta$-Variational Auto-encoder used in a visual learning task. Modelling results demonstrate a trade-off in the informational complexity of the model's latent space between the speed of learning and the accuracy of reconstructions.
[ { "created": "Thu, 30 Mar 2023 16:22:10 GMT", "version": "v1" } ]
2023-03-31
[ [ "Malloy", "Tyler", "" ], [ "Liu", "Miao", "" ], [ "Riemer", "Matthew D.", "" ], [ "Klinger", "Tim", "" ], [ "Tesauro", "Gerald", "" ], [ "Sims", "Chris R.", "" ] ]
Humans learn quickly even in tasks that contain complex visual information. This is due in part to the efficient formation of compressed representations of visual information, allowing for better generalization and robustness. However, compressed representations alone are insufficient for explaining the high speed of human learning. Reinforcement learning (RL) models that seek to replicate this impressive efficiency may do so through the use of factored representations of tasks. These informationally simple task representations are motivated similarly to the use of compressed representations of visual information. Recent studies have connected biological visual perception to disentangled and compressed representations. This raises the question of how humans learn to efficiently represent visual information in a manner useful for learning tasks. In this paper we present a model of human factored representation learning based on an altered form of a $\beta$-Variational Auto-encoder used in a visual learning task. Modelling results demonstrate a trade-off in the informational complexity of the model's latent space between the speed of learning and the accuracy of reconstructions.
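The information constraint in a $\beta$-Variational Auto-encoder is the $\beta$-weighted KL term of the standard objective. A minimal loss sketch for a diagonal Gaussian encoder (shapes and values are illustrative; the paper uses an altered form of this objective):

```python
import numpy as np

def beta_vae_loss(x, x_hat, mu, logvar, beta=4.0):
    """Reconstruction + beta * KL(q(z|x) || N(0, I)) for a diagonal Gaussian
    encoder. beta > 1 tightens the information bottleneck, trading
    reconstruction accuracy for more compressed (often more factored) codes."""
    recon = np.sum((x - x_hat) ** 2)
    kl = -0.5 * np.sum(1.0 + logvar - mu ** 2 - np.exp(logvar))
    return recon + beta * kl

x = np.ones(10); x_hat = 0.9 * x
mu, logvar = 0.1 * np.ones(4), np.zeros(4)
print(beta_vae_loss(x, x_hat, mu, logvar))
```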
2105.00674
Heiko Paulheim
Michael Matthias Voit and Heiko Paulheim
Bias in Knowledge Graphs -- an Empirical Study with Movie Recommendation and Different Language Editions of DBpedia
Accepted for publication at 3rd Conference on Language, Data and Knowledge (LDK 2021)
null
null
null
cs.IR cs.AI
http://creativecommons.org/licenses/by/4.0/
Public knowledge graphs such as DBpedia and Wikidata have been recognized as interesting sources of background knowledge to build content-based recommender systems. They can be used to add information about the items to be recommended and links between them. While quite a few approaches for exploiting knowledge graphs have been proposed, most of them aim at optimizing the recommendation strategy while using a fixed knowledge graph. In this paper, we take a different approach, i.e., we fix the recommendation strategy and observe changes when using different underlying knowledge graphs. In particular, we use different language editions of DBpedia. We show that using different knowledge graphs leads not only to differently biased recommender systems, but also to recommender systems that differ in performance for particular fields of recommendation.
[ { "created": "Mon, 3 May 2021 08:07:30 GMT", "version": "v1" } ]
2021-05-04
[ [ "Voit", "Michael Matthias", "" ], [ "Paulheim", "Heiko", "" ] ]
Public knowledge graphs such as DBpedia and Wikidata have been recognized as interesting sources of background knowledge to build content-based recommender systems. They can be used to add information about the items to be recommended and links between them. While quite a few approaches for exploiting knowledge graphs have been proposed, most of them aim at optimizing the recommendation strategy while using a fixed knowledge graph. In this paper, we take a different approach, i.e., we fix the recommendation strategy and observe changes when using different underlying knowledge graphs. In particular, we use different language editions of DBpedia. We show that using different knowledge graphs leads not only to differently biased recommender systems, but also to recommender systems that differ in performance for particular fields of recommendation.
2301.09347
Alexander Bentkamp
Alexander Bentkamp, Ramon Fern\'andez Mir, Jeremy Avigad
Verified reductions for optimization
null
null
null
null
cs.LO math.OC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Numerical and symbolic methods for optimization are used extensively in engineering, industry, and finance. Various methods are used to reduce problems of interest to ones that are amenable to solution by such software. We develop a framework for designing and applying such reductions, using the Lean programming language and interactive proof assistant. Formal verification makes the process more reliable, and the availability of an interactive framework and ambient mathematical library provides a robust environment for constructing the reductions and reasoning about them.
[ { "created": "Mon, 23 Jan 2023 10:25:48 GMT", "version": "v1" }, { "created": "Tue, 24 Jan 2023 16:38:03 GMT", "version": "v2" }, { "created": "Wed, 22 Feb 2023 15:37:56 GMT", "version": "v3" } ]
2023-02-23
[ [ "Bentkamp", "Alexander", "" ], [ "Mir", "Ramon Fernández", "" ], [ "Avigad", "Jeremy", "" ] ]
Numerical and symbolic methods for optimization are used extensively in engineering, industry, and finance. Various methods are used to reduce problems of interest to ones that are amenable to solution by such software. We develop a framework for designing and applying such reductions, using the Lean programming language and interactive proof assistant. Formal verification makes the process more reliable, and the availability of an interactive framework and ambient mathematical library provides a robust environment for constructing the reductions and reasoning about them.
2404.08330
Hyesong Choi
Hyesong Choi, Hunsang Lee, Seyoung Joung, Hyejin Park, Jiyeong Kim, Dongbo Min
Emerging Property of Masked Token for Effective Pre-training
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Driven by the success of Masked Language Modeling (MLM), the realm of self-supervised learning for computer vision has been invigorated by the central role of Masked Image Modeling (MIM) in driving recent breakthroughs. Notwithstanding the achievements of MIM across various downstream tasks, its overall efficiency is occasionally hampered by the lengthy duration of the pre-training phase. This paper presents the optimization of masked tokens as a means of addressing this issue. Initially, we delve into an exploration of the inherent properties that a masked token ought to possess. Among these properties, we principally dedicate ourselves to articulating and emphasizing the `data singularity' attribute inherent in masked tokens. Through a comprehensive analysis of the heterogeneity between masked tokens and visible tokens within pre-trained models, we propose a novel approach termed masked token optimization (MTO), specifically designed to improve model efficiency through weight recalibration and the enhancement of the key property of masked tokens. The proposed method serves as an adaptable solution that seamlessly integrates into any MIM approach that leverages masked tokens. As a result, MTO achieves a considerable improvement in pre-training efficiency, resulting in an approximately 50% reduction in the pre-training epochs required to attain the converged performance of recent approaches.
[ { "created": "Fri, 12 Apr 2024 08:46:53 GMT", "version": "v1" } ]
2024-04-15
[ [ "Choi", "Hyesong", "" ], [ "Lee", "Hunsang", "" ], [ "Joung", "Seyoung", "" ], [ "Park", "Hyejin", "" ], [ "Kim", "Jiyeong", "" ], [ "Min", "Dongbo", "" ] ]
Driven by the success of Masked Language Modeling (MLM), the realm of self-supervised learning for computer vision has been invigorated by the central role of Masked Image Modeling (MIM) in driving recent breakthroughs. Notwithstanding the achievements of MIM across various downstream tasks, its overall efficiency is occasionally hampered by the lengthy duration of the pre-training phase. This paper presents the optimization of masked tokens as a means of addressing this issue. Initially, we delve into an exploration of the inherent properties that a masked token ought to possess. Among these properties, we principally dedicate ourselves to articulating and emphasizing the `data singularity' attribute inherent in masked tokens. Through a comprehensive analysis of the heterogeneity between masked tokens and visible tokens within pre-trained models, we propose a novel approach termed masked token optimization (MTO), specifically designed to improve model efficiency through weight recalibration and the enhancement of the key property of masked tokens. The proposed method serves as an adaptable solution that seamlessly integrates into any MIM approach that leverages masked tokens. As a result, MTO achieves a considerable improvement in pre-training efficiency, resulting in an approximately 50% reduction in the pre-training epochs required to attain the converged performance of recent approaches.
1509.09235
Malte Probst
Malte Probst
Generative Adversarial Networks in Estimation of Distribution Algorithms for Combinatorial Optimization
null
null
null
null
cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Estimation of Distribution Algorithms (EDAs) require flexible probability models that can be efficiently learned and sampled. Generative Adversarial Networks (GANs) are generative neural networks which can be trained to implicitly model the probability distribution of given data, and it is possible to sample from this distribution. We integrate a GAN into an EDA and evaluate the performance of this system when solving combinatorial optimization problems with a single objective. We use several standard benchmark problems and compare the results to state-of-the-art multivariate EDAs. GAN-EDA does not yield competitive results - the GAN lacks the ability to quickly learn a good approximation of the probability distribution. A key reason seems to be the large amount of noise present in the first EDA generations.
[ { "created": "Wed, 30 Sep 2015 16:02:59 GMT", "version": "v1" }, { "created": "Mon, 8 Aug 2016 13:01:39 GMT", "version": "v2" } ]
2016-08-09
[ [ "Probst", "Malte", "" ] ]
Estimation of Distribution Algorithms (EDAs) require flexible probability models that can be efficiently learned and sampled. Generative Adversarial Networks (GANs) are generative neural networks which can be trained to implicitly model the probability distribution of given data, and it is possible to sample from this distribution. We integrate a GAN into an EDA and evaluate the performance of this system when solving combinatorial optimization problems with a single objective. We use several standard benchmark problems and compare the results to state-of-the-art multivariate EDAs. GAN-EDA does not yield competitive results - the GAN lacks the ability to quickly learn a good approximation of the probability distribution. A key reason seems to be the large amount of noise present in the first EDA generations.
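As a concrete frame of reference for the record above, here is a toy EDA loop on OneMax with a univariate Bernoulli model; the paper replaces the fit/sample steps of such a model with GAN training and sampling. This is an illustrative skeleton under our own naming, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n, pop, elite = 32, 100, 30
p = np.full(n, 0.5)                              # univariate model parameters
for gen in range(50):
    X = (rng.random((pop, n)) < p).astype(int)   # sample candidate solutions
    fitness = X.sum(axis=1)                      # OneMax objective
    sel = X[np.argsort(-fitness)[:elite]]        # truncation selection
    p = sel.mean(axis=0).clip(0.05, 0.95)        # refit model to the selected
print("best fitness:", int(fitness.max()), "of", n)
```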
1904.08631
Junbao Zhuo
Junbao Zhuo, Shuhui Wang, Shuhao Cui and Qingming Huang
Unsupervised Open Domain Recognition by Semantic Discrepancy Minimization
Accepted to CVPR 2019, 10 pages, 4 figures
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We address the unsupervised open domain recognition (UODR) problem, where the categories in a labeled source domain S are only a subset of those in an unlabeled target domain T. The task is to correctly classify all samples in T, including known and unknown categories. UODR is challenging due to the domain discrepancy, which becomes even harder to bridge when a large number of unknown categories exist in T. Moreover, the classification rules propagated by graph CNN (GCN) may be distracted by unknown categories and lack generalization capability. To measure the domain discrepancy for the asymmetric label space between S and T, we propose Semantic-Guided Matching Discrepancy (SGMD), which first employs instance matching between S and T, and then measures the discrepancy by a weighted feature distance between matched instances. We further design a limited balance constraint to achieve a more balanced classification output on known and unknown categories. We develop Unsupervised Open Domain Transfer Network (UODTN), which learns both the backbone classification network and the GCN jointly by reducing the SGMD, enforcing the limited balance constraint, and minimizing the classification loss on S. UODTN better preserves the semantic structure and enforces consistency between the learned domain-invariant visual features and the semantic embeddings. Experimental results show the superiority of our method in recognizing images of both known and unknown categories.
[ { "created": "Thu, 18 Apr 2019 08:13:54 GMT", "version": "v1" } ]
2019-04-19
[ [ "Zhuo", "Junbao", "" ], [ "Wang", "Shuhui", "" ], [ "Cui", "Shuhao", "" ], [ "Huang", "Qingming", "" ] ]
We address the unsupervised open domain recognition (UODR) problem, where the categories in a labeled source domain S are only a subset of those in an unlabeled target domain T. The task is to correctly classify all samples in T, including known and unknown categories. UODR is challenging due to the domain discrepancy, which becomes even harder to bridge when a large number of unknown categories exist in T. Moreover, the classification rules propagated by graph CNN (GCN) may be distracted by unknown categories and lack generalization capability. To measure the domain discrepancy for the asymmetric label space between S and T, we propose Semantic-Guided Matching Discrepancy (SGMD), which first employs instance matching between S and T, and then measures the discrepancy by a weighted feature distance between matched instances. We further design a limited balance constraint to achieve a more balanced classification output on known and unknown categories. We develop Unsupervised Open Domain Transfer Network (UODTN), which learns both the backbone classification network and the GCN jointly by reducing the SGMD, enforcing the limited balance constraint, and minimizing the classification loss on S. UODTN better preserves the semantic structure and enforces consistency between the learned domain-invariant visual features and the semantic embeddings. Experimental results show the superiority of our method in recognizing images of both known and unknown categories.
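To make the "weighted feature distance between matched instances" concrete, a schematic version might look as follows; the weights are uniform placeholders, whereas the paper derives them from semantic guidance, which is not reproduced here.

```python
import numpy as np

def sgmd(src_feats, tgt_feats, weights=None):
    # src_feats: (Ns, D); tgt_feats: (Nt, D)
    d = np.linalg.norm(src_feats[:, None, :] - tgt_feats[None, :, :], axis=2)
    matched = d.min(axis=1)        # distance to each source point's match
    w = np.ones(len(matched)) if weights is None else np.asarray(weights)
    return float((w * matched).sum() / w.sum())

rng = np.random.default_rng(0)
print(sgmd(rng.normal(size=(5, 4)), rng.normal(size=(7, 4))))
```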
2407.11676
Remi Flamary
Yanis Lalou, Th\'eo Gnassounou, Antoine Collas, Antoine de Mathelin, Oleksii Kachaiev, Ambroise Odonnat, Alexandre Gramfort, Thomas Moreau, R\'emi Flamary
SKADA-Bench: Benchmarking Unsupervised Domain Adaptation Methods with Realistic Validation
null
null
null
null
cs.LG cs.AI stat.ME stat.ML
http://creativecommons.org/licenses/by/4.0/
Unsupervised Domain Adaptation (DA) consists of adapting a model trained on a labeled source domain to perform well on an unlabeled target domain with some data distribution shift. While many methods have been proposed in the literature, fair and realistic evaluation remains an open question, particularly due to methodological difficulties in selecting hyperparameters in the unsupervised setting. With SKADA-Bench, we propose a framework to evaluate DA methods and present a fair evaluation of existing shallow algorithms, including reweighting, mapping, and subspace alignment. Realistic hyperparameter selection is performed with nested cross-validation and various unsupervised model selection scores, on both simulated datasets with controlled shifts and real-world datasets across diverse modalities, such as images, text, biomedical, and tabular data with specific feature extraction. Our benchmark highlights the importance of realistic validation and provides practical guidance for real-life applications, with key insights into the choice and impact of model selection approaches. SKADA-Bench is open-source, reproducible, and can be easily extended with novel DA methods, datasets, and model selection criteria without requiring re-evaluating competitors. SKADA-Bench is available on GitHub at https://github.com/scikit-adaptation/skada-bench.
[ { "created": "Tue, 16 Jul 2024 12:52:29 GMT", "version": "v1" } ]
2024-07-17
[ [ "Lalou", "Yanis", "" ], [ "Gnassounou", "Théo", "" ], [ "Collas", "Antoine", "" ], [ "de Mathelin", "Antoine", "" ], [ "Kachaiev", "Oleksii", "" ], [ "Odonnat", "Ambroise", "" ], [ "Gramfort", "Alexandre", "" ], [ "Moreau", "Thomas", "" ], [ "Flamary", "Rémi", "" ] ]
Unsupervised Domain Adaptation (DA) consists of adapting a model trained on a labeled source domain to perform well on an unlabeled target domain with some data distribution shift. While many methods have been proposed in the literature, fair and realistic evaluation remains an open question, particularly due to methodological difficulties in selecting hyperparameters in the unsupervised setting. With SKADA-Bench, we propose a framework to evaluate DA methods and present a fair evaluation of existing shallow algorithms, including reweighting, mapping, and subspace alignment. Realistic hyperparameter selection is performed with nested cross-validation and various unsupervised model selection scores, on both simulated datasets with controlled shifts and real-world datasets across diverse modalities, such as images, text, biomedical, and tabular data with specific feature extraction. Our benchmark highlights the importance of realistic validation and provides practical guidance for real-life applications, with key insights into the choice and impact of model selection approaches. SKADA-Bench is open-source, reproducible, and can be easily extended with novel DA methods, datasets, and model selection criteria without requiring re-evaluating competitors. SKADA-Bench is available on GitHub at https://github.com/scikit-adaptation/skada-bench.
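For readers unfamiliar with the validation protocol above, the skeleton below shows generic nested cross-validation with scikit-learn; SKADA-Bench pairs this structure with unsupervised DA scorers and DA estimators, which are not reproduced. The estimator and grid are illustrative placeholders.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, cross_val_score

X, y = make_classification(n_samples=200, random_state=0)
inner = GridSearchCV(LogisticRegression(max_iter=500),
                     {"C": [0.1, 1.0, 10.0]}, cv=3)   # inner selection loop
outer_scores = cross_val_score(inner, X, y, cv=5)     # outer evaluation loop
print(outer_scores.mean())
```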
2107.02025
Jim Samuel
Jim Samuel, Ratnakar Palle and Eduardo Correa Soares
Textual Data Distributions: Kullback Leibler Textual Distributions Contrasts on GPT-2 Generated Texts, with Supervised, Unsupervised Learning on Vaccine & Market Topics & Sentiment
null
null
null
null
cs.CL cs.LG cs.SI
http://creativecommons.org/licenses/by-nc-sa/4.0/
Efficient textual data distributions (TDD) alignment and generation are open research problems in textual analytics and NLP. It is presently difficult to parsimoniously and methodologically confirm that two or more natural language datasets belong to similar distributions, and to identify the extent to which textual data possess alignment. This study focuses on addressing a segment of the broader problem described above by applying multiple supervised and unsupervised machine learning (ML) methods to explore the behavior of TDD by (i) topical alignment, and (ii) sentiment alignment. Furthermore, we use multiple text generation methods, including fine-tuned GPT-2, to generate text by topic and by sentiment. Finally, we develop a unique process-driven variation of Kullback-Leibler divergence (KLD) application to TDD, named KL Textual Distributions Contrasts (KL-TDC), to identify the alignment of machine-generated textual corpora with naturally occurring textual corpora. This study thus identifies a unique approach for generating and validating TDD by topic and sentiment, which can be used to help address sparse data problems and other research, practice, and classroom situations in need of artificially generated topic- or sentiment-aligned textual data.
[ { "created": "Tue, 15 Jun 2021 21:30:46 GMT", "version": "v1" } ]
2021-07-06
[ [ "Samuel", "Jim", "" ], [ "Palle", "Ratnakar", "" ], [ "Soares", "Eduardo Correa", "" ] ]
Efficient textual data distributions (TDD) alignment and generation are open research problems in textual analytics and NLP. It is presently difficult to parsimoniously and methodologically confirm that two or more natural language datasets belong to similar distributions, and to identify the extent to which textual data possess alignment. This study focuses on addressing a segment of the broader problem described above by applying multiple supervised and unsupervised machine learning (ML) methods to explore the behavior of TDD by (i) topical alignment, and (ii) sentiment alignment. Furthermore, we use multiple text generation methods, including fine-tuned GPT-2, to generate text by topic and by sentiment. Finally, we develop a unique process-driven variation of Kullback-Leibler divergence (KLD) application to TDD, named KL Textual Distributions Contrasts (KL-TDC), to identify the alignment of machine-generated textual corpora with naturally occurring textual corpora. This study thus identifies a unique approach for generating and validating TDD by topic and sentiment, which can be used to help address sparse data problems and other research, practice, and classroom situations in need of artificially generated topic- or sentiment-aligned textual data.
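A minimal sketch of a KL-divergence contrast between two corpora via smoothed unigram distributions is shown below; KL-TDC's process-driven variation is not reproduced, only the underlying quantity, and the smoothing scheme here is our own assumption.

```python
from collections import Counter
import math

def kl_unigram(corpus_p, corpus_q, eps=1e-9):
    cp, cq = Counter(corpus_p.split()), Counter(corpus_q.split())
    vocab = set(cp) | set(cq)
    tp, tq = sum(cp.values()), sum(cq.values())
    kl = 0.0
    for w in vocab:
        p = (cp[w] + eps) / (tp + eps * len(vocab))   # smoothed estimates
        q = (cq[w] + eps) / (tq + eps * len(vocab))
        kl += p * math.log(p / q)
    return kl

print(kl_unigram("vaccine doses rolled out today",
                 "markets opened higher today"))
```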
2301.05012
Sophie Noiret
Sophie Noiret, Siddharth Ravi, Martin Kampel, Francisco Florez-Revuelta
Fairly Private: Investigating The Fairness of Visual Privacy Preservation Algorithms
Camera-ready version for the PPAI-23 workshop of the AAAI23
null
null
null
cs.CV cs.CR cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
As the privacy risks posed by camera surveillance and facial recognition have grown, so has the research into privacy preservation algorithms. Among these, visual privacy preservation algorithms attempt to impart bodily privacy to subjects in visuals by obfuscating privacy-sensitive areas. While disparate performances of facial recognition systems across phenotypes are the subject of much study, its counterpart, privacy preservation, is not commonly analysed from a fairness perspective. In this paper, the fairness of commonly used visual privacy preservation algorithms is investigated through the performances of facial recognition models on obfuscated images. Experiments on the PubFig dataset clearly show that the privacy protection provided is unequal across groups.
[ { "created": "Thu, 12 Jan 2023 13:40:38 GMT", "version": "v1" } ]
2023-01-13
[ [ "Noiret", "Sophie", "" ], [ "Ravi", "Siddharth", "" ], [ "Kampel", "Martin", "" ], [ "Florez-Revuelta", "Francisco", "" ] ]
As the privacy risks posed by camera surveillance and facial recognition have grown, so has the research into privacy preservation algorithms. Among these, visual privacy preservation algorithms attempt to impart bodily privacy to subjects in visuals by obfuscating privacy-sensitive areas. While disparate performances of facial recognition systems across phenotypes are the subject of much study, its counterpart, privacy preservation, is not commonly analysed from a fairness perspective. In this paper, the fairness of commonly used visual privacy preservation algorithms is investigated through the performances of facial recognition models on obfuscated images. Experiments on the PubFig dataset clearly show that the privacy protection provided is unequal across groups.
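Schematically, the fairness question above reduces to comparing a recognizer's accuracy on obfuscated images across groups, as in the toy check below; the numbers are placeholders, whereas the study computes them on PubFig with real obfuscation methods.

```python
def per_group_gap(results):
    # results: group -> (correct, total) for a recognizer on obfuscated images
    accs = {g: c / t for g, (c, t) in results.items()}
    return accs, max(accs.values()) - min(accs.values())

accs, gap = per_group_gap({"group_a": (41, 100), "group_b": (67, 100)})
print(accs, "protection gap:", gap)   # unequal residual recognizability
```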
2405.13413
Hee-Youl Kwak
Hee-Youl Kwak, Dae-Young Yun, Yongjune Kim, Sang-Hyo Kim, and Jong-Seon No
Boosted Neural Decoders: Achieving Extreme Reliability of LDPC Codes for 6G Networks
12 pages, 11 figures
null
null
null
cs.IT cs.LG eess.SP math.IT
http://creativecommons.org/licenses/by/4.0/
Ensuring extremely high reliability is essential for channel coding in 6G networks. The next-generation ultra-reliable and low-latency communications (xURLLC) scenario within 6G networks requires a frame error rate (FER) below 10^{-9}. However, low-density parity-check (LDPC) codes, the standard in 5G new radio (NR), encounter a challenge known as the error floor phenomenon, which hinders achieving such low rates. To tackle this problem, we introduce an innovative solution: the boosted neural min-sum (NMS) decoder. This decoder operates identically to conventional NMS decoders, but is trained by novel training methods including: i) boosting learning with uncorrected vectors, ii) a block-wise training schedule to address the vanishing gradient issue, iii) dynamic weight sharing to minimize the number of trainable weights, iv) transfer learning to reduce the required sample count, and v) data augmentation to expedite the sampling process. Leveraging these training strategies, the boosted NMS decoder achieves state-of-the-art performance in reducing the error floor as well as superior waterfall performance. Remarkably, we fulfill the 6G xURLLC requirement for 5G LDPC codes without the severe error floor. Additionally, the boosted NMS decoder, once its weights are trained, can perform decoding without additional modules, making it highly practical for immediate application.
[ { "created": "Wed, 22 May 2024 07:48:24 GMT", "version": "v1" } ]
2024-05-24
[ [ "Kwak", "Hee-Youl", "" ], [ "Yun", "Dae-Young", "" ], [ "Kim", "Yongjune", "" ], [ "Kim", "Sang-Hyo", "" ], [ "No", "Jong-Seon", "" ] ]
Ensuring extremely high reliability is essential for channel coding in 6G networks. The next-generation ultra-reliable and low-latency communications (xURLLC) scenario within 6G networks requires a frame error rate (FER) below 10^{-9}. However, low-density parity-check (LDPC) codes, the standard in 5G new radio (NR), encounter a challenge known as the error floor phenomenon, which hinders achieving such low rates. To tackle this problem, we introduce an innovative solution: the boosted neural min-sum (NMS) decoder. This decoder operates identically to conventional NMS decoders, but is trained by novel training methods including: i) boosting learning with uncorrected vectors, ii) a block-wise training schedule to address the vanishing gradient issue, iii) dynamic weight sharing to minimize the number of trainable weights, iv) transfer learning to reduce the required sample count, and v) data augmentation to expedite the sampling process. Leveraging these training strategies, the boosted NMS decoder achieves state-of-the-art performance in reducing the error floor as well as superior waterfall performance. Remarkably, we fulfill the 6G xURLLC requirement for 5G LDPC codes without the severe error floor. Additionally, the boosted NMS decoder, once its weights are trained, can perform decoding without additional modules, making it highly practical for immediate application.
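For context, a single weighted min-sum check-node update, the operation whose weights a neural min-sum decoder trains, can be sketched as follows; the paper's training techniques (boosting, block-wise scheduling, weight sharing) are not shown, and the weight value is an arbitrary placeholder.

```python
import numpy as np

def check_node_update(msgs, weight=0.8):
    # msgs: incoming variable-to-check LLRs on one check node's edges;
    # weight is the trainable scaling a neural min-sum decoder learns.
    msgs = np.asarray(msgs, dtype=float)
    out = np.empty_like(msgs)
    for i in range(len(msgs)):
        others = np.delete(msgs, i)
        out[i] = weight * np.prod(np.sign(others)) * np.abs(others).min()
    return out

print(check_node_update([1.2, -0.4, 2.3]))
```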
1710.02322
Diogo Luvizon
Diogo C. Luvizon, Hedi Tabia, David Picard
Human Pose Regression by Combining Indirect Part Detection and Contextual Information
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we propose an end-to-end trainable regression approach for human pose estimation from still images. We use the proposed Soft-argmax function to convert feature maps directly to joint coordinates, resulting in a fully differentiable framework. Our method is able to learn heat map representations indirectly, without additional steps of artificial ground truth generation. Consequently, contextual information can be included in the pose predictions in a seamless way. We evaluated our method on two very challenging datasets, the Leeds Sports Poses (LSP) and the MPII Human Pose datasets, reaching the best performance among all existing regression methods and comparable results to the state-of-the-art detection-based approaches.
[ { "created": "Fri, 6 Oct 2017 09:27:44 GMT", "version": "v1" } ]
2017-10-09
[ [ "Luvizon", "Diogo C.", "" ], [ "Tabia", "Hedi", "" ], [ "Picard", "David", "" ] ]
In this paper, we propose an end-to-end trainable regression approach for human pose estimation from still images. We use the proposed Soft-argmax function to convert feature maps directly to joint coordinates, resulting in a fully differentiable framework. Our method is able to learn heat map representations indirectly, without additional steps of artificial ground truth generation. Consequently, contextual information can be included in the pose predictions in a seamless way. We evaluated our method on two very challenging datasets, the Leeds Sports Poses (LSP) and the MPII Human Pose datasets, reaching the best performance among all existing regression methods and comparable results to the state-of-the-art detection-based approaches.
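The core soft-argmax idea can be sketched in a few lines: softmax-normalize a heat map and take the expectation of the coordinate grid, which is fully differentiable. Scaling details and multi-joint batching from the paper are omitted.

```python
import torch

def soft_argmax_2d(heatmap):
    # heatmap: (H, W) raw scores for a single joint
    H, W = heatmap.shape
    probs = torch.softmax(heatmap.reshape(-1), dim=0).reshape(H, W)
    ys = torch.arange(H, dtype=torch.float32)
    xs = torch.arange(W, dtype=torch.float32)
    y = (probs.sum(dim=1) * ys).sum()   # expected row index
    x = (probs.sum(dim=0) * xs).sum()   # expected column index
    return x, y

hm = torch.zeros(8, 8)
hm[5, 2] = 10.0
print(soft_argmax_2d(hm))   # close to (2.0, 5.0)
```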
1905.01468
Steven Kelk
Steven Kelk and Simone Linz
New reduction rules for the tree bisection and reconnection distance
Accepted for journal publication. This version contains extra figures. Keywords: fixed-parameter tractability, tree bisection and reconnection, generator, kernelization, agreement forest, phylogenetic network, phylogenetic tree, hybridization number
Annals of Combinatorics, 24:475-502, 2020
10.1007/s00026-020-00502-7
null
cs.DS q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recently it was shown that, if the subtree and chain reduction rules have been applied exhaustively to two unrooted phylogenetic trees, the reduced trees will have at most 15k-9 taxa where k is the TBR (Tree Bisection and Reconnection) distance between the two trees, and that this bound is tight. Here we propose five new reduction rules and show that these further reduce the bound to 11k-9. The new rules combine the ``unrooted generator'' approach introduced in [Kelk and Linz 2018] with a careful analysis of agreement forests to identify (i) situations when chains of length 3 can be further shortened without reducing the TBR distance, and (ii) situations when small subtrees can be identified whose deletion is guaranteed to reduce the TBR distance by 1. To the best of our knowledge these are the first reduction rules that strictly enhance the reductive power of the subtree and chain reduction rules.
[ { "created": "Sat, 4 May 2019 10:00:09 GMT", "version": "v1" }, { "created": "Sun, 14 Jun 2020 06:51:19 GMT", "version": "v2" } ]
2021-04-13
[ [ "Kelk", "Steven", "" ], [ "Linz", "Simone", "" ] ]
Recently it was shown that, if the subtree and chain reduction rules have been applied exhaustively to two unrooted phylogenetic trees, the reduced trees will have at most 15k-9 taxa where k is the TBR (Tree Bisection and Reconnection) distance between the two trees, and that this bound is tight. Here we propose five new reduction rules and show that these further reduce the bound to 11k-9. The new rules combine the ``unrooted generator'' approach introduced in [Kelk and Linz 2018] with a careful analysis of agreement forests to identify (i) situations when chains of length 3 can be further shortened without reducing the TBR distance, and (ii) situations when small subtrees can be identified whose deletion is guaranteed to reduce the TBR distance by 1. To the best of our knowledge these are the first reduction rules that strictly enhance the reductive power of the subtree and chain reduction rules.
2212.13448
Ju-Bong Kim
Ju-Bong Kim, Ho-Bin Choi, Youn-Hee Han
Strangeness-driven Exploration in Multi-Agent Reinforcement Learning
9 pages, 7 figures
null
null
null
cs.LG cs.AI cs.MA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
An efficient exploration strategy is one of the essential issues in cooperative multi-agent reinforcement learning (MARL) algorithms requiring complex coordination. In this study, we introduce a new exploration method based on strangeness that can be easily incorporated into any centralized training and decentralized execution (CTDE)-based MARL algorithm. The strangeness refers to the degree of unfamiliarity of the observations that an agent visits. In order to give the observation strangeness a global perspective, it is also augmented with the degree of unfamiliarity of the entire visited state. The exploration bonus is obtained from the strangeness, and the proposed exploration method is not much affected by the stochastic transitions commonly observed in MARL tasks. To prevent a high exploration bonus from making the MARL training insensitive to extrinsic rewards, we also propose a separate action-value function trained by both the extrinsic reward and the exploration bonus, based on which a behavioral policy to generate transitions is designed. This makes CTDE-based MARL algorithms more stable when they are used with an exploration method. Through a comparative evaluation in didactic examples and the StarCraft Multi-Agent Challenge, we show that the proposed exploration method achieves significant performance improvement in CTDE-based MARL algorithms.
[ { "created": "Tue, 27 Dec 2022 11:08:49 GMT", "version": "v1" } ]
2022-12-29
[ [ "Kim", "Ju-Bong", "" ], [ "Choi", "Ho-Bin", "" ], [ "Han", "Youn-Hee", "" ] ]
An efficient exploration strategy is one of the essential issues in cooperative multi-agent reinforcement learning (MARL) algorithms requiring complex coordination. In this study, we introduce a new exploration method based on strangeness that can be easily incorporated into any centralized training and decentralized execution (CTDE)-based MARL algorithm. The strangeness refers to the degree of unfamiliarity of the observations that an agent visits. In order to give the observation strangeness a global perspective, it is also augmented with the degree of unfamiliarity of the entire visited state. The exploration bonus is obtained from the strangeness, and the proposed exploration method is not much affected by the stochastic transitions commonly observed in MARL tasks. To prevent a high exploration bonus from making the MARL training insensitive to extrinsic rewards, we also propose a separate action-value function trained by both the extrinsic reward and the exploration bonus, based on which a behavioral policy to generate transitions is designed. This makes CTDE-based MARL algorithms more stable when they are used with an exploration method. Through a comparative evaluation in didactic examples and the StarCraft Multi-Agent Challenge, we show that the proposed exploration method achieves significant performance improvement in CTDE-based MARL algorithms.
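As a rough analogue of an unfamiliarity-based bonus, the toy function below scores observations by count-based novelty over a coarse discretization; the paper's strangeness measure and its global-state augmentation are not reproduced, and the discretization is our own assumption.

```python
from collections import defaultdict
import numpy as np

counts = defaultdict(int)

def exploration_bonus(obs, scale=1.0):
    key = tuple(np.round(obs, 1))         # coarse discretization of observation
    counts[key] += 1
    return scale / np.sqrt(counts[key])   # unfamiliar -> large bonus

print(exploration_bonus(np.array([0.12, -0.33])))   # 1.0 on first visit
print(exploration_bonus(np.array([0.12, -0.33])))   # ~0.707 on revisit
```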
2404.18562
F\'atima Rodr\'iguez-Gal\'an
F\'atima Rodr\'iguez-Gal\'an, Ama Bandara, Elana Pereira de Santana, Peter Haring Bol\'ivar, Eduard Alarc\'on and Sergi Abadal
Time Reversal for Near-Field Communications on Multi-chip Wireless Networks
null
null
null
null
cs.AR
http://creativecommons.org/licenses/by/4.0/
Wireless Network-on-Chip (WNoC) has been proposed as a low-latency, versatile, and broadcast-capable complement to current interconnects in the quest for satisfying the ever-increasing communications needs of modern computing systems. However, to realize the promise of WNoC, multiple wireless links operating at several tens of Gb/s need to be created within a computing package. Unfortunately, the highly integrated and enclosed nature of such computing packages incurs significant Co-Channel Interference (CCI) and Inter-Symbol Interference (ISI), not only preventing the deployment of multiple spatial channels, but also severely limiting the symbol rate of each individual channel. In this work, Time Reversal (TR) is proposed as a means to compensate the channel impairments and enable multiple concurrent high-speed links at the chip scale. We offer evidence, via full-wave simulations at 140 GHz, that TR can increase the symbol rate by an order of magnitude and allow the deployment of multiple concurrent links towards achieving aggregate speeds in excess of 100 Gb/s. Finally, the challenges relative to the realization of TR at the chip scale are analyzed from the implementation, protocol support, and architectural perspectives.
[ { "created": "Mon, 29 Apr 2024 10:09:16 GMT", "version": "v1" }, { "created": "Tue, 30 Apr 2024 08:32:14 GMT", "version": "v2" } ]
2024-05-01
[ [ "Rodríguez-Galán", "Fátima", "" ], [ "Bandara", "Ama", "" ], [ "de Santana", "Elana Pereira", "" ], [ "Bolívar", "Peter Haring", "" ], [ "Alarcón", "Eduard", "" ], [ "Abadal", "Sergi", "" ] ]
Wireless Network-on-Chip (WNoC) has been proposed as a low-latency, versatile, and broadcast-capable complement to current interconnects in the quest for satisfying the ever-increasing communications needs of modern computing systems. However, to realize the promise of WNoC, multiple wireless links operating at several tens of Gb/s need to be created within a computing package. Unfortunately, the highly integrated and enclosed nature of such computing packages incurs significant Co-Channel Interference (CCI) and Inter-Symbol Interference (ISI), not only preventing the deployment of multiple spatial channels, but also severely limiting the symbol rate of each individual channel. In this work, Time Reversal (TR) is proposed as a means to compensate the channel impairments and enable multiple concurrent high-speed links at the chip scale. We offer evidence, via full-wave simulations at 140 GHz, that TR can increase the symbol rate by an order of magnitude and allow the deployment of multiple concurrent links towards achieving aggregate speeds in excess of 100 Gb/s. Finally, the challenges relative to the realization of TR at the chip scale are analyzed from the implementation, protocol support, and architectural perspectives.
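The core of time reversal is easy to sketch: the transmit filter is the complex conjugate of the time-reversed channel impulse response, which concentrates energy at the intended receiver. The taps below are random placeholders, not measured in-package responses at 140 GHz.

```python
import numpy as np

rng = np.random.default_rng(1)
h = (rng.normal(size=8) + 1j * rng.normal(size=8)) / np.sqrt(2)  # channel taps

tr_filter = np.conj(h[::-1])                # time-reversed conjugate prefilter
effective = np.convolve(h, tr_filter)       # equivalent end-to-end channel
peak = np.abs(effective).max()
residual = np.abs(effective).sum() - peak
print(peak, residual)   # strong central peak vs. remaining ISI energy
```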
2302.13939
Rui-Jie Zhu
Rui-Jie Zhu, Qihang Zhao, Guoqi Li, Jason K. Eshraghian
SpikeGPT: Generative Pre-trained Language Model with Spiking Neural Networks
Accepted by TMLR
null
null
null
cs.CL cs.LG cs.NE
http://creativecommons.org/licenses/by-nc-sa/4.0/
As the size of large language models continues to scale, so do the computational resources required to run them. Spiking Neural Networks (SNNs) have emerged as an energy-efficient approach to deep learning that leverages sparse and event-driven activations to reduce the computational overhead associated with model inference. While they have become competitive with non-spiking models on many computer vision tasks, SNNs have also proven to be more challenging to train. As a result, their performance lags behind modern deep learning, and we are yet to see the effectiveness of SNNs in language generation. In this paper, inspired by the Receptance Weighted Key Value (RWKV) language model, we successfully implement `SpikeGPT', a generative language model with binary, event-driven spiking activation units. We train the proposed model in two model variants: 45M and 216M parameters. To the best of our knowledge, SpikeGPT is the largest backpropagation-trained SNN model to date, rendering it suitable for both the generation and comprehension of natural language. We achieve this by modifying the transformer block to replace multi-head self-attention, reducing the quadratic computational complexity O(N^2) to linear complexity O(N) with increasing sequence length. Input tokens are instead streamed in sequentially to our attention mechanism (as with typical SNNs). Our preliminary experiments show that SpikeGPT remains competitive with non-spiking models on the tested benchmarks, while maintaining 20x fewer operations when processed on neuromorphic hardware that can leverage sparse, event-driven activations. Our code implementation is available at https://github.com/ridgerchu/SpikeGPT.
[ { "created": "Mon, 27 Feb 2023 16:43:04 GMT", "version": "v1" }, { "created": "Tue, 28 Feb 2023 06:28:43 GMT", "version": "v2" }, { "created": "Mon, 26 Jun 2023 02:38:07 GMT", "version": "v3" }, { "created": "Tue, 27 Jun 2023 02:55:23 GMT", "version": "v4" }, { "created": "Thu, 11 Jul 2024 10:16:12 GMT", "version": "v5" } ]
2024-07-12
[ [ "Zhu", "Rui-Jie", "" ], [ "Zhao", "Qihang", "" ], [ "Li", "Guoqi", "" ], [ "Eshraghian", "Jason K.", "" ] ]
As the size of large language models continues to scale, so do the computational resources required to run them. Spiking Neural Networks (SNNs) have emerged as an energy-efficient approach to deep learning that leverages sparse and event-driven activations to reduce the computational overhead associated with model inference. While they have become competitive with non-spiking models on many computer vision tasks, SNNs have also proven to be more challenging to train. As a result, their performance lags behind modern deep learning, and we are yet to see the effectiveness of SNNs in language generation. In this paper, inspired by the Receptance Weighted Key Value (RWKV) language model, we successfully implement `SpikeGPT', a generative language model with binary, event-driven spiking activation units. We train the proposed model in two model variants: 45M and 216M parameters. To the best of our knowledge, SpikeGPT is the largest backpropagation-trained SNN model to date, rendering it suitable for both the generation and comprehension of natural language. We achieve this by modifying the transformer block to replace multi-head self-attention, reducing the quadratic computational complexity O(N^2) to linear complexity O(N) with increasing sequence length. Input tokens are instead streamed in sequentially to our attention mechanism (as with typical SNNs). Our preliminary experiments show that SpikeGPT remains competitive with non-spiking models on the tested benchmarks, while maintaining 20x fewer operations when processed on neuromorphic hardware that can leverage sparse, event-driven activations. Our code implementation is available at https://github.com/ridgerchu/SpikeGPT.
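The O(N) idea above can be illustrated with a toy linear-attention recurrence that streams tokens one by one while accumulating key-value statistics; RWKV/SpikeGPT specifics such as decay terms and spiking activations are deliberately omitted.

```python
import numpy as np

def linear_attention(qs, ks, vs):
    D = qs.shape[1]
    S = np.zeros((D, D))          # running sum of outer(k, v)
    z = np.zeros(D)               # running sum of k, for normalization
    outs = []
    for q, k, v in zip(qs, ks, vs):
        S += np.outer(k, v)
        z += k
        outs.append(S.T @ q / (z @ q + 1e-9))   # constant work per token
    return np.array(outs)

T, D = 5, 4
x = np.abs(np.random.default_rng(0).normal(size=(T, D)))  # nonneg features
print(linear_attention(x, x, x).shape)                    # (5, 4)
```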
2402.16871
Sascha Ossowski
Alberto Fern\'andez, Holger Billhardt, Sascha Ossowski, \'Oscar S\'anchez
Bike3S: A Tool for Bike Sharing Systems Simulation
null
Journal of Simulation 14(4), 2020
10.1080/17477778.2020.1718022
null
cs.MA cs.AI
http://creativecommons.org/licenses/by/4.0/
Vehicle sharing systems are becoming increasingly popular. The effectiveness of such systems depends, among other factors, on different strategic and operational management decisions and policies, like the dimension of the fleet or the distribution of vehicles. It is of foremost importance to be able to anticipate and evaluate the potential effects of such strategies before they can be successfully deployed. In this paper we present Bike3S, a simulator for a station-based bike sharing system. The simulator performs semi-realistic simulations of the operation of a bike sharing system and allows for evaluating and testing different management decisions and strategies. In particular, the simulator has been designed to test different station capacities, station distributions, and balancing strategies. The simulator carries out microscopic agent-based simulations, where users of different types can be defined that act according to their individual goals and objectives, which influences the overall dynamics of the whole system.
[ { "created": "Wed, 24 Jan 2024 17:33:40 GMT", "version": "v1" } ]
2024-02-28
[ [ "Fernández", "Alberto", "" ], [ "Billhardt", "Holger", "" ], [ "Ossowski", "Sascha", "" ], [ "Sánchez", "Óscar", "" ] ]
Vehicle sharing systems are becoming increasingly popular. The effectiveness of such systems depends, among other factors, on different strategic and operational management decisions and policies, like the dimension of the fleet or the distribution of vehicles. It is of foremost importance to be able to anticipate and evaluate the potential effects of such strategies before they can be successfully deployed. In this paper we present Bike3S, a simulator for a station-based bike sharing system. The simulator performs semi-realistic simulations of the operation of a bike sharing system and allows for evaluating and testing different management decisions and strategies. In particular, the simulator has been designed to test different station capacities, station distributions, and balancing strategies. The simulator carries out microscopic agent-based simulations, where users of different types can be defined that act according to their individual goals and objectives, which influences the overall dynamics of the whole system.
1909.07745
Ali Ghadirzadeh
Xi Chen, Ali Ghadirzadeh, M{\aa}rten Bj\"orkman and Patric Jensfelt
Adversarial Feature Training for Generalizable Robotic Visuomotor Control
null
null
null
null
cs.RO cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep reinforcement learning (RL) has enabled training action-selection policies, end-to-end, by learning a function which maps image pixels to action outputs. However, its application to visuomotor robotic policy training has been limited because of the challenge of large-scale data collection when working with physical hardware. A suitable visuomotor policy should perform well not just for the task setup it has been trained for, but also for all varieties of the task, including novel objects at different viewpoints surrounded by task-irrelevant objects. However, it is impractical for a robotic setup to collect sufficiently many interactive samples in an RL framework to generalize well to novel aspects of a task. In this work, we demonstrate that by using adversarial training for domain transfer, it is possible to train visuomotor policies based on RL frameworks, and then transfer the acquired policy to other novel task domains. We propose to leverage the deep RL capabilities to learn complex visuomotor skills for uncomplicated task setups, and then exploit transfer learning to generalize to new task domains provided only still images of the task in the target domain. We evaluate our method on two real robotic tasks, picking and pouring, and compare it to a number of prior works, demonstrating its superiority.
[ { "created": "Tue, 17 Sep 2019 12:18:34 GMT", "version": "v1" } ]
2019-09-18
[ [ "Chen", "Xi", "" ], [ "Ghadirzadeh", "Ali", "" ], [ "Björkman", "Mårten", "" ], [ "Jensfelt", "Patric", "" ] ]
Deep reinforcement learning (RL) has enabled training action-selection policies, end-to-end, by learning a function which maps image pixels to action outputs. However, its application to visuomotor robotic policy training has been limited because of the challenge of large-scale data collection when working with physical hardware. A suitable visuomotor policy should perform well not just for the task setup it has been trained for, but also for all varieties of the task, including novel objects at different viewpoints surrounded by task-irrelevant objects. However, it is impractical for a robotic setup to collect sufficiently many interactive samples in an RL framework to generalize well to novel aspects of a task. In this work, we demonstrate that by using adversarial training for domain transfer, it is possible to train visuomotor policies based on RL frameworks, and then transfer the acquired policy to other novel task domains. We propose to leverage the deep RL capabilities to learn complex visuomotor skills for uncomplicated task setups, and then exploit transfer learning to generalize to new task domains provided only still images of the task in the target domain. We evaluate our method on two real robotic tasks, picking and pouring, and compare it to a number of prior works, demonstrating its superiority.
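One common ingredient of adversarial training for domain transfer, shown only as a generic sketch and not necessarily this paper's exact objective, is a domain classifier trained through a gradient-reversal operation so the feature extractor learns domain-confusing features.

```python
import torch

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad_out):
        return -grad_out   # flip gradients flowing to the feature extractor

feats = torch.randn(8, 16, requires_grad=True)    # stand-in for CNN features
domain_head = torch.nn.Linear(16, 2)              # source-vs-target classifier
logits = domain_head(GradReverse.apply(feats))
domains = torch.randint(0, 2, (8,))               # 0 = source, 1 = target
loss = torch.nn.functional.cross_entropy(logits, domains)
loss.backward()   # feats.grad now pushes features toward domain confusion
```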
2406.07257
Hamed Babaei Giglou
Hamed Babaei Giglou, Tilahun Abedissa Taffa, Rana Abdullah, Aida Usmanova, Ricardo Usbeck, Jennifer D'Souza, S\"oren Auer
Scholarly Question Answering using Large Language Models in the NFDI4DataScience Gateway
13 pages main content, 16 pages overall, 3 Figures, accepted for publication at NSLP 2024 workshop at ESWC 2024
null
null
null
cs.CL cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper introduces a scholarly Question Answering (QA) system on top of the NFDI4DataScience Gateway, employing a Retrieval Augmented Generation-based (RAG) approach. The NFDI4DS Gateway, as a foundational framework, offers a unified and intuitive interface for querying various scientific databases using federated search. The RAG-based scholarly QA, powered by a Large Language Model (LLM), facilitates dynamic interaction with search results, enhancing filtering capabilities and fostering a conversational engagement with the Gateway search. The effectiveness of both the Gateway and the scholarly QA system is demonstrated through experimental analysis.
[ { "created": "Tue, 11 Jun 2024 13:36:19 GMT", "version": "v1" } ]
2024-06-12
[ [ "Giglou", "Hamed Babaei", "" ], [ "Taffa", "Tilahun Abedissa", "" ], [ "Abdullah", "Rana", "" ], [ "Usmanova", "Aida", "" ], [ "Usbeck", "Ricardo", "" ], [ "D'Souza", "Jennifer", "" ], [ "Auer", "Sören", "" ] ]
This paper introduces a scholarly Question Answering (QA) system on top of the NFDI4DataScience Gateway, employing a Retrieval Augmented Generation-based (RAG) approach. The NFDI4DS Gateway, as a foundational framework, offers a unified and intuitive interface for querying various scientific databases using federated search. The RAG-based scholarly QA, powered by a Large Language Model (LLM), facilitates dynamic interaction with search results, enhancing filtering capabilities and fostering a conversational engagement with the Gateway search. The effectiveness of both the Gateway and the scholarly QA system is demonstrated through experimental analysis.
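A minimal retrieval-augmented answering skeleton is sketched below; the toy retriever and prompt builder stand in for the Gateway's federated search and the actual LLM client, neither of which is reproduced.

```python
def retrieve(query, docs, k=2):
    # toy lexical-overlap retriever standing in for federated search
    scored = sorted(docs, key=lambda d: -sum(w in d for w in query.split()))
    return scored[:k]

def build_prompt(query, passages):
    context = "\n".join(f"- {p}" for p in passages)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = ["federated search over research datasets",
        "question answering with large language models"]
q = "how does federated dataset search work?"
print(build_prompt(q, retrieve(q, docs)))   # this prompt would go to the LLM
```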
2010.08012
Agnieszka Maria S{\l}owik
Alex Lamb, Anirudh Goyal, Agnieszka S{\l}owik, Michael Mozer, Philippe Beaudoin, Yoshua Bengio
Neural Function Modules with Sparse Arguments: A Dynamic Approach to Integrating Information across Layers
null
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Feed-forward neural networks consist of a sequence of layers, in which each layer performs some processing on the information from the previous layer. A downside to this approach is that each layer (or module, as multiple modules can operate in parallel) is tasked with processing the entire hidden state, rather than a particular part of the state which is most relevant for that module. Methods which only operate on a small number of input variables are an essential part of most programming languages, and they allow for improved modularity and code re-usability. Our proposed method, Neural Function Modules (NFM), aims to introduce the same structural capability into deep learning. Most of the work in the context of feed-forward networks combining top-down and bottom-up feedback is limited to classification problems. The key contribution of our work is to combine attention, sparsity, top-down and bottom-up feedback, in a flexible algorithm which, as we show, improves the results in standard classification, out-of-domain generalization, generative modeling, and learning representations in the context of reinforcement learning.
[ { "created": "Thu, 15 Oct 2020 20:43:17 GMT", "version": "v1" } ]
2020-10-19
[ [ "Lamb", "Alex", "" ], [ "Goyal", "Anirudh", "" ], [ "Słowik", "Agnieszka", "" ], [ "Mozer", "Michael", "" ], [ "Beaudoin", "Philippe", "" ], [ "Bengio", "Yoshua", "" ] ]
Feed-forward neural networks consist of a sequence of layers, in which each layer performs some processing on the information from the previous layer. A downside to this approach is that each layer (or module, as multiple modules can operate in parallel) is tasked with processing the entire hidden state, rather than a particular part of the state which is most relevant for that module. Methods which only operate on a small number of input variables are an essential part of most programming languages, and they allow for improved modularity and code re-usability. Our proposed method, Neural Function Modules (NFM), aims to introduce the same structural capability into deep learning. Most of the work in the context of feed-forward networks combining top-down and bottom-up feedback is limited to classification problems. The key contribution of our work is to combine attention, sparsity, top-down and bottom-up feedback, in a flexible algorithm which, as we show, improves the results in standard classification, out-of-domain generalization, generative modeling, and learning representations in the context of reinforcement learning.
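The "sparse arguments" idea above can be illustrated by a module that attends over candidate inputs but reads only the top-k of them, as in the sketch below; the paper's top-down/bottom-up routing is not modeled, and all names are ours.

```python
import numpy as np

def sparse_attend(query, candidates, k=2):
    scores = candidates @ query            # score every candidate input
    keep = np.argsort(-scores)[:k]         # sparse arguments: top-k only
    w = np.exp(scores[keep] - scores[keep].max())
    w /= w.sum()
    return w @ candidates[keep]            # read just the k selected inputs

rng = np.random.default_rng(0)
print(sparse_attend(rng.normal(size=4), rng.normal(size=(6, 4))))
```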
2407.00482
Barproda Halder
Barproda Halder, Faisal Hamman, Pasan Dissanayake, Qiuyi Zhang, Ilia Sucholutsky, Sanghamitra Dutta
Quantifying Spuriousness of Biased Datasets Using Partial Information Decomposition
Accepted at ICML 2024 Workshop on Data-centric Machine Learning Research (DMLR): Datasets for Foundation Models
null
null
null
cs.LG cs.AI cs.CV cs.CY cs.IT math.IT
http://creativecommons.org/licenses/by/4.0/
Spurious patterns refer to a mathematical association between two or more variables in a dataset that are not causally related. However, this notion of spuriousness, which is usually introduced due to sampling biases in the dataset, has classically lacked a formal definition. To address this gap, this work presents the first information-theoretic formalization of spuriousness in a dataset (given a split of spurious and core features) using a mathematical framework called Partial Information Decomposition (PID). Specifically, we disentangle the joint information content that the spurious and core features share about another target variable (e.g., the prediction label) into distinct components, namely unique, redundant, and synergistic information. We propose the use of unique information, with roots in Blackwell Sufficiency, as a novel metric to formally quantify dataset spuriousness and derive its desirable properties. We empirically demonstrate how higher unique information in the spurious features in a dataset could lead a model into choosing the spurious features over the core features for inference, often having low worst-group-accuracy. We also propose a novel autoencoder-based estimator for computing unique information that is able to handle high-dimensional image data. Finally, we also show how this unique information in the spurious feature is reduced across several dataset-based spurious-pattern-mitigation techniques such as data reweighting and varying levels of background mixing, demonstrating a novel tradeoff between unique information (spuriousness) and worst-group-accuracy.
[ { "created": "Sat, 29 Jun 2024 16:05:47 GMT", "version": "v1" } ]
2024-07-02
[ [ "Halder", "Barproda", "" ], [ "Hamman", "Faisal", "" ], [ "Dissanayake", "Pasan", "" ], [ "Zhang", "Qiuyi", "" ], [ "Sucholutsky", "Ilia", "" ], [ "Dutta", "Sanghamitra", "" ] ]
Spurious patterns refer to a mathematical association between two or more variables in a dataset that are not causally related. However, this notion of spuriousness, which is usually introduced due to sampling biases in the dataset, has classically lacked a formal definition. To address this gap, this work presents the first information-theoretic formalization of spuriousness in a dataset (given a split of spurious and core features) using a mathematical framework called Partial Information Decomposition (PID). Specifically, we disentangle the joint information content that the spurious and core features share about another target variable (e.g., the prediction label) into distinct components, namely unique, redundant, and synergistic information. We propose the use of unique information, with roots in Blackwell Sufficiency, as a novel metric to formally quantify dataset spuriousness and derive its desirable properties. We empirically demonstrate how higher unique information in the spurious features in a dataset could lead a model into choosing the spurious features over the core features for inference, often having low worst-group-accuracy. We also propose a novel autoencoder-based estimator for computing unique information that is able to handle high-dimensional image data. Finally, we also show how this unique information in the spurious feature is reduced across several dataset-based spurious-pattern-mitigation techniques such as data reweighting and varying levels of background mixing, demonstrating a novel tradeoff between unique information (spuriousness) and worst-group-accuracy.
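For orientation, the PID identity the record alludes to can be written in a standard form (notation is ours and may differ from the paper's), with S the spurious features, C the core features, and Y the target:

```latex
% Standard PID decomposition of the joint information that spurious
% features S and core features C carry about the target Y:
I(Y; S, C) = \mathrm{Unq}(Y : S \mid C) + \mathrm{Unq}(Y : C \mid S)
           + \mathrm{Red}(Y : S, C) + \mathrm{Syn}(Y : S, C)
```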
2303.00076
Jan Vyb\'iral
Cornelia Schneider, Jan Vyb\'iral
A multivariate Riesz basis of ReLU neural networks
null
null
null
null
cs.IT math.FA math.IT
http://creativecommons.org/licenses/by/4.0/
We consider the trigonometric-like system of piecewise linear functions introduced recently by Daubechies, DeVore, Foucart, Hanin, and Petrova. We provide an alternative proof that this system forms a Riesz basis of $L_2([0,1])$ based on the Gershgorin theorem. We also generalize this system to higher dimensions $d>1$ by a construction, which avoids using (tensor) products. As a consequence, the functions from the new Riesz basis of $L_2([0,1]^d)$ can be easily represented by neural networks. Moreover, the Riesz constants of this system are independent of $d$, making it an attractive building block regarding future multivariate analysis of neural networks.
[ { "created": "Tue, 28 Feb 2023 20:48:03 GMT", "version": "v1" } ]
2023-03-02
[ [ "Schneider", "Cornelia", "" ], [ "Vybíral", "Jan", "" ] ]
We consider the trigonometric-like system of piecewise linear functions introduced recently by Daubechies, DeVore, Foucart, Hanin, and Petrova. We provide an alternative proof that this system forms a Riesz basis of $L_2([0,1])$ based on the Gershgorin theorem. We also generalize this system to higher dimensions $d>1$ by a construction, which avoids using (tensor) products. As a consequence, the functions from the new Riesz basis of $L_2([0,1]^d)$ can be easily represented by neural networks. Moreover, the Riesz constants of this system are independent of $d$, making it an attractive building block regarding future multivariate analysis of neural networks.
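To see why piecewise linear basis functions are natural for ReLU networks, the sketch below writes a hat function on [0,1] with three ReLUs; the specific Riesz system of the paper is not encoded, and the knots are arbitrary.

```python
import numpy as np

relu = lambda t: np.maximum(t, 0.0)

def hat(x, a, b, c):
    # rises linearly on [a, b], falls on [b, c], zero elsewhere
    s1, s2 = 1.0 / (b - a), 1.0 / (c - b)
    return s1 * relu(x - a) - (s1 + s2) * relu(x - b) + s2 * relu(x - c)

x = np.linspace(0, 1, 5)
print(hat(x, 0.0, 0.5, 1.0))   # [0. 0.5 1. 0.5 0.], peak at x = 0.5
```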
1604.02610
Santiago Segarra
Santiago Segarra, Antonio G. Marques, Gonzalo Mateos, and Alejandro Ribeiro
Network Topology Identification from Spectral Templates
null
null
null
null
cs.SI physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Network topology inference is a cornerstone problem in statistical analyses of complex systems. In this context, the fresh look advocated here draws benefits from convex optimization and graph signal processing to identify the so-termed graph shift operator (encoding the network topology) given only the eigenvectors of the shift. These spectral templates can be obtained, for example, from principal component analysis of a set of graph signals defined on the particular network. The novel idea is to find a graph shift that, while being consistent with the provided spectral information, endows the network structure with certain desired properties such as sparsity. The focus is on developing efficient recovery algorithms along with identifiability conditions for two particular shifts, the adjacency matrix and the normalized graph Laplacian. Application domains include network topology identification from steady-state signals generated by a diffusion process, and design of a graph filter that facilitates the distributed implementation of a prescribed linear network operator. Numerical tests showcase the effectiveness of the proposed algorithms in recovering synthetic and structural brain networks.
[ { "created": "Sat, 9 Apr 2016 21:56:40 GMT", "version": "v1" } ]
2016-04-12
[ [ "Segarra", "Santiago", "" ], [ "Marques", "Antonio G.", "" ], [ "Mateos", "Gonzalo", "" ], [ "Ribeiro", "Alejandro", "" ] ]
Network topology inference is a cornerstone problem in statistical analyses of complex systems. In this context, the fresh look advocated here draws benefits from convex optimization and graph signal processing to identify the so-termed graph shift operator (encoding the network topology) given only the eigenvectors of the shift. These spectral templates can be obtained, for example, from principal component analysis of a set of graph signals defined on the particular network. The novel idea is to find a graph shift that, while being consistent with the provided spectral information, endows the network structure with certain desired properties such as sparsity. The focus is on developing efficient recovery algorithms along with identifiability conditions for two particular shifts, the adjacency matrix and the normalized graph Laplacian. Application domains include network topology identification from steady-state signals generated by a diffusion process, and design of a graph filter that facilitates the distributed implementation of a prescribed linear network operator. Numerical tests showcase the effectiveness of the proposed algorithms in recovering synthetic and structural brain networks.
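A simplified version of the recovery problem, under our own modeling assumptions rather than the paper's exact constraints, searches over eigenvalues so that S = V diag(lambda) V^T is a sparse, valid adjacency; identifiability conditions are glossed over and exact recovery is not guaranteed in this toy form.

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
A = (rng.random((6, 6)) < 0.4).astype(float)     # ground-truth adjacency
A = np.triu(A, 1); A = A + A.T
_, V = np.linalg.eigh(A)                         # known eigenvectors (templates)

lam = cp.Variable(6)
S = V @ cp.diag(lam) @ V.T                       # shift sharing those eigenvectors
constraints = [cp.diag(S) == 0,                  # adjacency: zero diagonal
               S >= 0,                           # nonnegative edge weights
               cp.sum(S[0, :]) == 1]             # normalization to exclude S = 0
prob = cp.Problem(cp.Minimize(cp.norm1(cp.vec(S))), constraints)
prob.solve()
print(np.round(S.value, 2))
```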
1710.01968
Sebastian Schlag
Robin Andre, Sebastian Schlag and Christian Schulz
Memetic Multilevel Hypergraph Partitioning
null
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Hypergraph partitioning has a wide range of important applications such as VLSI design or scientific computing. With a focus on solution quality, we develop the first multilevel memetic algorithm to tackle the problem. Key components of our contribution are new effective multilevel recombination and mutation operations that provide a large amount of diversity. We perform a wide range of experiments on a benchmark set containing instances from application areas such as VLSI, SAT solving, social networks, and scientific computing. Compared to the state-of-the-art hypergraph partitioning tools hMetis, PaToH, and KaHyPar, our new algorithm computes the best result on almost all instances.
[ { "created": "Thu, 5 Oct 2017 11:20:45 GMT", "version": "v1" }, { "created": "Sat, 3 Feb 2018 12:37:55 GMT", "version": "v2" } ]
2018-02-06
[ [ "Andre", "Robin", "" ], [ "Schlag", "Sebastian", "" ], [ "Schulz", "Christian", "" ] ]
Hypergraph partitioning has a wide range of important applications such as VLSI design or scientific computing. With a focus on solution quality, we develop the first multilevel memetic algorithm to tackle the problem. Key components of our contribution are new effective multilevel recombination and mutation operations that provide a large amount of diversity. We perform a wide range of experiments on a benchmark set containing instances from application areas such as VLSI, SAT solving, social networks, and scientific computing. Compared to the state-of-the-art hypergraph partitioning tools hMetis, PaToH, and KaHyPar, our new algorithm computes the best result on almost all instances.
2201.07916
Lizhong Chen
Drew Penney, Bin Li, Jaroslaw Sydir, Lizhong Chen, Charlie Tai, Stefan Lee, Eoin Walsh, Thomas Long
PROMPT: Learning Dynamic Resource Allocation Policies for Network Applications
Accepted in Future Generation Computer Systems (FGCS)
null
10.1016/j.future.2023.03.016
null
cs.LG cs.SY eess.SY
http://creativecommons.org/licenses/by-nc-nd/4.0/
A growing number of service providers are exploring methods to improve server utilization and reduce power consumption by co-scheduling high-priority latency-critical workloads with best-effort workloads. This practice requires strict resource allocation between workloads to reduce contention and maintain Quality-of-Service (QoS) guarantees. Prior work demonstrated promising opportunities to dynamically allocate resources based on workload demand, but may fail to meet QoS objectives in more stringent operating environments due to the presence of resource allocation cliffs, transient fluctuations in workload performance, and rapidly changing resource demand. We therefore propose PROMPT, a novel resource allocation framework using proactive QoS prediction to guide a reinforcement learning controller. PROMPT enables more precise resource optimization, more consistent handling of transient behaviors, and more robust generalization when co-scheduling new best-effort workloads not encountered during policy training. Evaluation shows that the proposed method incurs 4.2x fewer QoS violations, reduces severity of QoS violations by 12.7x, improves best-effort workload performance, and improves overall power efficiency over prior work.
[ { "created": "Wed, 19 Jan 2022 23:34:34 GMT", "version": "v1" }, { "created": "Sat, 25 Mar 2023 01:07:57 GMT", "version": "v2" } ]
2023-03-28
[ [ "Penney", "Drew", "" ], [ "Li", "Bin", "" ], [ "Sydir", "Jaroslaw", "" ], [ "Chen", "Lizhong", "" ], [ "Tai", "Charlie", "" ], [ "Lee", "Stefan", "" ], [ "Walsh", "Eoin", "" ], [ "Long", "Thomas", "" ] ]
A growing number of service providers are exploring methods to improve server utilization and reduce power consumption by co-scheduling high-priority latency-critical workloads with best-effort workloads. This practice requires strict resource allocation between workloads to reduce contention and maintain Quality-of-Service (QoS) guarantees. Prior work demonstrated promising opportunities to dynamically allocate resources based on workload demand, but may fail to meet QoS objectives in more stringent operating environments due to the presence of resource allocation cliffs, transient fluctuations in workload performance, and rapidly changing resource demand. We therefore propose PROMPT, a novel resource allocation framework using proactive QoS prediction to guide a reinforcement learning controller. PROMPT enables more precise resource optimization, more consistent handling of transient behaviors, and more robust generalization when co-scheduling new best-effort workloads not encountered during policy training. Evaluation shows that the proposed method incurs 4.2x fewer QoS violations, reduces severity of QoS violations by 12.7x, improves best-effort workload performance, and improves overall power efficiency over prior work.
2205.04980
Shujian Zhang
Shujian Zhang, Chengyue Gong, Xingchao Liu, Pengcheng He, Weizhu Chen, Mingyuan Zhou
ALLSH: Active Learning Guided by Local Sensitivity and Hardness
NAACL 2022 (finding); Our code is publicly available at https://github.com/szhang42/allsh
null
null
null
cs.CL cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
Active learning, which effectively collects informative unlabeled data for annotation, reduces the demand for labeled data. In this work, we propose to retrieve unlabeled samples with a local-sensitivity and hardness-aware acquisition function. The proposed method generates data copies through local perturbations and selects the data points whose predictive likelihoods diverge the most from their copies. We further empower our acquisition function by injecting the selected worst-case perturbation. Our method achieves consistent gains over commonly used active learning strategies in various classification tasks. Furthermore, we observe consistent improvements over the baselines in the study of prompt selection in prompt-based few-shot learning. These experiments demonstrate that our acquisition guided by local sensitivity and hardness can be effective and beneficial for many NLP tasks.
[ { "created": "Tue, 10 May 2022 15:39:11 GMT", "version": "v1" }, { "created": "Fri, 23 Sep 2022 21:11:18 GMT", "version": "v2" } ]
2022-09-27
[ [ "Zhang", "Shujian", "" ], [ "Gong", "Chengyue", "" ], [ "Liu", "Xingchao", "" ], [ "He", "Pengcheng", "" ], [ "Chen", "Weizhu", "" ], [ "Zhou", "Mingyuan", "" ] ]
Active learning, which effectively collects informative unlabeled data for annotation, reduces the demand for labeled data. In this work, we propose to retrieve unlabeled samples with a local-sensitivity and hardness-aware acquisition function. The proposed method generates data copies through local perturbations and selects the data points whose predictive likelihoods diverge the most from their copies. We further empower our acquisition function by injecting the selected worst-case perturbation. Our method achieves consistent gains over commonly used active learning strategies in various classification tasks. Furthermore, we observe consistent improvements over the baselines in the study of prompt selection in prompt-based few-shot learning. These experiments demonstrate that our acquisition guided by local sensitivity and hardness can be effective and beneficial for many NLP tasks.
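A sketch of a local-sensitivity acquisition score is given below: the KL divergence between a model's predictions on an input and on a perturbed copy, with higher divergence marking a more informative candidate. The model and perturbation are toy stand-ins for the paper's components.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def kl(p, q, eps=1e-12):
    p, q = np.clip(p, eps, 1.0), np.clip(q, eps, 1.0)
    return float((p * np.log(p / q)).sum())

W = np.array([[1.0, -0.5], [-1.0, 0.5]])        # toy 2-class linear model
predict = lambda x: softmax(W @ x)
perturb = lambda x: x + 0.1                     # toy local perturbation

x = np.array([0.3, -0.2])
print(kl(predict(x), predict(perturb(x))))      # higher = more informative
```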
2105.01265
Adrian Dumitrescu
Adrian Dumitrescu
Finding Triangles or Independent Sets; and Other Dual Pair Approximations
13 pages, no figure
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We revisit the algorithmic problem of finding a triangle in a graph (\textsc{Triangle Detection}), and examine its relation to other problems such as \textsc{3Sum}, \textsc{Independent Set}, and \textsc{Graph Coloring}. We obtain several new algorithms: \smallskip (I) A simple randomized algorithm for finding a triangle in a graph. As an application, we study the range of a conjecture of P\v{a}tra\c{s}cu (2010) regarding the triangle detection problem. \smallskip (II) An algorithm which, given a graph $G=(V,E)$, performs one of the following tasks in $O(m+n)$ (i.e., linear) time: (i)~compute an $\Omega(1/\sqrt{n})$-approximation of a maximum independent set in $G$ or (ii)~find a triangle in $G$. The running time is faster than that of any previous method for each of these tasks. \smallskip (III) An algorithm which, given a graph $G=(V,E)$, performs one of the following tasks in $O(m+n^{3/2})$ time: (i)~compute a $\sqrt{n}$-approximation for \textsc{Graph Coloring} of $G$ or (ii)~find a triangle in $G$. The running time is faster than that of any previous method for each of these tasks on dense graphs, with $m =\omega(n^{9/8})$. \smallskip (IV) The second and third results suggest the following broader research direction: if it is difficult to find (A) or (B) separately, can one find one of the two efficiently? This motivates the \emph{dual pair} concept we introduce. We discuss and provide several instances of dual-pair approximation.
[ { "created": "Tue, 4 May 2021 03:11:37 GMT", "version": "v1" }, { "created": "Sat, 29 Jul 2023 01:53:58 GMT", "version": "v2" }, { "created": "Sun, 11 Feb 2024 14:44:10 GMT", "version": "v3" } ]
2024-02-13
[ [ "Dumitrescu", "Adrian", "" ] ]
We revisit the algorithmic problem of finding a triangle in a graph (\textsc{Triangle Detection}), and examine its relation to other problems such as \textsc{3Sum}, \textsc{Independent Set}, and \textsc{Graph Coloring}. We obtain several new algorithms: \smallskip (I) A simple randomized algorithm for finding a triangle in a graph. As an application, we study the range of a conjecture of P\v{a}tra\c{s}cu (2010) regarding the triangle detection problem. \smallskip (II) An algorithm which, given a graph $G=(V,E)$, performs one of the following tasks in $O(m+n)$ (i.e., linear) time: (i)~compute an $\Omega(1/\sqrt{n})$-approximation of a maximum independent set in $G$ or (ii)~find a triangle in $G$. The running time is faster than that of any previous method for each of these tasks. \smallskip (III) An algorithm which, given a graph $G=(V,E)$, performs one of the following tasks in $O(m+n^{3/2})$ time: (i)~compute a $\sqrt{n}$-approximation for \textsc{Graph Coloring} of $G$ or (ii)~find a triangle in $G$. The running time is faster than that of any previous method for each of these tasks on dense graphs, with $m =\omega(n^{9/8})$. \smallskip (IV) The second and third results suggest the following broader research direction: if it is difficult to find (A) or (B) separately, can one find one of the two efficiently? This motivates the \emph{dual pair} concept we introduce. We discuss and provide several instances of dual-pair approximation.
2406.02612
Ou Wu
Ou Wu, Weiyao Zhu, Mengyang Li
Is Data Valuation Learnable and Interpretable?
null
null
null
null
cs.LG cs.AI
http://creativecommons.org/licenses/by-nc-sa/4.0/
Measuring the value of individual samples is critical for many data-driven tasks, e.g., the training of a deep learning model. Recent literature witnesses substantial efforts in developing data valuation methods. The primary data valuation methodology is based on the Shapley value from game theory, and various methods have been proposed along this path. Even though Shapley value-based valuation has a solid theoretical basis, it is entirely an experiment-based approach, and no valuation model has been constructed so far. In addition, current data valuation methods ignore the interpretability of the output values, even though an interpretable data valuation method would be of great help for applications such as data pricing. This study aims to answer an important question: is data valuation learnable and interpretable? A learned valuation model has several desirable merits, such as a fixed number of parameters and knowledge reusability. An interpretable data valuation model can explain why a sample is valuable or not. To this end, two new data value modeling frameworks are proposed, in which a multi-layer perceptron~(MLP) and a new regression tree are utilized as the base models for model training and interpretability, respectively. Extensive experiments are conducted on benchmark datasets. The experimental results provide a positive answer to the question. Our study opens up a new technical path for assessing data values. Large data valuation models can be built across many different data-driven tasks, which can promote the widespread application of data valuation.
[ { "created": "Mon, 3 Jun 2024 08:13:47 GMT", "version": "v1" } ]
2024-06-06
[ [ "Wu", "Ou", "" ], [ "Zhu", "Weiyao", "" ], [ "Li", "Mengyang", "" ] ]
Measuring the value of individual samples is critical for many data-driven tasks, e.g., the training of a deep learning model. Recent literature witnesses substantial efforts in developing data valuation methods. The primary data valuation methodology is based on the Shapley value from game theory, and various methods have been proposed along this path. Even though Shapley value-based valuation has a solid theoretical basis, it is entirely an experiment-based approach, and no valuation model has been constructed so far. In addition, current data valuation methods ignore the interpretability of the output values, even though an interpretable data valuation method would be of great help for applications such as data pricing. This study aims to answer an important question: is data valuation learnable and interpretable? A learned valuation model has several desirable merits, such as a fixed number of parameters and knowledge reusability. An interpretable data valuation model can explain why a sample is valuable or not. To this end, two new data value modeling frameworks are proposed, in which a multi-layer perceptron~(MLP) and a new regression tree are utilized as the base models for model training and interpretability, respectively. Extensive experiments are conducted on benchmark datasets. The experimental results provide a positive answer to the question. Our study opens up a new technical path for assessing data values. Large data valuation models can be built across many different data-driven tasks, which can promote the widespread application of data valuation.
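As a rough illustration of the Shapley-value methodology this abstract builds on, the following minimal Python sketch estimates per-sample values by Monte Carlo permutation sampling of marginal contributions. It is a sketch of the classical baseline, not the paper's learned valuation model, and train_and_score is a hypothetical placeholder utility.

# Minimal Monte Carlo sketch of Shapley-style data valuation.
# 'train_and_score' is a placeholder: it should fit a model on the given
# subset and return a validation metric; any real learner can be plugged in.
import random

def train_and_score(subset, X, y, X_val, y_val):
    # Placeholder utility: validation accuracy of a majority-class
    # predictor fit on the subset (stands in for a real model).
    if not subset:
        return 0.0
    labels = [y[i] for i in subset]
    majority = max(set(labels), key=labels.count)
    return sum(1 for t in y_val if t == majority) / len(y_val)

def shapley_values(X, y, X_val, y_val, n_perms=200, seed=0):
    rng = random.Random(seed)
    n = len(X)
    values = [0.0] * n
    for _ in range(n_perms):
        perm = list(range(n))
        rng.shuffle(perm)
        prev_score, subset = 0.0, []
        for i in perm:
            subset.append(i)
            score = train_and_score(subset, X, y, X_val, y_val)
            values[i] += score - prev_score  # marginal contribution of i
            prev_score = score
    return [v / n_perms for v in values]

X = [[0], [1], [2], [3]]; y = [0, 0, 1, 1]
print(shapley_values(X, y, X_val=[[0], [2]], y_val=[0, 1], n_perms=50))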
1805.02896
Ilya Verenich
Ilya Verenich, Marlon Dumas, Marcello La Rosa, Fabrizio Maggi, Irene Teinemaa
Survey and cross-benchmark comparison of remaining time prediction methods in business process monitoring
null
null
null
null
cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Predictive business process monitoring methods exploit historical process execution logs to generate predictions about running instances (called cases) of a business process, such as the prediction of the outcome, next activity or remaining cycle time of a given process case. These insights could be used to support operational managers in taking remedial actions as business processes unfold, e.g., shifting resources from one case onto another to ensure the latter is completed on time. A number of methods to tackle the remaining cycle time prediction problem have been proposed in the literature. However, due to differences in their experimental setup, choice of datasets, evaluation measures and baselines, the relative merits of each method remain unclear. This article presents a systematic literature review and taxonomy of methods for remaining time prediction in the context of business processes, as well as a cross-benchmark comparison of 16 such methods based on 16 real-life datasets originating from different industry domains.
[ { "created": "Tue, 8 May 2018 08:38:58 GMT", "version": "v1" }, { "created": "Thu, 10 May 2018 21:56:51 GMT", "version": "v2" } ]
2018-05-14
[ [ "Verenich", "Ilya", "" ], [ "Dumas", "Marlon", "" ], [ "La Rosa", "Marcello", "" ], [ "Maggi", "Fabrizio", "" ], [ "Teinemaa", "Irene", "" ] ]
Predictive business process monitoring methods exploit historical process execution logs to generate predictions about running instances (called cases) of a business process, such as the prediction of the outcome, next activity or remaining cycle time of a given process case. These insights could be used to support operational managers in taking remedial actions as business processes unfold, e.g., shifting resources from one case onto another to ensure the latter is completed on time. A number of methods to tackle the remaining cycle time prediction problem have been proposed in the literature. However, due to differences in their experimental setup, choice of datasets, evaluation measures and baselines, the relative merits of each method remain unclear. This article presents a systematic literature review and taxonomy of methods for remaining time prediction in the context of business processes, as well as a cross-benchmark comparison of 16 such methods based on 16 real-life datasets originating from different industry domains.
2302.14421
Dimitrios Karoukis
Dimitrios Karoukis
Publicly verifiable delegative democracy with secret voting power
11 pages, 2 figures
null
null
null
cs.CR cs.DS cs.SI
http://creativecommons.org/licenses/by-nc-nd/4.0/
In a democratic setting, we introduce a commitment scheme which allows for transparent validation of transfers and reversible delegations of voting power between citizens without sacrificing their privacy. A unit of voting power is publicly represented by the Merkle root of a tree consisting of its latest owner's public key, a random nonce and the Merkle root of the tree of its previous owner's public key and random nonce and so on. A transition includes the input units, their owner's public keys and signatures, the hashes of their nonces and the output units generated with the new owners' public keys and random nonces. In case of a delegation, the receiver provides the sender with the hashed random nonces and hashed public keys for the output units. In case of a transfer, only the precomputed output units are provided by the receiver. In a reversal, a historical owner reveals the hashes of the nonces and public keys that resulted in the subsequent units. To vote, the owner reveals the actual nonces and public keys.
[ { "created": "Tue, 28 Feb 2023 08:54:07 GMT", "version": "v1" }, { "created": "Fri, 5 May 2023 11:48:41 GMT", "version": "v2" } ]
2023-05-08
[ [ "Karoukis", "Dimitrios", "" ] ]
In a democratic setting, we introduce a commitment scheme which allows for transparent validation of transfers and reversible delegations of voting power between citizens without sacrificing their privacy. A unit of voting power is publicly represented by the Merkle root of a tree consisting of its latest owner's public key, a random nonce and the Merkle root of the tree of its previous owner's public key and random nonce and so on. A transition includes the input units, their owner's public keys and signatures, the hashes of their nonces and the output units generated with the new owners' public keys and random nonces. In case of a delegation, the receiver provides the sender with the hashed random nonces and hashed public keys for the output units. In case of a transfer, only the precomputed output units are provided by the receiver. In a reversal, a historical owner reveals the hashes of the nonces and public keys that resulted in the subsequent units. To vote, the owner reveals the actual nonces and public keys.
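A minimal sketch of the chained commitment described above, assuming a single SHA-256 hash over (owner public key, random nonce, previous root) stands in for the Merkle root; a real deployment would use actual Merkle trees and digital signatures, and all names here are illustrative.

# Each unit of voting power is the hash of (owner pubkey, nonce, prev root),
# so ownership history chains forward and can be revealed selectively.
import hashlib, os

def unit_root(pubkey: bytes, nonce: bytes, prev_root: bytes) -> bytes:
    return hashlib.sha256(pubkey + nonce + prev_root).digest()

def transfer(prev_root: bytes, new_pubkey: bytes):
    """Receiver picks a fresh nonce and derives the new public unit."""
    nonce = os.urandom(32)
    return unit_root(new_pubkey, nonce, prev_root), nonce

# Genesis unit owned by Alice, then delegated to Bob:
alice_pk, bob_pk = b"alice-pk", b"bob-pk"
genesis, alice_nonce = transfer(b"\x00" * 32, alice_pk)
bob_unit, bob_nonce = transfer(genesis, bob_pk)
# To vote, Bob reveals bob_pk and bob_nonce so anyone can recompute bob_unit.
assert unit_root(bob_pk, bob_nonce, genesis) == bob_unit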
2103.17028
Hassan Noura
Jean-Paul A. Yaacoub, Hassan N. Noura, Ola Salman and Ali Chehab
Digital Forensics vs. Anti-Digital Forensics: Techniques, Limitations and Recommendations
null
null
null
null
cs.CR
http://creativecommons.org/publicdomain/zero/1.0/
The number of cyber attacks has increased tremendously in the last few years, resulting in both human and financial losses at the individual and organization levels. Recently, cyber-criminals have been leveraging new skills and capabilities by employing anti-forensics activities, techniques and tools to cover their tracks and evade any possible detection. Consequently, cyber-attacks are becoming more efficient and more sophisticated. Therefore, traditional cryptographic and non-cryptographic solutions and access control systems are no longer enough to prevent such cyber attacks, especially in terms of acquiring evidence for attack investigation. Hence, well-defined, sophisticated, and advanced forensics investigation tools are highly required to track down cyber criminals and to reduce the number of cyber crimes. This paper reviews the different forensics and anti-forensics methods, tools, techniques, types, and challenges, while also discussing the rise of anti-anti-forensics as a new forensics protection mechanism against anti-forensics activities. This would help forensics investigators to better understand the different anti-forensics tools, methods and techniques that cyber criminals employ while launching their attacks. Moreover, the limitations of the current forensics techniques are discussed, especially in terms of issues and challenges. Finally, this paper presents a holistic view of the forensics domain from a literature point of view, and also helps fellow researchers in their quest to further understand the digital forensics domain.
[ { "created": "Wed, 31 Mar 2021 12:27:08 GMT", "version": "v1" } ]
2021-04-01
[ [ "Yaacoub", "Jean-Paul A.", "" ], [ "Noura", "Hassan N.", "" ], [ "Salman", "Ola", "" ], [ "Chehab", "Ali", "" ] ]
The number of cyber attacks has increased tremendously in the last few years, resulting in both human and financial losses at the individual and organization levels. Recently, cyber-criminals have been leveraging new skills and capabilities by employing anti-forensics activities, techniques and tools to cover their tracks and evade any possible detection. Consequently, cyber-attacks are becoming more efficient and more sophisticated. Therefore, traditional cryptographic and non-cryptographic solutions and access control systems are no longer enough to prevent such cyber attacks, especially in terms of acquiring evidence for attack investigation. Hence, well-defined, sophisticated, and advanced forensics investigation tools are highly required to track down cyber criminals and to reduce the number of cyber crimes. This paper reviews the different forensics and anti-forensics methods, tools, techniques, types, and challenges, while also discussing the rise of anti-anti-forensics as a new forensics protection mechanism against anti-forensics activities. This would help forensics investigators to better understand the different anti-forensics tools, methods and techniques that cyber criminals employ while launching their attacks. Moreover, the limitations of the current forensics techniques are discussed, especially in terms of issues and challenges. Finally, this paper presents a holistic view of the forensics domain from a literature point of view, and also helps fellow researchers in their quest to further understand the digital forensics domain.
2107.10998
Dan Liu
Dan Liu, Xi Chen, Jie Fu, Chen Ma, Xue Liu
Pruning Ternary Quantization
Merged with Hyperspherical Quantization: Toward Smaller and More Accurate Models (arXiv:2212.12653.)
null
null
null
cs.CV cs.AI
http://creativecommons.org/licenses/by-nc-nd/4.0/
Inference time, model size, and accuracy are three key factors in deep model compression. Most of the existing work addresses these three key factors separately as it is difficult to optimize them all at the same time. For example, low-bit quantization aims at obtaining a faster model; weight sharing quantization aims at improving compression ratio and accuracy; and mixed-precision quantization aims at balancing accuracy and inference time. To simultaneously optimize bit-width, model size, and accuracy, we propose pruning ternary quantization (PTQ): a simple, effective, symmetric ternary quantization method. We integrate L2 normalization, pruning, and the weight decay term to reduce the weight discrepancy in the gradient estimator during quantization, thus producing highly compressed ternary weights. Our method brings the highest test accuracy and the highest compression ratio. For example, it produces a 939 KB (49$\times$ smaller) 2-bit ternary ResNet-18 model with only a 4\% accuracy drop on the ImageNet dataset. It compresses a 170 MB Mask R-CNN to 5 MB (34$\times$ smaller) with only a 2.8\% average precision drop. Our method is verified on image classification and object detection/segmentation tasks with different network structures such as ResNet-18, ResNet-50, and MobileNetV2.
[ { "created": "Fri, 23 Jul 2021 02:18:00 GMT", "version": "v1" }, { "created": "Wed, 26 Jan 2022 18:21:52 GMT", "version": "v2" }, { "created": "Sat, 24 Dec 2022 04:37:54 GMT", "version": "v3" }, { "created": "Thu, 2 Mar 2023 03:11:04 GMT", "version": "v4" }, { "created": "Fri, 14 Jul 2023 22:37:31 GMT", "version": "v5" } ]
2023-07-18
[ [ "Liu", "Dan", "" ], [ "Chen", "Xi", "" ], [ "Fu", "Jie", "" ], [ "Ma", "Chen", "" ], [ "Liu", "Xue", "" ] ]
Inference time, model size, and accuracy are three key factors in deep model compression. Most of the existing work addresses these three key factors separately as it is difficult to optimize them all at the same time. For example, low-bit quantization aims at obtaining a faster model; weight sharing quantization aims at improving compression ratio and accuracy; and mixed-precision quantization aims at balancing accuracy and inference time. To simultaneously optimize bit-width, model size, and accuracy, we propose pruning ternary quantization (PTQ): a simple, effective, symmetric ternary quantization method. We integrate L2 normalization, pruning, and the weight decay term to reduce the weight discrepancy in the gradient estimator during quantization, thus producing highly compressed ternary weights. Our method brings the highest test accuracy and the highest compression ratio. For example, it produces a 939 KB (49$\times$ smaller) 2-bit ternary ResNet-18 model with only a 4\% accuracy drop on the ImageNet dataset. It compresses a 170 MB Mask R-CNN to 5 MB (34$\times$ smaller) with only a 2.8\% average precision drop. Our method is verified on image classification and object detection/segmentation tasks with different network structures such as ResNet-18, ResNet-50, and MobileNetV2.
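A minimal NumPy sketch of symmetric ternary quantization in the spirit described above, not the paper's full PTQ training pipeline; the pruning-threshold fraction and the per-tensor scale rule are illustrative choices.

# Weights are mapped to {-s, 0, +s}: magnitudes below a threshold are pruned
# to zero, survivors are replaced by a shared scale s times their sign.
import numpy as np

def ternarize(w: np.ndarray, threshold_frac: float = 0.05):
    delta = threshold_frac * np.abs(w).max()   # pruning threshold (assumed rule)
    mask = np.abs(w) > delta                   # weights that survive pruning
    scale = np.abs(w[mask]).mean() if mask.any() else 0.0
    q = np.sign(w) * mask                      # ternary codes in {-1, 0, +1}
    return q.astype(np.int8), scale            # dequantize as scale * q

w = np.random.randn(4, 4).astype(np.float32)
q, s = ternarize(w)
w_hat = s * q   # ternary reconstruction used at inference time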
1604.07160
Naoya Takahashi
Naoya Takahashi, Michael Gygli, Beat Pfister, Luc Van Gool
Deep Convolutional Neural Networks and Data Augmentation for Acoustic Event Detection
Presented in INTERSPEECH 2016
null
null
null
cs.SD cs.MM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a novel method for Acoustic Event Detection (AED). In contrast to speech, sounds coming from acoustic events may be produced by a wide variety of sources. Furthermore, distinguishing them often requires analyzing an extended time period due to the lack of a clear sub-word unit. In order to incorporate the long-time frequency structure for AED, we introduce a convolutional neural network (CNN) with a large input field. In contrast to previous works, this enables training audio event detection end-to-end. Our architecture is inspired by the success of VGGNet and uses small 3x3 convolutions, but with more depth than previous methods in AED. In order to prevent over-fitting and to take full advantage of the modeling capabilities of our network, we further propose a novel data augmentation method to introduce data variation. Experimental results show that our CNN significantly outperforms state-of-the-art methods including Bag of Audio Words (BoAW) and classical CNNs, achieving a 16% absolute improvement.
[ { "created": "Mon, 25 Apr 2016 08:25:03 GMT", "version": "v1" }, { "created": "Thu, 8 Dec 2016 04:28:16 GMT", "version": "v2" } ]
2016-12-09
[ [ "Takahashi", "Naoya", "" ], [ "Gygli", "Michael", "" ], [ "Pfister", "Beat", "" ], [ "Van Gool", "Luc", "" ] ]
We propose a novel method for Acoustic Event Detection (AED). In contrast to speech, sounds coming from acoustic events may be produced by a wide variety of sources. Furthermore, distinguishing them often requires analyzing an extended time period due to the lack of a clear sub-word unit. In order to incorporate the long-time frequency structure for AED, we introduce a convolutional neural network (CNN) with a large input field. In contrast to previous works, this enables training audio event detection end-to-end. Our architecture is inspired by the success of VGGNet and uses small 3x3 convolutions, but with more depth than previous methods in AED. In order to prevent over-fitting and to take full advantage of the modeling capabilities of our network, we further propose a novel data augmentation method to introduce data variation. Experimental results show that our CNN significantly outperforms state-of-the-art methods including Bag of Audio Words (BoAW) and classical CNNs, achieving a 16% absolute improvement.
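A minimal PyTorch sketch of a VGG-inspired audio event classifier built from stacks of small 3x3 convolutions over a log-mel spectrogram input; the channel counts, depth, and number of classes are placeholder choices, not the paper's exact architecture.

# Stacks of 3x3 convolutions with pooling, then a linear classifier head.
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        nn.MaxPool2d(2),
    )

model = nn.Sequential(
    conv_block(1, 32), conv_block(32, 64), conv_block(64, 128),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(128, 10),            # 10 event classes, placeholder
)

x = torch.randn(8, 1, 64, 400)     # batch of 64-band log-mel patches
logits = model(x)                  # shape: (8, 10)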
1008.2824
S Geetha
S. Geetha and N. Kamaraj
Optimized Image Steganalysis through Feature Selection using MBEGA
15 pages, IEEE NetCom 2009 Conference, IJCNC Journal
International Journal of Computer Networks & Communications 2.4 (2010) 161-175
null
null
cs.CR cs.MM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Feature-based steganalysis, an emerging branch in information forensics, aims at identifying the presence of a covert communication by employing the statistical features of the cover and stego image as clues/evidence. Due to the large volumes of security audit data as well as the complex and dynamic properties of steganogram behaviours, optimizing the performance of steganalysers becomes an important open problem. This paper is focused on fine-tuning the performance of six promising steganalysers in this field through feature selection. We propose to employ the Markov Blanket-Embedded Genetic Algorithm (MBEGA) for the stego-sensitive feature selection process. In particular, the embedded Markov blanket-based memetic operators add or delete features (or genes) from a genetic algorithm (GA) solution so as to quickly improve the solution and fine-tune the search. Empirical results suggest that MBEGA is effective and efficient in eliminating irrelevant and redundant features based on both the Markov blanket and predictive power in the classifier model. Observations show that the proposed method is superior to its existing counterparts in terms of the number of selected features, classification accuracy and computational cost.
[ { "created": "Tue, 17 Aug 2010 05:57:36 GMT", "version": "v1" } ]
2010-08-18
[ [ "Geetha", "S.", "" ], [ "Kamaraj", "N.", "" ] ]
Feature-based steganalysis, an emerging branch in information forensics, aims at identifying the presence of a covert communication by employing the statistical features of the cover and stego image as clues/evidence. Due to the large volumes of security audit data as well as the complex and dynamic properties of steganogram behaviours, optimizing the performance of steganalysers becomes an important open problem. This paper is focused on fine-tuning the performance of six promising steganalysers in this field through feature selection. We propose to employ the Markov Blanket-Embedded Genetic Algorithm (MBEGA) for the stego-sensitive feature selection process. In particular, the embedded Markov blanket-based memetic operators add or delete features (or genes) from a genetic algorithm (GA) solution so as to quickly improve the solution and fine-tune the search. Empirical results suggest that MBEGA is effective and efficient in eliminating irrelevant and redundant features based on both the Markov blanket and predictive power in the classifier model. Observations show that the proposed method is superior to its existing counterparts in terms of the number of selected features, classification accuracy and computational cost.
0907.5165
Oliver Johnson
Oliver Johnson, Matthew Aldridge, Robert Piechocki
Interference alignment-based sum capacity bounds for random dense Gaussian interference networks
23 pages
IEEE Transactions on Information Theory, 57:1, 282-290, 2011
10.1109/TIT.2010.2090242
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider a dense $K$ user Gaussian interference network formed by paired transmitters and receivers placed independently at random in a fixed spatial region. Under natural conditions on the node position distributions and signal attenuation, we prove convergence in probability of the average per-user capacity $C_{\mathrm{sum}}/K$ to $\tfrac{1}{2}\,\mathbb{E}\log(1 + 2\,\mathrm{SNR})$. The achievability result follows directly from results based on an interference alignment scheme presented in recent work of Nazer et al. Our main contribution comes through an upper bound, motivated by ideas of `bottleneck capacity' developed in recent work of Jafar. By controlling the physical location of transmitter--receiver pairs, we can match a large proportion of these pairs to form so-called $\epsilon$-bottleneck links, with consequent control of the sum capacity.
[ { "created": "Wed, 29 Jul 2009 15:47:48 GMT", "version": "v1" } ]
2011-09-12
[ [ "Johnson", "Oliver", "" ], [ "Aldridge", "Matthew", "" ], [ "Piechocki", "Robert", "" ] ]
We consider a dense $K$ user Gaussian interference network formed by paired transmitters and receivers placed independently at random in a fixed spatial region. Under natural conditions on the node position distributions and signal attenuation, we prove convergence in probability of the average per-user capacity $C_{\mathrm{sum}}/K$ to $\tfrac{1}{2}\,\mathbb{E}\log(1 + 2\,\mathrm{SNR})$. The achievability result follows directly from results based on an interference alignment scheme presented in recent work of Nazer et al. Our main contribution comes through an upper bound, motivated by ideas of `bottleneck capacity' developed in recent work of Jafar. By controlling the physical location of transmitter--receiver pairs, we can match a large proportion of these pairs to form so-called $\epsilon$-bottleneck links, with consequent control of the sum capacity.
1410.5055
Jun Fang
Jun Fang, Yanning Shen, Fuwei Li, and Hongbin Li
Prior Support Knowledge-Aided Sparse Bayesian Learning with Partly Erroneous Support Information
null
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
It has been shown both experimentally and theoretically that sparse signal recovery can be significantly improved given that part of the signal's support is known \emph{a priori}. In practice, however, such prior knowledge is usually inaccurate and contains errors. Using such knowledge may result in severe performance degradation or even recovery failure. In this paper, we study the problem of sparse signal recovery when partial but partly erroneous prior knowledge of the signal's support is available. Based on the conventional sparse Bayesian learning framework, we propose a modified two-layer Gaussian-inverse Gamma hierarchical prior model and, moreover, an improved three-layer hierarchical prior model. The modified two-layer model employs an individual parameter $b_i$ for each sparsity-controlling hyperparameter $\alpha_i$, and has the ability to place non-sparsity-encouraging priors on those coefficients that are believed to be in the support set. The three-layer hierarchical model is built on the modified two-layer prior model, with a prior placed on the parameters $\{b_i\}$ in the third layer. Such a model makes it possible to automatically learn the true support from partly erroneous information by learning the values of the parameters $\{b_i\}$. Variational Bayesian algorithms are developed based on the proposed hierarchical prior models. Numerical results are provided to illustrate the performance of the proposed algorithms.
[ { "created": "Sun, 19 Oct 2014 10:47:21 GMT", "version": "v1" } ]
2014-10-21
[ [ "Fang", "Jun", "" ], [ "Shen", "Yanning", "" ], [ "Li", "Fuwei", "" ], [ "Li", "Hongbin", "" ] ]
It has been shown both experimentally and theoretically that sparse signal recovery can be significantly improved given that part of the signal's support is known \emph{a priori}. In practice, however, such prior knowledge is usually inaccurate and contains errors. Using such knowledge may result in severe performance degradation or even recovery failure. In this paper, we study the problem of sparse signal recovery when partial but partly erroneous prior knowledge of the signal's support is available. Based on the conventional sparse Bayesian learning framework, we propose a modified two-layer Gaussian-inverse Gamma hierarchical prior model and, moreover, an improved three-layer hierarchical prior model. The modified two-layer model employs an individual parameter $b_i$ for each sparsity-controlling hyperparameter $\alpha_i$, and has the ability to place non-sparsity-encouraging priors on those coefficients that are believed to be in the support set. The three-layer hierarchical model is built on the modified two-layer prior model, with a prior placed on the parameters $\{b_i\}$ in the third layer. Such a model makes it possible to automatically learn the true support from partly erroneous information by learning the values of the parameters $\{b_i\}$. Variational Bayesian algorithms are developed based on the proposed hierarchical prior models. Numerical results are provided to illustrate the performance of the proposed algorithms.
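A hedged sketch of the modified two-layer prior described above, written in standard sparse Bayesian learning notation; the exact parameterization below is an assumption based on common SBL conventions, not a quote from the paper.

% Two-layer Gaussian-inverse Gamma prior with per-coefficient b_i (assumed form):
\begin{align*}
  p(x_i \mid \alpha_i) &= \mathcal{N}\!\left(x_i \mid 0, \alpha_i^{-1}\right), \\
  p(\alpha_i \mid b_i) &= \mathrm{Gamma}(\alpha_i \mid a, b_i).
\end{align*}
% A small b_i keeps the prior on x_i diffuse (non-sparsity-encouraging), so
% coefficients believed to be in the support receive small b_i; the three-layer
% variant additionally places a prior on {b_i} so that erroneous support
% information can be corrected by learning the b_i values.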
2310.12419
Huanyao Rong
Huanyao Rong, Wei You, Xiaofeng Wang, Tianhao Mao
Toward Unbiased Multiple-Target Fuzzing with Path Diversity
null
null
null
null
cs.CR
http://creativecommons.org/licenses/by/4.0/
In this paper, we propose a novel directed fuzzing solution named AFLRun, which features a target path-diversity metric and unbiased energy assignment. Firstly, we develop a new coverage metric by maintaining an extra virgin map for each covered target to track the coverage status of seeds that hit the target. This approach enables storing waypoints that hit a target through an interesting path into the corpus, thus enriching the path diversity for each target. Additionally, we propose a corpus-level energy assignment strategy that guarantees fairness for each target. AFLRun starts with uniform target weights and propagates these weights to seeds to obtain a desired seed weight distribution. By assigning energy to each seed in the corpus according to this desired distribution, precise and unbiased energy assignment can be achieved. We built a prototype system and assessed its performance using a standard benchmark and several extensively fuzzed real-world applications. The evaluation results demonstrate that AFLRun outperforms state-of-the-art fuzzers in terms of vulnerability detection, both in quantity and speed. Moreover, AFLRun uncovers 29 previously unidentified vulnerabilities, including 8 CVEs, across four distinct programs.
[ { "created": "Thu, 19 Oct 2023 02:12:43 GMT", "version": "v1" }, { "created": "Thu, 6 Jun 2024 06:46:00 GMT", "version": "v2" } ]
2024-06-07
[ [ "Rong", "Huanyao", "" ], [ "You", "Wei", "" ], [ "Wang", "Xiaofeng", "" ], [ "Mao", "Tianhao", "" ] ]
In this paper, we propose a novel directed fuzzing solution named AFLRun, which features a target path-diversity metric and unbiased energy assignment. Firstly, we develop a new coverage metric by maintaining an extra virgin map for each covered target to track the coverage status of seeds that hit the target. This approach enables storing waypoints that hit a target through an interesting path into the corpus, thus enriching the path diversity for each target. Additionally, we propose a corpus-level energy assignment strategy that guarantees fairness for each target. AFLRun starts with uniform target weights and propagates these weights to seeds to obtain a desired seed weight distribution. By assigning energy to each seed in the corpus according to this desired distribution, precise and unbiased energy assignment can be achieved. We built a prototype system and assessed its performance using a standard benchmark and several extensively fuzzed real-world applications. The evaluation results demonstrate that AFLRun outperforms state-of-the-art fuzzers in terms of vulnerability detection, both in quantity and speed. Moreover, AFLRun uncovers 29 previously unidentified vulnerabilities, including 8 CVEs, across four distinct programs.
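A minimal sketch of corpus-level energy assignment in the spirit described above: uniform target weights are split among the seeds reaching each target. The equal-split propagation rule here is an illustrative assumption, not AFLRun's exact scheme.

# Propagate uniform per-target weights to seeds, then normalize to a budget.
def seed_energy(seed_hits: dict, total_energy: float) -> dict:
    # seed_hits maps seed name -> set of target names it reaches.
    targets = set().union(*seed_hits.values()) if seed_hits else set()
    target_w = {t: 1.0 / len(targets) for t in targets}   # uniform target weight
    energy = {s: 0.0 for s in seed_hits}
    for t, w in target_w.items():
        reaching = [s for s, hits in seed_hits.items() if t in hits]
        for s in reaching:
            energy[s] += w / len(reaching)                # split weight per target
    norm = sum(energy.values()) or 1.0
    return {s: total_energy * e / norm for s, e in energy.items()}

# Example: seed "a" reaches targets t1 and t2, seed "b" only reaches t2.
print(seed_energy({"a": {"t1", "t2"}, "b": {"t2"}}, total_energy=100.0))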
2312.05311
Jalees Nehvi
Jalees Nehvi, Berna Kabadayi, Julien Valentin, Justus Thies
360{\deg} Volumetric Portrait Avatar
Project page: https://jalees018.github.io/3VP-Avatar/
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose 360{\deg} Volumetric Portrait (3VP) Avatar, a novel method for reconstructing 360{\deg} photo-realistic portrait avatars of human subjects solely based on monocular video inputs. State-of-the-art monocular avatar reconstruction methods rely on stable facial performance capturing. However, the common usage of 3DMM-based facial tracking has its limits; side-views can hardly be captured and it fails, especially, for back-views, as required inputs like facial landmarks or human parsing masks are missing. This results in incomplete avatar reconstructions that only cover the frontal hemisphere. In contrast to this, we propose a template-based tracking of the torso, head and facial expressions which allows us to cover the appearance of a human subject from all sides. Thus, given a sequence of a subject that is rotating in front of a single camera, we train a neural volumetric representation based on neural radiance fields. A key challenge to construct this representation is the modeling of appearance changes, especially, in the mouth region (i.e., lips and teeth). We, therefore, propose a deformation-field-based blend basis which allows us to interpolate between different appearance states. We evaluate our approach on captured real-world data and compare against state-of-the-art monocular reconstruction methods. In contrast to those, our method is the first monocular technique that reconstructs an entire 360{\deg} avatar.
[ { "created": "Fri, 8 Dec 2023 19:00:03 GMT", "version": "v1" } ]
2023-12-12
[ [ "Nehvi", "Jalees", "" ], [ "Kabadayi", "Berna", "" ], [ "Valentin", "Julien", "" ], [ "Thies", "Justus", "" ] ]
We propose 360{\deg} Volumetric Portrait (3VP) Avatar, a novel method for reconstructing 360{\deg} photo-realistic portrait avatars of human subjects solely based on monocular video inputs. State-of-the-art monocular avatar reconstruction methods rely on stable facial performance capturing. However, the common usage of 3DMM-based facial tracking has its limits; side-views can hardly be captured and it fails, especially, for back-views, as required inputs like facial landmarks or human parsing masks are missing. This results in incomplete avatar reconstructions that only cover the frontal hemisphere. In contrast to this, we propose a template-based tracking of the torso, head and facial expressions which allows us to cover the appearance of a human subject from all sides. Thus, given a sequence of a subject that is rotating in front of a single camera, we train a neural volumetric representation based on neural radiance fields. A key challenge to construct this representation is the modeling of appearance changes, especially, in the mouth region (i.e., lips and teeth). We, therefore, propose a deformation-field-based blend basis which allows us to interpolate between different appearance states. We evaluate our approach on captured real-world data and compare against state-of-the-art monocular reconstruction methods. In contrast to those, our method is the first monocular technique that reconstructs an entire 360{\deg} avatar.
2011.03252
Matteo Iovino
Matteo Iovino, Jonathan Styrud, Pietro Falco and Christian Smith
Learning Behavior Trees with Genetic Programming in Unpredictable Environments
null
null
null
null
cs.RO cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Modern industrial applications require robots to be able to operate in unpredictable environments, and programs to be created with minimal effort, as there may be frequent changes to the task. In this paper, we show that genetic programming can be effectively used to learn the structure of a behavior tree (BT) to solve a robotic task in an unpredictable environment. Moreover, we propose to use a simple simulator for the learning, and demonstrate that the learned BTs can solve the same task in a realistic simulator, reaching convergence without the need for task-specific heuristics. The learned solution is tolerant to faults, making our method appealing for real robotic applications.
[ { "created": "Fri, 6 Nov 2020 09:28:23 GMT", "version": "v1" } ]
2020-11-09
[ [ "Iovino", "Matteo", "" ], [ "Styrud", "Jonathan", "" ], [ "Falco", "Pietro", "" ], [ "Smith", "Christian", "" ] ]
Modern industrial applications require robots to be able to operate in unpredictable environments, and programs to be created with minimal effort, as there may be frequent changes to the task. In this paper, we show that genetic programming can be effectively used to learn the structure of a behavior tree (BT) to solve a robotic task in an unpredictable environment. Moreover, we propose to use a simple simulator for the learning, and demonstrate that the learned BTs can solve the same task in a realistic simulator, reaching convergence without the need for task-specific heuristics. The learned solution is tolerant to faults, making our method appealing for real robotic applications.
2203.13551
Miguel Romero
Miguel Romero, Oscar Ram\'irez, Jorge Finke, Camilo Rocha
Feature extraction using Spectral Clustering for Gene Function Prediction using Hierarchical Multi-label Classification
null
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
Gene annotation addresses the problem of predicting unknown associations between genes and functions (e.g., biological processes) of a specific organism. Despite recent advances, the cost and time demanded by annotation procedures that rely largely on in vivo biological experiments remain prohibitively high. This paper presents a novel in silico approach to the annotation problem that combines cluster analysis and hierarchical multi-label classification (HMC). The approach uses spectral clustering to extract new features from the gene co-expression network (GCN) and enrich the prediction task. HMC is used to build multiple estimators that consider the hierarchical structure of gene functions. The proposed approach is applied to a case study on Zea mays, one of the most dominant and productive crops in the world. The results illustrate how in silico approaches are key to reducing the time and costs of gene annotation. More specifically, they highlight the importance of: (i) building new features that represent the structure of gene relationships in GCNs to annotate genes; and (ii) taking into account the structure of biological processes to obtain consistent predictions.
[ { "created": "Fri, 25 Mar 2022 10:17:36 GMT", "version": "v1" }, { "created": "Thu, 28 Apr 2022 21:19:33 GMT", "version": "v2" } ]
2022-05-02
[ [ "Romero", "Miguel", "" ], [ "Ramírez", "Oscar", "" ], [ "Finke", "Jorge", "" ], [ "Rocha", "Camilo", "" ] ]
Gene annotation addresses the problem of predicting unknown associations between genes and functions (e.g., biological processes) of a specific organism. Despite recent advances, the cost and time demanded by annotation procedures that rely largely on in vivo biological experiments remain prohibitively high. This paper presents a novel in silico approach to the annotation problem that combines cluster analysis and hierarchical multi-label classification (HMC). The approach uses spectral clustering to extract new features from the gene co-expression network (GCN) and enrich the prediction task. HMC is used to build multiple estimators that consider the hierarchical structure of gene functions. The proposed approach is applied to a case study on Zea mays, one of the most dominant and productive crops in the world. The results illustrate how in silico approaches are key to reducing the time and costs of gene annotation. More specifically, they highlight the importance of: (i) building new features that represent the structure of gene relationships in GCNs to annotate genes; and (ii) taking into account the structure of biological processes to obtain consistent predictions.
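A minimal sketch of extracting cluster-membership features from a co-expression network with off-the-shelf spectral clustering; the number of clusters and the one-hot encoding are illustrative choices, not the paper's exact setup.

# Spectral clustering on the GCN adjacency matrix, with one-hot cluster
# membership appended as new per-gene features for the downstream classifier.
import numpy as np
from sklearn.cluster import SpectralClustering

def cluster_features(adjacency: np.ndarray, n_clusters: int = 8) -> np.ndarray:
    sc = SpectralClustering(n_clusters=n_clusters, affinity="precomputed",
                            random_state=0)
    labels = sc.fit_predict(adjacency)       # one cluster label per gene
    return np.eye(n_clusters)[labels]        # one-hot membership matrix

# adjacency: symmetric nonnegative co-expression weights between genes
adjacency = np.abs(np.corrcoef(np.random.randn(50, 20)))
extra = cluster_features(adjacency)          # shape: (50, 8)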
1708.05868
Shuping Dang
Shuping Dang and Justin P. Coon and Gaojie Chen and David E. Simmons
Outage Performance Analysis of Multicarrier Relay Selection for Cooperative Networks
null
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we analyze the outage performance of two multicarrier relay selection schemes, i.e. bulk and per-subcarrier selections, for two-hop orthogonal frequency-division multiplexing (OFDM) systems. To provide a comprehensive analysis, three forwarding protocols: decode-and-forward (DF), fixed-gain (FG) amplify-and-forward (AF) and variable-gain (VG) AF relay systems are considered. We obtain closed-form approximations for the outage probability and closed-form expressions for the asymptotic outage probability in the high signal-to-noise ratio (SNR) region for all cases. Our analysis is verified by Monte Carlo simulations, and provides an analytical framework for multicarrier systems with relay selection.
[ { "created": "Sat, 19 Aug 2017 16:03:29 GMT", "version": "v1" } ]
2017-08-22
[ [ "Dang", "Shuping", "" ], [ "Coon", "Justin P.", "" ], [ "Chen", "Gaojie", "" ], [ "Simmons", "David E.", "" ] ]
In this paper, we analyze the outage performance of two multicarrier relay selection schemes, i.e. bulk and per-subcarrier selections, for two-hop orthogonal frequency-division multiplexing (OFDM) systems. To provide a comprehensive analysis, three forwarding protocols: decode-and-forward (DF), fixed-gain (FG) amplify-and-forward (AF) and variable-gain (VG) AF relay systems are considered. We obtain closed-form approximations for the outage probability and closed-form expressions for the asymptotic outage probability in the high signal-to-noise ratio (SNR) region for all cases. Our analysis is verified by Monte Carlo simulations, and provides an analytical framework for multicarrier systems with relay selection.
2306.13285
Vasileios Magoulianitis
Vasileios Magoulianitis, Athanasios Psaltis
Learning Scene Flow With Skeleton Guidance For 3D Action Recognition
18 pages, 3 figures, 3 tables, conference
null
null
null
cs.CV eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Among the existing modalities for 3D action recognition, 3D flow has been poorly examined, although it conveys rich motion information cues for human actions. Presumably, its susceptibility to noise renders it intractable, thus challenging the learning process within deep models. This work demonstrates the use of 3D flow sequences by a deep spatiotemporal model and further proposes an incremental two-level spatial attention mechanism, guided by the skeleton domain, for emphasizing motion features close to the body joint areas according to their informativeness. Towards this end, an extended deep skeleton model is also introduced to learn the most discriminant action motion dynamics, so as to estimate an informativeness score for each joint. Subsequently, a late fusion scheme is adopted between the two models for learning the high-level cross-modal correlations. Experimental results on the currently largest and most challenging dataset, NTU RGB+D, demonstrate the effectiveness of the proposed approach, achieving state-of-the-art results.
[ { "created": "Fri, 23 Jun 2023 04:14:25 GMT", "version": "v1" } ]
2023-06-26
[ [ "Magoulianitis", "Vasileios", "" ], [ "Psaltis", "Athanasios", "" ] ]
Among the existing modalities for 3D action recognition, 3D flow has been poorly examined, although it conveys rich motion information cues for human actions. Presumably, its susceptibility to noise renders it intractable, thus challenging the learning process within deep models. This work demonstrates the use of 3D flow sequences by a deep spatiotemporal model and further proposes an incremental two-level spatial attention mechanism, guided by the skeleton domain, for emphasizing motion features close to the body joint areas according to their informativeness. Towards this end, an extended deep skeleton model is also introduced to learn the most discriminant action motion dynamics, so as to estimate an informativeness score for each joint. Subsequently, a late fusion scheme is adopted between the two models for learning the high-level cross-modal correlations. Experimental results on the currently largest and most challenging dataset, NTU RGB+D, demonstrate the effectiveness of the proposed approach, achieving state-of-the-art results.
1411.0281
Remi Chou
Remi A. Chou and Matthieu R. Bloch
Polar Coding for the Broadcast Channel with Confidential Messages: A Random Binning Analogy
20 pages, two-column, 6 figures, accepted to IEEE Transactions on Information Theory; parts of the results were presented at the 2015 IEEE Information Theory Workshop; minor change in title
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We develop a low-complexity polar coding scheme for the discrete memoryless broadcast channel with confidential messages under strong secrecy and randomness constraints. Our scheme extends previous work by using an optimal rate of uniform randomness in the stochastic encoder, and avoiding assumptions regarding the symmetry or degraded nature of the channels. The price paid for these extensions is that the encoder and decoders are required to share a secret seed of negligible size and to increase the block length through chaining. We also highlight a close conceptual connection between the proposed polar coding scheme and a random binning proof of the secrecy capacity region.
[ { "created": "Sun, 2 Nov 2014 17:19:12 GMT", "version": "v1" }, { "created": "Sat, 5 Mar 2016 04:08:11 GMT", "version": "v2" } ]
2016-03-08
[ [ "Chou", "Remi A.", "" ], [ "Bloch", "Matthieu R.", "" ] ]
We develop a low-complexity polar coding scheme for the discrete memoryless broadcast channel with confidential messages under strong secrecy and randomness constraints. Our scheme extends previous work by using an optimal rate of uniform randomness in the stochastic encoder, and avoiding assumptions regarding the symmetry or degraded nature of the channels. The price paid for these extensions is that the encoder and decoders are required to share a secret seed of negligible size and to increase the block length through chaining. We also highlight a close conceptual connection between the proposed polar coding scheme and a random binning proof of the secrecy capacity region.
1702.03246
Christos Mousas
Christos Mousas
Towards Developing an Easy-To-Use Scripting Environment for Animating Virtual Characters
null
null
null
null
cs.GR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents the three scripting commands and main functionalities of a novel character animation environment called CHASE. CHASE was developed to enable inexperienced programmers, animators, artists, and students to animate virtual reality characters in meaningful ways. This is achieved by scripting simple commands within CHASE. The commands, which are associated with simple parameters, are responsible for generating a number of predefined motions and actions of a character. Hence, the virtual character is able to animate within a virtual environment and to interact with tasks located within it. CHASE additionally supports generating multiple tasks for a character, giving the user the ability to produce scenario-related animated sequences. Moreover, since multiple characters may require simultaneous animation, the ability to script actions of different characters at the same time is also provided.
[ { "created": "Fri, 10 Feb 2017 16:37:55 GMT", "version": "v1" } ]
2017-02-13
[ [ "Mousas", "Christos", "" ] ]
This paper presents the three scripting commands and main functionalities of a novel character animation environment called CHASE. CHASE was developed to enable inexperienced programmers, animators, artists, and students to animate virtual reality characters in meaningful ways. This is achieved by scripting simple commands within CHASE. The commands, which are associated with simple parameters, are responsible for generating a number of predefined motions and actions of a character. Hence, the virtual character is able to animate within a virtual environment and to interact with tasks located within it. CHASE additionally supports generating multiple tasks for a character, giving the user the ability to produce scenario-related animated sequences. Moreover, since multiple characters may require simultaneous animation, the ability to script actions of different characters at the same time is also provided.
2007.13687
Minghua Xia
Zongze Li, Minghua Xia, Miaowen Wen, and Yik-Chung Wu
Massive Access in Secure NOMA under Imperfect CSI: Security Guaranteed Sum-Rate Maximization with First-Order Algorithm
17 pages, 6 figures, accepted for publication in IEEE Journal on Selected Areas in Communications
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Non-orthogonal multiple access (NOMA) is a promising solution for secure transmission under massive access. However, in addition to the uncertain channel state information (CSI) of the eavesdroppers due to their passive nature, the CSI of the legitimate users may also be imperfect at the base station due to the limited feedback. Under both channel uncertainties, the optimal power allocation and transmission rate design for a secure NOMA scheme is currently not known due to the difficulty of handling the probabilistic constraints. This paper fills this gap by proposing a novel transformation of the probabilistic constraints and variable decoupling, so that the security-guaranteed sum-rate maximization problem can be solved by alternately executing a branch-and-bound method and difference-of-convex programming. To scale the solution to a truly massive access scenario, a first-order algorithm with very low complexity is further proposed. Simulation results show that the proposed first-order algorithm achieves identical performance to the conventional method but saves at least two orders of magnitude in computation time. Moreover, the resultant transmission scheme significantly improves the security-guaranteed sum-rate compared to orthogonal multiple access transmission and NOMA ignoring CSI uncertainty.
[ { "created": "Mon, 27 Jul 2020 17:01:18 GMT", "version": "v1" } ]
2020-07-28
[ [ "Li", "Zongze", "" ], [ "Xia", "Minghua", "" ], [ "Wen", "Miaowen", "" ], [ "Wu", "Yik-Chung", "" ] ]
Non-orthogonal multiple access (NOMA) is a promising solution for secure transmission under massive access. However, in addition to the uncertain channel state information (CSI) of the eavesdroppers due to their passive nature, the CSI of the legitimate users may also be imperfect at the base station due to the limited feedback. Under both channel uncertainties, the optimal power allocation and transmission rate design for a secure NOMA scheme is currently not known due to the difficulty of handling the probabilistic constraints. This paper fills this gap by proposing a novel transformation of the probabilistic constraints and variable decoupling, so that the security-guaranteed sum-rate maximization problem can be solved by alternately executing a branch-and-bound method and difference-of-convex programming. To scale the solution to a truly massive access scenario, a first-order algorithm with very low complexity is further proposed. Simulation results show that the proposed first-order algorithm achieves identical performance to the conventional method but saves at least two orders of magnitude in computation time. Moreover, the resultant transmission scheme significantly improves the security-guaranteed sum-rate compared to orthogonal multiple access transmission and NOMA ignoring CSI uncertainty.
1912.11720
Qing Ping
Qing Ping, Chaomei Chen
Convolutional Quantum-Like Language Model with Mutual-Attention for Product Rating Prediction
Accepted at MAISON workshop at ICTIR 19'
null
null
null
cs.IR cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recommender systems are designed to help mitigate the information overload users experience during online shopping. Recent work explores neural language models to learn user and item representations from user reviews and combines such representations with rating information. Most existing convolution-based neural models apply pooling immediately after convolution and lose the interaction information between the latent dimensions of convolutional feature vectors along the way. Moreover, these models usually treat all feature vectors at higher levels as equal and do not take into consideration that some features are more relevant to the specific user-item context. To bridge these gaps, this paper proposes a convolutional quantum-like language model with mutual-attention for rating prediction (ConQAR). By introducing a quantum-like density matrix layer, interactions between latent dimensions of convolutional feature vectors are well captured. With the attention weights learned from the mutual-attention layer, the final representations of a user and an item absorb information from both themselves and their counterparts for making rating predictions. Experiments on two large datasets show that our model outperforms multiple state-of-the-art CNN-based models. We also perform an ablation test to analyze the independent effects of the two components of our model. Moreover, we conduct a case study and present visualizations of the quantum probabilistic distributions in one user and one item review document to show that the learned distributions capture meaningful information about this user and item, and can potentially be used as textual profiling of the user and item.
[ { "created": "Wed, 25 Dec 2019 22:01:59 GMT", "version": "v1" } ]
2019-12-30
[ [ "Ping", "Qing", "" ], [ "Chen", "Chaomei", "" ] ]
Recommender systems are designed to help mitigate the information overload users experience during online shopping. Recent work explores neural language models to learn user and item representations from user reviews and combines such representations with rating information. Most existing convolution-based neural models apply pooling immediately after convolution and lose the interaction information between the latent dimensions of convolutional feature vectors along the way. Moreover, these models usually treat all feature vectors at higher levels as equal and do not take into consideration that some features are more relevant to the specific user-item context. To bridge these gaps, this paper proposes a convolutional quantum-like language model with mutual-attention for rating prediction (ConQAR). By introducing a quantum-like density matrix layer, interactions between latent dimensions of convolutional feature vectors are well captured. With the attention weights learned from the mutual-attention layer, the final representations of a user and an item absorb information from both themselves and their counterparts for making rating predictions. Experiments on two large datasets show that our model outperforms multiple state-of-the-art CNN-based models. We also perform an ablation test to analyze the independent effects of the two components of our model. Moreover, we conduct a case study and present visualizations of the quantum probabilistic distributions in one user and one item review document to show that the learned distributions capture meaningful information about this user and item, and can potentially be used as textual profiling of the user and item.
1803.07639
Srinivasan Parthasarathy
Srinivasan Parthasarathy
Adaptive Greedy Algorithms for Stochastic Set Cover Problems
null
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study adaptive greedy algorithms for the problems of stochastic set cover with perfect and imperfect coverages. In stochastic set cover with perfect coverage, we are given a set of items and a ground set B. Evaluating an item reveals its state which is a random subset of B drawn from the state distribution of the item. Every element in B is assumed to be present in the state of some item with probability 1. For this problem, we show that the adaptive greedy algorithm has an approximation ratio of H(|B|), the |B|th Harmonic number. In stochastic set cover with imperfect coverage, an element in the ground set need not be present in the state of any item. We show a reduction from this problem to the former problem; the adaptive greedy algorithm for the reduced instance has an approximation ratio of H(|E|), where E is the set of pairs (F, e) such that the state of item F contains e with positive probability.
[ { "created": "Tue, 20 Mar 2018 20:30:55 GMT", "version": "v1" }, { "created": "Thu, 22 Mar 2018 11:55:59 GMT", "version": "v2" }, { "created": "Tue, 27 Mar 2018 04:47:14 GMT", "version": "v3" }, { "created": "Thu, 29 Mar 2018 12:17:17 GMT", "version": "v4" }, { "created": "Fri, 6 Apr 2018 18:41:50 GMT", "version": "v5" }, { "created": "Thu, 14 Jun 2018 15:22:31 GMT", "version": "v6" }, { "created": "Fri, 15 Jun 2018 19:39:55 GMT", "version": "v7" } ]
2018-06-19
[ [ "Parthasarathy", "Srinivasan", "" ] ]
We study adaptive greedy algorithms for the problems of stochastic set cover with perfect and imperfect coverage. In stochastic set cover with perfect coverage, we are given a set of items and a ground set B. Evaluating an item reveals its state, which is a random subset of B drawn from the state distribution of the item. Every element in B is assumed to be present in the state of some item with probability 1. For this problem, we show that the adaptive greedy algorithm has an approximation ratio of H(|B|), the |B|th Harmonic number. In stochastic set cover with imperfect coverage, an element in the ground set need not be present in the state of any item. We show a reduction from this problem to the former; the adaptive greedy algorithm for the reduced instance has an approximation ratio of H(|E|), where E is the set of pairs (F, e) such that the state of item F contains e with positive probability.
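A hedged sketch of the adaptive greedy policy described above: at each step it evaluates the item with the largest expected coverage of still-uncovered elements, estimated here by Monte Carlo sampling. The toy instance and the sampling interface are illustrative assumptions, not the paper's formal model.

```python
import random

def adaptive_greedy_cover(items, ground_set, sample_state, n_mc=2000, seed=0):
    """Adaptive greedy for stochastic set cover (illustrative sketch).

    sample_state(item, rng) draws the random subset of ground_set
    revealed when `item` is evaluated (its "state"). At each step, pick
    the item maximizing the expected number of still-uncovered elements
    it would cover (Monte Carlo estimate), evaluate it, and repeat.
    """
    rng = random.Random(seed)
    uncovered = set(ground_set)
    remaining = list(items)
    picked = []
    while uncovered and remaining:
        def expected_gain(item):
            total = 0
            for _ in range(n_mc):
                total += len(sample_state(item, rng) & uncovered)
            return total / n_mc
        best = max(remaining, key=expected_gain)
        remaining.remove(best)
        picked.append(best)
        uncovered -= sample_state(best, rng)  # evaluation reveals the state
    return picked, uncovered

# Toy perfect-coverage instance: item i covers element i surely,
# plus one random extra element half the time.
B = {0, 1, 2, 3}
def state(i, rng):
    extra = {rng.choice(sorted(B))} if rng.random() < 0.5 else set()
    return {i} | extra

print(adaptive_greedy_cover(sorted(B), B, state))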
2403.15943
Zhenglin Li
Zhenglin Li, Yangchen Huang, Mengran Zhu, Jingyu Zhang, JingHao Chang, Houze Liu
Advanced Feature Manipulation for Enhanced Change Detection Leveraging Natural Language Models
This is not our full version; in light of new progress and the related data and methodology we are working with, and in accordance with the applicable rules, we are adjusting the current version
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Change detection is a fundamental task in computer vision that processes a bi-temporal image pair to differentiate between semantically altered and unaltered regions. Large language models (LLMs) have been utilized in various domains for their exceptional feature extraction capabilities and have shown promise in numerous downstream applications. In this study, we harness the power of a pre-trained LLM, extracting feature maps from extensive datasets, and employ an auxiliary network to detect changes. Unlike existing LLM-based change detection methods that solely focus on deriving high-quality feature maps, our approach emphasizes the manipulation of these feature maps to enhance semantic relevance.
[ { "created": "Sat, 23 Mar 2024 22:07:32 GMT", "version": "v1" }, { "created": "Thu, 13 Jun 2024 15:30:02 GMT", "version": "v2" } ]
2024-06-14
[ [ "Li", "Zhenglin", "" ], [ "Huang", "Yangchen", "" ], [ "Zhu", "Mengran", "" ], [ "Zhang", "Jingyu", "" ], [ "Chang", "JingHao", "" ], [ "Liu", "Houze", "" ] ]
Change detection is a fundamental task in computer vision that processes a bi-temporal image pair to differentiate between semantically altered and unaltered regions. Large language models (LLMs) have been utilized in various domains for their exceptional feature extraction capabilities and have shown promise in numerous downstream applications. In this study, we harness the power of a pre-trained LLM, extracting feature maps from extensive datasets, and employ an auxiliary network to detect changes. Unlike existing LLM-based change detection methods that solely focus on deriving high-quality feature maps, our approach emphasizes the manipulation of these feature maps to enhance semantic relevance.
2407.19365
Chuxu Song
Chuxu Song, Zining Fan, Hao Wang, Richard Martin
Seamless Website Fingerprinting in Multiple Environments
16 pages
null
null
null
cs.CR
http://creativecommons.org/licenses/by/4.0/
Website fingerprinting (WF) attacks identify the websites visited over anonymized connections by analyzing patterns in network traffic flows, such as packet sizes, directions, or interval times, using a machine learning classifier. Previous studies showed that WF attacks achieve high classification accuracy. However, several issues call into question whether existing WF approaches are realizable in practice and thus motivate a re-exploration. Due to Tor's performance issues and the resulting poor browsing experience, the vast majority of users opt for Virtual Private Networking (VPN) despite VPNs' weaker privacy protections. Many other past assumptions are increasingly unrealistic as web technology advances. Our work addresses several key limitations of prior art. First, we introduce a new approach that classifies entire websites rather than individual web pages. Site-level classification uses traffic from all site components, including advertisements, multimedia, and single-page applications. Second, our Convolutional Neural Network (CNN) uses only the jitter and size of 500 contiguous packets from any point in a TCP stream, in contrast to prior work requiring heuristics to find page boundaries. Our seamless approach makes eavesdropper attack models realistic. Using traces from a controlled browser, we show our CNN matches observed traffic to a website with over 90% accuracy. We found that training traffic quality is critical, as classification accuracy is significantly reduced when the training data lacks variability in network location, performance, and clients' computational capability. We enhanced the base CNN's efficacy using domain adaptation, allowing it to discount irrelevant features, such as network location. Lastly, we evaluate several defensive strategies against seamless WF attacks.
[ { "created": "Sun, 28 Jul 2024 02:18:30 GMT", "version": "v1" } ]
2024-07-30
[ [ "Song", "Chuxu", "" ], [ "Fan", "Zining", "" ], [ "Wang", "Hao", "" ], [ "Martin", "Richard", "" ] ]
Website fingerprinting (WF) attacks identify the websites visited over anonymized connections by analyzing patterns in network traffic flows, such as packet sizes, directions, or interval times, using a machine learning classifier. Previous studies showed that WF attacks achieve high classification accuracy. However, several issues call into question whether existing WF approaches are realizable in practice and thus motivate a re-exploration. Due to Tor's performance issues and the resulting poor browsing experience, the vast majority of users opt for Virtual Private Networking (VPN) despite VPNs' weaker privacy protections. Many other past assumptions are increasingly unrealistic as web technology advances. Our work addresses several key limitations of prior art. First, we introduce a new approach that classifies entire websites rather than individual web pages. Site-level classification uses traffic from all site components, including advertisements, multimedia, and single-page applications. Second, our Convolutional Neural Network (CNN) uses only the jitter and size of 500 contiguous packets from any point in a TCP stream, in contrast to prior work requiring heuristics to find page boundaries. Our seamless approach makes eavesdropper attack models realistic. Using traces from a controlled browser, we show our CNN matches observed traffic to a website with over 90% accuracy. We found that training traffic quality is critical, as classification accuracy is significantly reduced when the training data lacks variability in network location, performance, and clients' computational capability. We enhanced the base CNN's efficacy using domain adaptation, allowing it to discount irrelevant features, such as network location. Lastly, we evaluate several defensive strategies against seamless WF attacks.
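A minimal PyTorch sketch of the kind of classifier described above: a 1-D CNN over a window of 500 contiguous packets, each represented by two features (inter-arrival jitter and size). The layer sizes, depths, and site count are illustrative guesses, not the paper's architecture.

```python
import torch
import torch.nn as nn

class SeamlessWFNet(nn.Module):
    """Toy 1-D CNN over 500 packets x 2 features (jitter, size).
    Widths, kernel sizes, and depth are placeholders."""
    def __init__(self, n_sites):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(2, 32, kernel_size=8, padding=4), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=8, padding=4), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.AdaptiveAvgPool1d(8),
        )
        self.classifier = nn.Linear(64 * 8, n_sites)

    def forward(self, x):            # x: (batch, 2, 500)
        z = self.features(x)
        return self.classifier(z.flatten(1))

model = SeamlessWFNet(n_sites=50)
window = torch.randn(4, 2, 500)      # 4 windows of 500 packets each
print(model(window).shape)           # torch.Size([4, 50])
```

Because the window can start anywhere in the TCP stream, no page-boundary heuristics are needed; the eavesdropper just slides this window along captured traffic.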
1601.04908
Martha Lewis
Desislava Bankova, Bob Coecke, Martha Lewis, Daniel Marsden
Graded Entailment for Compositional Distributional Semantics
null
null
null
null
cs.CL cs.AI cs.LO math.CT quant-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The categorical compositional distributional model of natural language provides a conceptually motivated procedure to compute the meaning of sentences, given grammatical structure and the meanings of its words. This approach has outperformed other models in mainstream empirical language processing tasks. However, until recently it has lacked the crucial feature of lexical entailment -- as do other distributional models of meaning. In this paper we solve the problem of entailment for categorical compositional distributional semantics. Taking advantage of the abstract categorical framework allows us to vary our choice of model. This enables the introduction of a notion of entailment, exploiting ideas from the categorical semantics of partial knowledge in quantum computation. The new model of language uses density matrices, on which we introduce a novel robust graded order capturing the entailment strength between concepts. This graded measure emerges from a general framework for approximate entailment, induced by any commutative monoid. Quantum logic embeds in our graded order. Our main theorem shows that entailment strength lifts compositionally to the sentence level, giving a lower bound on sentence entailment. We describe the essential properties of graded entailment such as continuity, and provide a procedure for calculating entailment strength.
[ { "created": "Tue, 19 Jan 2016 13:13:25 GMT", "version": "v1" }, { "created": "Mon, 25 Jan 2016 20:10:27 GMT", "version": "v2" } ]
2016-01-26
[ [ "Bankova", "Desislava", "" ], [ "Coecke", "Bob", "" ], [ "Lewis", "Martha", "" ], [ "Marsden", "Daniel", "" ] ]
The categorical compositional distributional model of natural language provides a conceptually motivated procedure to compute the meaning of sentences, given grammatical structure and the meanings of its words. This approach has outperformed other models in mainstream empirical language processing tasks. However, until recently it has lacked the crucial feature of lexical entailment -- as do other distributional models of meaning. In this paper we solve the problem of entailment for categorical compositional distributional semantics. Taking advantage of the abstract categorical framework allows us to vary our choice of model. This enables the introduction of a notion of entailment, exploiting ideas from the categorical semantics of partial knowledge in quantum computation. The new model of language uses density matrices, on which we introduce a novel robust graded order capturing the entailment strength between concepts. This graded measure emerges from a general framework for approximate entailment, induced by any commutative monoid. Quantum logic embeds in our graded order. Our main theorem shows that entailment strength lifts compositionally to the sentence level, giving a lower bound on sentence entailment. We describe the essential properties of graded entailment such as continuity, and provide a procedure for calculating entailment strength.
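One plausible instantiation of a graded order on density matrices consistent with the description above (the paper's exact definition may differ): take the entailment strength of A into B to be the largest k in [0, 1] for which B - kA stays positive semidefinite. A small NumPy sketch:

```python
import numpy as np

def entailment_strength(A, B, tol=1e-9):
    """Largest k in [0, 1] with B - k*A positive semidefinite,
    found by bisection on the smallest eigenvalue."""
    def psd(k):
        return np.linalg.eigvalsh(B - k * A).min() >= -tol
    if not psd(0.0):
        return 0.0
    lo, hi = 0.0, 1.0
    if psd(hi):
        return 1.0
    for _ in range(60):                  # bisection
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if psd(mid) else (lo, mid)
    return lo

def pure_state(v):
    v = v / np.linalg.norm(v)
    return np.outer(v, v)

dog = pure_state(np.array([1.0, 0.0]))
animal = 0.5 * pure_state(np.array([1.0, 0.0])) \
       + 0.5 * pure_state(np.array([0.0, 1.0]))
print(entailment_strength(dog, animal))   # ~0.5: "dog" entails "animal"
print(entailment_strength(animal, dog))   # ~0.0: not the other way round
```

The asymmetry of the result is the point: a maximally mixed "animal" concept absorbs the pure "dog" state to degree 1/2, while the reverse entailment fails.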
2108.09757
Tseriwa Bakasa
Tseriwa Bakasa and Ayanda Pekane
The Decision Criteria Used by Large Enterprises in South Africa for the Adoption of Cloud Computing
In proceedings of the 1st Virtual Conference on Implications of Information and Digital Technologies for Development, 2021
null
null
null
cs.CY
http://creativecommons.org/licenses/by-nc-sa/4.0/
Cloud computing is a technology that has become increasingly popular over the past decade within several enterprises. This popularity can be attributed to its benefits, including lower operating costs, improved computational capabilities, increased flexibility and on-demand storage space. As a result, many enterprises are already in various Cloud Computing (CC) adoption and implementation stages. This study investigates the decision criteria used by large enterprises in South Africa (SA) for the adoption of cloud technology. The majority of large enterprises have comprehensive resources, resulting in established Information Technology (IT) systems and infrastructure set up within their organizations. Though this is the case, the adoption of CC by large enterprises has been on the rise. This may not be a surprise, as the CC literature points to the benefits and influencers of CC adoption. However, the decision criteria used by large enterprises in SA in adopting CC are lacking in the reviewed literature. The study followed an inductive approach making use of qualitative methods. Findings revealed that large enterprises do not make use of formalized or standardized decision criteria. However, operational cost, enterprise strategic intent and product efficiency formed the key criteria for adopting CC. In addition, security, cloud service provider adoption frameworks and data sovereignty were the key criteria used to select a CC service provider. The research will contribute to the CC technology adoption literature, particularly for developing countries.
[ { "created": "Sun, 22 Aug 2021 15:33:55 GMT", "version": "v1" } ]
2021-08-24
[ [ "Bakasa", "Tseriwa", "" ], [ "Pekane", "Ayanda", "" ] ]
Cloud computing is a technology that has become increasingly popular over the past decade within several enterprises. This popularity can be attributed to its benefits, including lower operating costs, improved computational capabilities, increased flexibility and on-demand storage space. As a result, many enterprises are already in various Cloud Computing (CC) adoption and implementation stages. This study investigates the decision criteria used by large enterprises in South Africa (SA) for the adoption of cloud technology. The majority of large enterprises have comprehensive resources, resulting in established Information Technology (IT) systems and infrastructure set up within their organizations. Though this is the case, the adoption of CC by large enterprises has been on the rise. This may not be a surprise, as the CC literature points to the benefits and influencers of CC adoption. However, the decision criteria used by large enterprises in SA in adopting CC are lacking in the reviewed literature. The study followed an inductive approach making use of qualitative methods. Findings revealed that large enterprises do not make use of formalized or standardized decision criteria. However, operational cost, enterprise strategic intent and product efficiency formed the key criteria for adopting CC. In addition, security, cloud service provider adoption frameworks and data sovereignty were the key criteria used to select a CC service provider. The research will contribute to the CC technology adoption literature, particularly for developing countries.
1909.05615
Prabhat Kumar
Nikhil Singh, Prabhat Kumar, and Anupam Saxena
On Topology optimization with elliptical masks and honeycomb tessellation with explicit length scale constraints
36 pages, 24 figures
Structural and Multidisciplinary Optimization, 2020
10.1007/s00158-020-02548-w
null
cs.CE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Topology optimization using gradient search with negative and positive elliptical masks and honeycomb tessellation is presented. Through a novel skeletonization algorithm for topologies defined using filled and void hexagonal cells/elements, explicit minimum and maximum length scales are imposed on solid states in the solutions. An analytical example is presented which suggests that for a skeletonized topology, optimal solutions may not always exist for any specified volume fraction, minimum and maximum length scales, and that there may exist implicit interdependence between them. A Sequence for Length Scale (SLS) methodology is proposed wherein solutions are sought by specifying only the minimum and maximum length scales with volume fraction getting determined systematically. Through four benchmark problems in small deformation topology optimization, it is demonstrated that solutions by and large satisfy the length scale constraints though the latter may get violated at certain local sites. The proposed approach seems promising, noting especially that solutions, if rendered perfectly {\it black and white} with minimum length scale explicitly imposed and boundaries smoothed, are quite close in performance compared to the parent topologies. Attaining {\it volume distributed} topologies, wherein members are more or less of the same thickness, may also be possible with the proposed approach.
[ { "created": "Thu, 12 Sep 2019 13:07:34 GMT", "version": "v1" }, { "created": "Fri, 9 Oct 2020 21:26:31 GMT", "version": "v2" } ]
2020-10-13
[ [ "Singh", "Nikhil", "" ], [ "Kumar", "Prabhat", "" ], [ "Saxena", "Anupam", "" ] ]
Topology optimization using gradient search with negative and positive elliptical masks and honeycomb tessellation is presented. Through a novel skeletonization algorithm for topologies defined using filled and void hexagonal cells/elements, explicit minimum and maximum length scales are imposed on solid states in the solutions. An analytical example is presented which suggests that for a skeletonized topology, optimal solutions may not always exist for any specified volume fraction, minimum and maximum length scales, and that there may exist implicit interdependence between them. A Sequence for Length Scale (SLS) methodology is proposed wherein solutions are sought by specifying only the minimum and maximum length scales with volume fraction getting determined systematically. Through four benchmark problems in small deformation topology optimization, it is demonstrated that solutions by and large satisfy the length scale constraints though the latter may get violated at certain local sites. The proposed approach seems promising, noting especially that solutions, if rendered perfectly {\it black and white} with minimum length scale explicitly imposed and boundaries smoothed, are quite close in performance compared to the parent topologies. Attaining {\it volume distributed} topologies, wherein members are more or less of the same thickness, may also be possible with the proposed approach.
1701.02831
Tamir Bendory
Tamir Bendory, Pavel Sidorenko and Yonina C. Eldar
On the Uniqueness of FROG Methods
null
null
10.1109/LSP.2017.2690358
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The problem of recovering a signal from its power spectrum, called phase retrieval, arises in many scientific fields. One of many examples is ultra-short laser pulse characterization, in which the electromagnetic field oscillates at ~10^15 Hz and phase information cannot be measured directly due to limitations of the electronic sensors. Phase retrieval is ill-posed in most cases, as there are many different signals with the same Fourier transform magnitude. To overcome this fundamental ill-posedness, several measurement techniques are used in practice. One of the most popular methods for complete characterization of ultra-short laser pulses is the Frequency-Resolved Optical Gating (FROG). In FROG, the acquired data is the power spectrum of the product of the unknown pulse with its delayed replica. Therefore, the measured signal is a quartic function of the unknown pulse. A generalized version of FROG, where the delayed replica is replaced by a second unknown pulse, is called blind FROG. In this case, the measured signal is quadratic with respect to both pulses. In this letter we introduce and formulate FROG-type techniques. We then show that almost all band-limited signals are determined uniquely, up to trivial ambiguities, by blind FROG measurements (and thus also by FROG), if in addition we have access to the signal's power spectrum.
[ { "created": "Wed, 11 Jan 2017 02:47:44 GMT", "version": "v1" }, { "created": "Sun, 19 Mar 2017 17:35:14 GMT", "version": "v2" }, { "created": "Sat, 1 Apr 2017 18:48:14 GMT", "version": "v3" } ]
2017-04-26
[ [ "Bendory", "Tamir", "" ], [ "Sidorenko", "Pavel", "" ], [ "Eldar", "Yonina C.", "" ] ]
The problem of recovering a signal from its power spectrum, called phase retrieval, arises in many scientific fields. One of many examples is ultra-short laser pulse characterization, in which the electromagnetic field oscillates at ~10^15 Hz and phase information cannot be measured directly due to limitations of the electronic sensors. Phase retrieval is ill-posed in most cases, as there are many different signals with the same Fourier transform magnitude. To overcome this fundamental ill-posedness, several measurement techniques are used in practice. One of the most popular methods for complete characterization of ultra-short laser pulses is the Frequency-Resolved Optical Gating (FROG). In FROG, the acquired data is the power spectrum of the product of the unknown pulse with its delayed replica. Therefore, the measured signal is a quartic function of the unknown pulse. A generalized version of FROG, where the delayed replica is replaced by a second unknown pulse, is called blind FROG. In this case, the measured signal is quadratic with respect to both pulses. In this letter we introduce and formulate FROG-type techniques. We then show that almost all band-limited signals are determined uniquely, up to trivial ambiguities, by blind FROG measurements (and thus also by FROG), if in addition we have access to the signal's power spectrum.
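The FROG measurement described above is the power spectrum of the pulse times its delayed replica, I(omega, tau) = |integral E(t) E(t - tau) e^{-i omega t} dt|^2. A hedged NumPy sketch of a discretized trace, with a circular-shift boundary as a simplifying assumption:

```python
import numpy as np

def frog_trace(E, delays):
    """Discrete FROG trace: power spectrum of E(t) * E(t - tau)
    for each delay tau (in samples). Rows are delays, columns
    frequencies. Uses a circular shift as a boundary convention."""
    n = len(E)
    trace = np.empty((len(delays), n))
    for i, tau in enumerate(delays):
        gated = E * np.roll(E, tau)        # E(t) * E(t - tau)
        trace[i] = np.abs(np.fft.fft(gated)) ** 2
    return trace

# Toy band-limited pulse: Gaussian envelope with quadratic phase (chirp).
t = np.linspace(-5, 5, 256)
E = np.exp(-t**2) * np.exp(1j * 0.5 * t**2)
T = frog_trace(E, delays=range(-32, 33))
print(T.shape)        # (65, 256)
```

Each row is one power-spectrum measurement; phase retrieval here means inverting the whole two-dimensional trace back to E(t), which is quartic in the unknown pulse as the abstract notes.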
2108.13696
Chathura Gamage
Cheng Xue, Vimukthini Pinto, Chathura Gamage, Ekaterina Nikonova, Peng Zhang, Jochen Renz
Phy-Q as a measure for physical reasoning intelligence
For the associated website, see https://github.com/phy-q/benchmark
null
null
null
cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Humans are well-versed in reasoning about the behaviors of physical objects and choosing actions accordingly to accomplish tasks, while this remains a major challenge for AI. To facilitate research addressing this problem, we propose a new testbed that requires an agent to reason about physical scenarios and act appropriately. Inspired by the physical knowledge acquired in infancy and the capabilities required for robots to operate in real-world environments, we identify 15 essential physical scenarios. We create a wide variety of distinct task templates, and we ensure all the task templates within the same scenario can be solved by using one specific strategic physical rule. By having such a design, we evaluate two distinct levels of generalization, namely local generalization and broad generalization. We conduct an extensive evaluation with human players, learning agents with varying input types and architectures, and heuristic agents with different strategies. Inspired by how human IQ is calculated, we define the physical reasoning quotient (Phy-Q score) that reflects the physical reasoning intelligence of an agent on the physical scenarios we considered. Our evaluation shows that 1) all agents are far below human performance, and 2) learning agents, even with good local generalization ability, struggle to learn the underlying physical reasoning rules and fail to generalize broadly. We encourage the development of intelligent agents that can reach the human-level Phy-Q score. Website: https://github.com/phy-q/benchmark
[ { "created": "Tue, 31 Aug 2021 09:11:27 GMT", "version": "v1" }, { "created": "Wed, 18 May 2022 03:39:05 GMT", "version": "v2" }, { "created": "Fri, 27 Jan 2023 01:52:45 GMT", "version": "v3" } ]
2023-01-30
[ [ "Xue", "Cheng", "" ], [ "Pinto", "Vimukthini", "" ], [ "Gamage", "Chathura", "" ], [ "Nikonova", "Ekaterina", "" ], [ "Zhang", "Peng", "" ], [ "Renz", "Jochen", "" ] ]
Humans are well-versed in reasoning about the behaviors of physical objects and choosing actions accordingly to accomplish tasks, while this remains a major challenge for AI. To facilitate research addressing this problem, we propose a new testbed that requires an agent to reason about physical scenarios and act appropriately. Inspired by the physical knowledge acquired in infancy and the capabilities required for robots to operate in real-world environments, we identify 15 essential physical scenarios. We create a wide variety of distinct task templates, and we ensure all the task templates within the same scenario can be solved by using one specific strategic physical rule. By having such a design, we evaluate two distinct levels of generalization, namely local generalization and broad generalization. We conduct an extensive evaluation with human players, learning agents with varying input types and architectures, and heuristic agents with different strategies. Inspired by how human IQ is calculated, we define the physical reasoning quotient (Phy-Q score) that reflects the physical reasoning intelligence of an agent on the physical scenarios we considered. Our evaluation shows that 1) all agents are far below human performance, and 2) learning agents, even with good local generalization ability, struggle to learn the underlying physical reasoning rules and fail to generalize broadly. We encourage the development of intelligent agents that can reach the human-level Phy-Q score. Website: https://github.com/phy-q/benchmark
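As an illustrative guess at what an IQ-style quotient could look like, and not the paper's actual formula: z-score an agent's per-scenario performance against a reference population, then rescale to mean 100 and standard deviation 15.

```python
import numpy as np

def phy_quotient(agent_scores, population_scores, mean=100.0, sd=15.0):
    """IQ-style normalization (illustrative sketch only): z-score the
    agent against a reference population per scenario, then rescale
    to the conventional mean-100, sd-15 IQ scale."""
    pop = np.asarray(population_scores, dtype=float)
    agent = np.asarray(agent_scores, dtype=float)
    z = (agent - pop.mean(axis=0)) / pop.std(axis=0)
    return mean + sd * z.mean()

pop = np.random.default_rng(0).uniform(0, 1, size=(50, 15))  # 50 agents x 15 scenarios
print(phy_quotient(pop[0], pop))
```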
2111.08498
Yi Heng Lim
Yi Heng Lim, Muhammad Firmansyah Kasim
Reducing the Long Tail Losses in Scientific Emulations with Active Learning
8 pages, 4 figures
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep-learning-based models are increasingly used to emulate scientific simulations to accelerate scientific research. However, accurate, supervised deep learning models require huge amounts of labelled data, and that often becomes the bottleneck in employing neural networks. In this work, we leveraged an active learning approach called core-set selection to actively select data, per a pre-defined budget, to be labelled for training. To further improve the model performance and reduce the training costs, we also warm-started the training using a shrink-and-perturb trick. We tested on two case studies in different fields, namely galaxy halo occupation distribution modelling in astrophysics and x-ray emission spectroscopy in plasma physics, and the results are promising: we achieved competitive overall performance compared to a random sampling baseline, and more importantly, successfully reduced the larger absolute losses, i.e. the long tail in the loss distribution, at virtually no overhead cost.
[ { "created": "Mon, 15 Nov 2021 09:02:00 GMT", "version": "v1" }, { "created": "Sun, 9 Jan 2022 15:05:48 GMT", "version": "v2" } ]
2022-01-11
[ [ "Lim", "Yi Heng", "" ], [ "Kasim", "Muhammad Firmansyah", "" ] ]
Deep-learning-based models are increasingly used to emulate scientific simulations to accelerate scientific research. However, accurate, supervised deep learning models require huge amounts of labelled data, and that often becomes the bottleneck in employing neural networks. In this work, we leveraged an active learning approach called core-set selection to actively select data, per a pre-defined budget, to be labelled for training. To further improve the model performance and reduce the training costs, we also warm-started the training using a shrink-and-perturb trick. We tested on two case studies in different fields, namely galaxy halo occupation distribution modelling in astrophysics and x-ray emission spectroscopy in plasma physics, and the results are promising: we achieved competitive overall performance compared to a random sampling baseline, and more importantly, successfully reduced the larger absolute losses, i.e. the long tail in the loss distribution, at virtually no overhead cost.
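A hedged sketch of the two ingredients named above. Core-set selection is implemented here as greedy k-center on embeddings, and shrink-and-perturb as scaling the previous weights toward zero plus small noise; hyperparameters and shapes are illustrative assumptions.

```python
import numpy as np

def coreset_select(X, budget, seed=0):
    """Greedy k-center ("core-set") selection: repeatedly add the point
    farthest from everything chosen so far, so the selection covers the
    embedding space within a small radius."""
    rng = np.random.default_rng(seed)
    chosen = [int(rng.integers(len(X)))]
    dist = np.linalg.norm(X - X[chosen[0]], axis=1)
    for _ in range(budget - 1):
        nxt = int(dist.argmax())
        chosen.append(nxt)
        dist = np.minimum(dist, np.linalg.norm(X - X[nxt], axis=1))
    return chosen

def shrink_and_perturb(params, lam=0.5, sigma=0.01, seed=0):
    """Warm start: shrink the previous weights toward zero and add small
    noise before retraining on the enlarged labelled set."""
    rng = np.random.default_rng(seed)
    return [lam * w + sigma * rng.normal(size=w.shape) for w in params]

X = np.random.default_rng(1).normal(size=(1000, 16))  # pool embeddings
print(coreset_select(X, budget=10))                   # indices to label next
print(shrink_and_perturb([np.ones((2, 2))])[0])       # warm-started weights
```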
2106.00083
Maurice Herlihy
Daniel Engel, Maurice Herlihy
Composing Networks of Automated Market Makers
null
null
10.1145/3479722.3480987
null
cs.DC cs.MA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Automated market makers (AMMs) are automata that trade electronic assets at rates set by mathematical formulas. AMMs are usually implemented by smart contracts on blockchains. In practice, AMMs are often composed: outputs from one AMM can be directed into other compatible AMMs. This paper proposes a mathematical model for AMM composition. We define sequential and parallel composition operators for AMMs that ensure AMMs are closed under composition, that work for "higher-dimensional" AMMs managing more than two asset classes, and that keep the composition of AMMs in "stable" states stable.
[ { "created": "Mon, 31 May 2021 20:09:26 GMT", "version": "v1" }, { "created": "Wed, 16 Jun 2021 17:53:07 GMT", "version": "v2" }, { "created": "Tue, 31 Aug 2021 13:32:32 GMT", "version": "v3" } ]
2021-09-01
[ [ "Engel", "Daniel", "" ], [ "Herlihy", "Maurice", "" ] ]
Automated market makers (AMMs) are automata that trade electronic assets at rates set by mathematical formulas. AMMs are usually implemented by smart contracts on blockchains. In practice, AMMs are often composed: outputs from one AMM can be directed into other compatible AMMs. This paper proposes a mathematical model for AMM composition. We define sequential and parallel composition operators for AMMs that ensure AMMs are closed under composition, that work for "higher-dimensional" AMMs managing more than two asset classes, and that keep the composition of AMMs in "stable" states stable.
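The paper's model is abstract and covers higher-dimensional AMMs; as a concrete toy only, here is sequential composition for the familiar constant-product case, where the output of one pool is routed into the next. The constant-product formula is an assumption for illustration, not the paper's general definition.

```python
def cp_swap(x_reserve, y_reserve, dx):
    """Constant-product AMM with invariant x*y = k:
    amount of y paid out for depositing dx of x."""
    k = x_reserve * y_reserve
    return y_reserve - k / (x_reserve + dx)

def sequential(amms, dx):
    """Sequential composition: route the output of each AMM into the
    next. `amms` is a list of (in_reserve, out_reserve) pairs whose
    assets line up (AMM i's output asset is AMM i+1's input asset)."""
    amount = dx
    for x_res, y_res in amms:
        amount = cp_swap(x_res, y_res, amount)
    return amount

# A -> B -> C through two constant-product pools.
route = [(1000.0, 1000.0), (500.0, 2000.0)]
print(sequential(route, 10.0))   # amount of C received for 10 A
```

Closure under composition means the chained route behaves like a single AMM from A to C, which is the property the paper's operators are designed to preserve in general.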
1806.10920
Matthew England Dr
M. England
Machine Learning for Mathematical Software
To appear in Proc. ICMS 2018
In: J.H. Davenport, M. Kauers, G. Labahn and J. Urban, eds. Mathematical Software - ICMS 2018, pp. 165-174. (Lecture Notes in Computer Science 10931). Springer, 2018
10.1007/978-3-319-96418-8_20
null
cs.SC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
While there has been some discussion on how Symbolic Computation could be used for AI, there is little literature on applications in the other direction. However, recent results for quantifier elimination suggest that, given enough example problems, there is scope for machine learning tools like Support Vector Machines to improve the performance of Computer Algebra Systems. We survey the authors' own work and similar applications for other mathematical software. It may seem that the inherently probabilistic nature of machine learning tools would invalidate the exact results prized by mathematical software. However, algorithms and implementations often come with a range of choices which have no effect on the mathematical correctness of the end result but a great effect on the resources required to find it, and thus, here, machine learning can have a significant impact.
[ { "created": "Thu, 28 Jun 2018 12:35:47 GMT", "version": "v1" } ]
2018-11-01
[ [ "England", "M.", "" ] ]
While there has been some discussion on how Symbolic Computation could be used for AI, there is little literature on applications in the other direction. However, recent results for quantifier elimination suggest that, given enough example problems, there is scope for machine learning tools like Support Vector Machines to improve the performance of Computer Algebra Systems. We survey the authors' own work and similar applications for other mathematical software. It may seem that the inherently probabilistic nature of machine learning tools would invalidate the exact results prized by mathematical software. However, algorithms and implementations often come with a range of choices which have no effect on the mathematical correctness of the end result but a great effect on the resources required to find it, and thus, here, machine learning can have a significant impact.
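A toy sketch of the pattern described above, with synthetic stand-in data: a Support Vector Machine picks, per problem, which of two algorithm configurations (say, variable orderings) is likely cheaper, while the mathematical result stays exact either way.

```python
import numpy as np
from sklearn.svm import SVC

# Each row holds hypothetical features of a quantifier-elimination
# problem (degrees, variable counts, ...); the label records which of
# two variable orderings solved it faster. All data here is synthetic.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)   # fake "best ordering"

clf = SVC(kernel="rbf").fit(X[:150], y[:150])
print("held-out accuracy:", clf.score(X[150:], y[150:]))

# The CAS output is mathematically correct under either ordering;
# the classifier only steers toward the cheaper computation.
```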
2403.17933
Daniel Dauner
Kashyap Chitta, Daniel Dauner, Andreas Geiger
SLEDGE: Synthesizing Driving Environments with Generative Models and Rule-Based Traffic
ECCV 2024
null
null
null
cs.RO cs.AI cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
SLEDGE is the first generative simulator for vehicle motion planning trained on real-world driving logs. Its core component is a learned model that is able to generate agent bounding boxes and lane graphs. The model's outputs serve as an initial state for rule-based traffic simulation. The unique properties of the entities to be generated for SLEDGE, such as their connectivity and variable count per scene, render the naive application of most modern generative models to this task non-trivial. Therefore, together with a systematic study of existing lane graph representations, we introduce a novel raster-to-vector autoencoder. It encodes agents and the lane graph into distinct channels in a rasterized latent map. This facilitates both lane-conditioned agent generation and combined generation of lanes and agents with a Diffusion Transformer. Using generated entities in SLEDGE enables greater control over the simulation, e.g. upsampling turns or increasing traffic density. Further, SLEDGE can support 500m long routes, a capability not found in existing data-driven simulators like nuPlan. It presents new challenges for planning algorithms, evidenced by failure rates of over 40% for PDM, the winner of the 2023 nuPlan challenge, when tested on hard routes and dense traffic generated by our model. Compared to nuPlan, SLEDGE requires 500$\times$ less storage to set up (<4 GB), making it a more accessible option and helping with democratizing future research in this field.
[ { "created": "Tue, 26 Mar 2024 17:58:29 GMT", "version": "v1" }, { "created": "Thu, 11 Jul 2024 17:27:49 GMT", "version": "v2" } ]
2024-07-12
[ [ "Chitta", "Kashyap", "" ], [ "Dauner", "Daniel", "" ], [ "Geiger", "Andreas", "" ] ]
SLEDGE is the first generative simulator for vehicle motion planning trained on real-world driving logs. Its core component is a learned model that is able to generate agent bounding boxes and lane graphs. The model's outputs serve as an initial state for rule-based traffic simulation. The unique properties of the entities to be generated for SLEDGE, such as their connectivity and variable count per scene, render the naive application of most modern generative models to this task non-trivial. Therefore, together with a systematic study of existing lane graph representations, we introduce a novel raster-to-vector autoencoder. It encodes agents and the lane graph into distinct channels in a rasterized latent map. This facilitates both lane-conditioned agent generation and combined generation of lanes and agents with a Diffusion Transformer. Using generated entities in SLEDGE enables greater control over the simulation, e.g. upsampling turns or increasing traffic density. Further, SLEDGE can support 500m long routes, a capability not found in existing data-driven simulators like nuPlan. It presents new challenges for planning algorithms, evidenced by failure rates of over 40% for PDM, the winner of the 2023 nuPlan challenge, when tested on hard routes and dense traffic generated by our model. Compared to nuPlan, SLEDGE requires 500$\times$ less storage to set up (<4 GB), making it a more accessible option and helping with democratizing future research in this field.
1804.10985
Anton\'in Ku\v{c}era
Tom\'a\v{s} Br\'azdil, Krishnendu Chatterjee, Anton\'in Ku\v{c}era, Petr Novotn\'y, Dominik Velan, Florian Zuleger
Efficient Algorithms for Asymptotic Bounds on Termination Time in VASS
arXiv admin note: text overlap with arXiv:1708.09253
null
null
null
cs.LO
http://creativecommons.org/licenses/by/4.0/
Vector Addition Systems with States (VASS) provide a well-known and fundamental model for the analysis of concurrent processes, parameterized systems, and are also used as abstract models of programs in resource bound analysis. In this paper we study the problem of obtaining asymptotic bounds on the termination time of a given VASS. In particular, we focus on the practically important case of obtaining polynomial bounds on termination time. Our main contributions are as follows: First, we present a polynomial-time algorithm for deciding whether a given VASS has a linear asymptotic complexity. We also show that if the complexity of a VASS is not linear, it is at least quadratic. Second, we classify VASS according to quantitative properties of their cycles. We show that certain singularities in these properties are the key reason for non-polynomial asymptotic complexity of VASS. In absence of singularities, we show that the asymptotic complexity is always polynomial and of the form $\Theta(n^k)$, for some integer $k\leq d$, where $d$ is the dimension of the VASS. We present a polynomial-time algorithm computing the optimal $k$. For general VASS, the same algorithm, which is based on a complete technique for the construction of ranking functions in VASS, produces a valid lower bound, i.e., a $k$ such that the termination complexity is $\Omega(n^k)$. Our results are based on new insights into the geometry of VASS dynamics, which hold the potential for further applicability to VASS analysis.
[ { "created": "Sun, 29 Apr 2018 20:01:00 GMT", "version": "v1" } ]
2018-05-01
[ [ "Brázdil", "Tomáš", "" ], [ "Chatterjee", "Krishnendu", "" ], [ "Kučera", "Antonín", "" ], [ "Novotný", "Petr", "" ], [ "Velan", "Dominik", "" ], [ "Zuleger", "Florian", "" ] ]
Vector Addition Systems with States (VASS) provide a well-known and fundamental model for the analysis of concurrent processes, parameterized systems, and are also used as abstract models of programs in resource bound analysis. In this paper we study the problem of obtaining asymptotic bounds on the termination time of a given VASS. In particular, we focus on the practically important case of obtaining polynomial bounds on termination time. Our main contributions are as follows: First, we present a polynomial-time algorithm for deciding whether a given VASS has a linear asymptotic complexity. We also show that if the complexity of a VASS is not linear, it is at least quadratic. Second, we classify VASS according to quantitative properties of their cycles. We show that certain singularities in these properties are the key reason for non-polynomial asymptotic complexity of VASS. In absence of singularities, we show that the asymptotic complexity is always polynomial and of the form $\Theta(n^k)$, for some integer $k\leq d$, where $d$ is the dimension of the VASS. We present a polynomial-time algorithm computing the optimal $k$. For general VASS, the same algorithm, which is based on a complete technique for the construction of ranking functions in VASS, produces a valid lower bound, i.e., a $k$ such that the termination complexity is $\Omega(n^k)$. Our results are based on new insights into the geometry of VASS dynamics, which hold the potential for further applicability to VASS analysis.
1906.10886
Peng Gao
Zibin Zhou, Fei Wang, Wenjuan Xi, Huaying Chen, Peng Gao, Chengkang He
Joint Multi-frame Detection and Segmentation for Multi-cell Tracking
Accepted by International Conference on Image and Graphics (ICIG 2019)
null
null
null
cs.CV cs.GR eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Tracking living cells in video sequences is difficult because of cell morphology and the high similarity between cells. Tracking-by-detection methods are widely used in multi-cell tracking. We perform multi-cell tracking based on cell centroid detection, and the performance of the detector has a high impact on tracking performance. In this paper, a UNet is utilized to extract inter-frame and intra-frame spatio-temporal information of cells. The detection performance on cells in the mitotic phase is improved by multi-frame input. Good detection results facilitate multi-cell tracking. A mitosis detection algorithm is proposed to detect cell mitosis, and the cell lineage is built up. Another UNet is utilized to acquire a primary segmentation. Jointly using detection and primary segmentation, cells can be finely segmented in highly dense cell populations. Experiments are conducted to evaluate the effectiveness of our method, and the results show its state-of-the-art performance.
[ { "created": "Wed, 26 Jun 2019 07:41:11 GMT", "version": "v1" } ]
2019-06-27
[ [ "Zhou", "Zibin", "" ], [ "Wang", "Fei", "" ], [ "Xi", "Wenjuan", "" ], [ "Chen", "Huaying", "" ], [ "Gao", "Peng", "" ], [ "He", "Chengkang", "" ] ]
Tracking living cells in video sequences is difficult because of cell morphology and the high similarity between cells. Tracking-by-detection methods are widely used in multi-cell tracking. We perform multi-cell tracking based on cell centroid detection, and the performance of the detector has a high impact on tracking performance. In this paper, a UNet is utilized to extract inter-frame and intra-frame spatio-temporal information of cells. The detection performance on cells in the mitotic phase is improved by multi-frame input. Good detection results facilitate multi-cell tracking. A mitosis detection algorithm is proposed to detect cell mitosis, and the cell lineage is built up. Another UNet is utilized to acquire a primary segmentation. Jointly using detection and primary segmentation, cells can be finely segmented in highly dense cell populations. Experiments are conducted to evaluate the effectiveness of our method, and the results show its state-of-the-art performance.
1812.01404
Zhan Yang
Zhan Yang, Osolo Ian Raymond, Wuqing Sun, Jun Long
Deep Attention-guided Hashing
Accepted to IEEE ACCESS
null
10.1109/ACCESS.2019.2891894
null
cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With the rapid growth of multimedia data (e.g., images, audio, and video) on the web, learning-based hashing techniques such as Deep Supervised Hashing (DSH) have proven to be very efficient for large-scale multimedia search. The recent successes of learning-based hashing are largely due to deep learning-based hashing methods. However, previous learning-based hashing methods have some limitations (e.g., the learned hash codes contain repetitive and highly correlated information). In this paper, we propose a novel learning-based hashing method, named Deep Attention-guided Hashing (DAgH). DAgH is implemented using a two-stream framework. The core idea is to use guided hash codes, generated by the hashing network of the first stream (called the first hashing network), to guide the training of the hashing network of the second stream (called the second hashing network). Specifically, the first network leverages an attention network and a hashing network to generate attention-guided hash codes from the original images. The loss function we propose contains two components: the semantic loss and the attention loss. The attention loss penalizes the attention network so that it attends to the salient regions in pairs of images; in the second network, the attention-guided hash codes are used to guide the training of the second hashing network (i.e., these codes are treated as supervised labels for training the second network). By doing this, DAgH can make full use of the most critical information contained in images to guide the second hashing network to learn efficient hash codes in a truly end-to-end fashion. Results from our experiments demonstrate that DAgH generates high-quality hash codes and outperforms current state-of-the-art methods on three benchmark datasets: CIFAR-10, NUS-WIDE, and ImageNet.
[ { "created": "Tue, 4 Dec 2018 13:36:35 GMT", "version": "v1" }, { "created": "Tue, 8 Jan 2019 01:48:15 GMT", "version": "v2" } ]
2019-01-09
[ [ "Yang", "Zhan", "" ], [ "Raymond", "Osolo Ian", "" ], [ "Sun", "Wuqing", "" ], [ "Long", "Jun", "" ] ]
With the rapid growth of multimedia data (e.g., images, audio, and video) on the web, learning-based hashing techniques such as Deep Supervised Hashing (DSH) have proven to be very efficient for large-scale multimedia search. The recent successes of learning-based hashing are largely due to deep learning-based hashing methods. However, previous learning-based hashing methods have some limitations (e.g., the learned hash codes contain repetitive and highly correlated information). In this paper, we propose a novel learning-based hashing method, named Deep Attention-guided Hashing (DAgH). DAgH is implemented using a two-stream framework. The core idea is to use guided hash codes, generated by the hashing network of the first stream (called the first hashing network), to guide the training of the hashing network of the second stream (called the second hashing network). Specifically, the first network leverages an attention network and a hashing network to generate attention-guided hash codes from the original images. The loss function we propose contains two components: the semantic loss and the attention loss. The attention loss penalizes the attention network so that it attends to the salient regions in pairs of images; in the second network, the attention-guided hash codes are used to guide the training of the second hashing network (i.e., these codes are treated as supervised labels for training the second network). By doing this, DAgH can make full use of the most critical information contained in images to guide the second hashing network to learn efficient hash codes in a truly end-to-end fashion. Results from our experiments demonstrate that DAgH generates high-quality hash codes and outperforms current state-of-the-art methods on three benchmark datasets: CIFAR-10, NUS-WIDE, and ImageNet.
2112.08787
Yue Yu
Yue Yu, Lingkai Kong, Jieyu Zhang, Rongzhi Zhang, Chao Zhang
AcTune: Uncertainty-aware Active Self-Training for Semi-Supervised Active Learning with Pretrained Language Models
NAACL 2022 Main Conference (Code: https://github.com/yueyu1030/actune)
NAACL 2022
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
While pre-trained language model (PLM) fine-tuning has achieved strong performance on many NLP tasks, the fine-tuning stage can still be demanding in terms of labeled data. Recent works have resorted to active fine-tuning to improve the label efficiency of PLM fine-tuning, but none of them investigate the potential of unlabeled data. We propose AcTune, a new framework that leverages unlabeled data to improve the label efficiency of active PLM fine-tuning. AcTune switches between data annotation and model self-training based on uncertainty: it selects high-uncertainty unlabeled samples for active annotation and low-uncertainty ones for model self-training. Under this framework, we design (1) a region-aware sampling strategy that reduces redundancy when actively querying for annotations and (2) a momentum-based memory bank that dynamically aggregates the model's pseudo labels to suppress label noise in self-training. Experiments on 6 text classification datasets show that AcTune outperforms the strongest active learning and self-training baselines and improves the label efficiency of PLM fine-tuning by 56.2\% on average. Our implementation will be available at \url{https://github.com/yueyu1030/actune}.
[ { "created": "Thu, 16 Dec 2021 11:09:48 GMT", "version": "v1" }, { "created": "Tue, 3 May 2022 04:42:55 GMT", "version": "v2" } ]
2022-05-04
[ [ "Yu", "Yue", "" ], [ "Kong", "Lingkai", "" ], [ "Zhang", "Jieyu", "" ], [ "Zhang", "Rongzhi", "" ], [ "Zhang", "Chao", "" ] ]
While pre-trained language model (PLM) fine-tuning has achieved strong performance on many NLP tasks, the fine-tuning stage can still be demanding in terms of labeled data. Recent works have resorted to active fine-tuning to improve the label efficiency of PLM fine-tuning, but none of them investigate the potential of unlabeled data. We propose AcTune, a new framework that leverages unlabeled data to improve the label efficiency of active PLM fine-tuning. AcTune switches between data annotation and model self-training based on uncertainty: it selects high-uncertainty unlabeled samples for active annotation and low-uncertainty ones for model self-training. Under this framework, we design (1) a region-aware sampling strategy that reduces redundancy when actively querying for annotations and (2) a momentum-based memory bank that dynamically aggregates the model's pseudo labels to suppress label noise in self-training. Experiments on 6 text classification datasets show that AcTune outperforms the strongest active learning and self-training baselines and improves the label efficiency of PLM fine-tuning by 56.2\% on average. Our implementation will be available at \url{https://github.com/yueyu1030/actune}.
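A hedged sketch of the uncertainty-based switching at AcTune's core, omitting the region-aware sampling and the momentum memory bank; split sizes and interfaces are illustrative assumptions.

```python
import numpy as np

def actune_round(probs, query_k, pseudo_k):
    """One uncertainty-split round (illustrative): given predicted class
    probabilities over the unlabeled pool, send the most uncertain
    samples to annotators and self-train on the most confident ones.

    probs: (n, n_classes) predicted probabilities.
    Returns (indices to annotate, (indices, pseudo labels) for self-training).
    """
    eps = 1e-12
    entropy = -(probs * np.log(probs + eps)).sum(axis=1)
    order = entropy.argsort()
    to_annotate = order[-query_k:]          # highest-entropy samples
    confident = order[:pseudo_k]            # lowest-entropy samples
    pseudo_labels = probs[confident].argmax(axis=1)
    return to_annotate, (confident, pseudo_labels)

pool = np.random.default_rng(0).dirichlet([0.5, 0.5], size=100)
ann, (idx, pl) = actune_round(pool, query_k=8, pseudo_k=16)
print(len(ann), len(idx))   # 8 to annotate, 16 to self-train on
```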
1812.08861
Aliaksandr Siarohin
Aliaksandr Siarohin, St\'ephane Lathuili\`ere, Sergey Tulyakov, Elisa Ricci and Nicu Sebe
Animating Arbitrary Objects via Deep Motion Transfer
CVPR-2019 (oral)
null
null
null
cs.GR cs.CV cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper introduces a novel deep learning framework for image animation. Given an input image with a target object and a driving video sequence depicting a moving object, our framework generates a video in which the target object is animated according to the driving sequence. This is achieved through a deep architecture that decouples appearance and motion information. Our framework consists of three main modules: (i) a Keypoint Detector, trained in an unsupervised manner to extract object keypoints, (ii) a Dense Motion prediction network for generating dense heatmaps from sparse keypoints, in order to better encode motion information, and (iii) a Motion Transfer Network, which uses the motion heatmaps and appearance information extracted from the input image to synthesize the output frames. We demonstrate the effectiveness of our method on several benchmark datasets, spanning a wide variety of object appearances, and show that our approach outperforms state-of-the-art image animation and video generation methods. Our source code is publicly available.
[ { "created": "Thu, 20 Dec 2018 21:45:56 GMT", "version": "v1" }, { "created": "Mon, 24 Dec 2018 08:01:58 GMT", "version": "v2" }, { "created": "Fri, 30 Aug 2019 23:48:13 GMT", "version": "v3" } ]
2019-09-04
[ [ "Siarohin", "Aliaksandr", "" ], [ "Lathuilière", "Stéphane", "" ], [ "Tulyakov", "Sergey", "" ], [ "Ricci", "Elisa", "" ], [ "Sebe", "Nicu", "" ] ]
This paper introduces a novel deep learning framework for image animation. Given an input image with a target object and a driving video sequence depicting a moving object, our framework generates a video in which the target object is animated according to the driving sequence. This is achieved through a deep architecture that decouples appearance and motion information. Our framework consists of three main modules: (i) a Keypoint Detector, trained in an unsupervised manner to extract object keypoints, (ii) a Dense Motion prediction network for generating dense heatmaps from sparse keypoints, in order to better encode motion information, and (iii) a Motion Transfer Network, which uses the motion heatmaps and appearance information extracted from the input image to synthesize the output frames. We demonstrate the effectiveness of our method on several benchmark datasets, spanning a wide variety of object appearances, and show that our approach outperforms state-of-the-art image animation and video generation methods. Our source code is publicly available.
2001.06891
Zhu Zhang
Zhu Zhang, Zhou Zhao, Yang Zhao, Qi Wang, Huasheng Liu, Lianli Gao
Where Does It Exist: Spatio-Temporal Video Grounding for Multi-Form Sentences
The camera ready version for CVPR 2020
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we consider a novel task, Spatio-Temporal Video Grounding for Multi-Form Sentences (STVG). Given an untrimmed video and a declarative/interrogative sentence depicting an object, STVG aims to localize the spatio-temporal tube of the queried object. STVG has two challenging settings: (1) We need to localize spatio-temporal object tubes from untrimmed videos, where the object may only exist in a very small segment of the video; (2) We deal with multi-form sentences, including the declarative sentences with explicit objects and interrogative sentences with unknown objects. Existing methods cannot tackle the STVG task due to the ineffective tube pre-generation and the lack of object relationship modeling. Thus, we then propose a novel Spatio-Temporal Graph Reasoning Network (STGRN) for this task. First, we build a spatio-temporal region graph to capture the region relationships with temporal object dynamics, which involves the implicit and explicit spatial subgraphs in each frame and the temporal dynamic subgraph across frames. We then incorporate textual clues into the graph and develop the multi-step cross-modal graph reasoning. Next, we introduce a spatio-temporal localizer with a dynamic selection method to directly retrieve the spatio-temporal tubes without tube pre-generation. Moreover, we contribute a large-scale video grounding dataset VidSTG based on video relation dataset VidOR. The extensive experiments demonstrate the effectiveness of our method.
[ { "created": "Sun, 19 Jan 2020 19:53:22 GMT", "version": "v1" }, { "created": "Tue, 25 Feb 2020 13:46:00 GMT", "version": "v2" }, { "created": "Tue, 24 Mar 2020 21:34:44 GMT", "version": "v3" } ]
2020-03-26
[ [ "Zhang", "Zhu", "" ], [ "Zhao", "Zhou", "" ], [ "Zhao", "Yang", "" ], [ "Wang", "Qi", "" ], [ "Liu", "Huasheng", "" ], [ "Gao", "Lianli", "" ] ]
In this paper, we consider a novel task, Spatio-Temporal Video Grounding for Multi-Form Sentences (STVG). Given an untrimmed video and a declarative/interrogative sentence depicting an object, STVG aims to localize the spatio-temporal tube of the queried object. STVG has two challenging settings: (1) We need to localize spatio-temporal object tubes from untrimmed videos, where the object may only exist in a very small segment of the video; (2) We deal with multi-form sentences, including the declarative sentences with explicit objects and interrogative sentences with unknown objects. Existing methods cannot tackle the STVG task due to the ineffective tube pre-generation and the lack of object relationship modeling. Thus, we then propose a novel Spatio-Temporal Graph Reasoning Network (STGRN) for this task. First, we build a spatio-temporal region graph to capture the region relationships with temporal object dynamics, which involves the implicit and explicit spatial subgraphs in each frame and the temporal dynamic subgraph across frames. We then incorporate textual clues into the graph and develop the multi-step cross-modal graph reasoning. Next, we introduce a spatio-temporal localizer with a dynamic selection method to directly retrieve the spatio-temporal tubes without tube pre-generation. Moreover, we contribute a large-scale video grounding dataset VidSTG based on video relation dataset VidOR. The extensive experiments demonstrate the effectiveness of our method.
2306.03906
Kenjiro Tadakuma
Josephine Galipon, Shoya Shimizu, Kenjiro Tadakuma
Biological Organisms as End Effectors
13 pages, 9 figures, 1 graphical abstract
null
null
null
cs.RO
http://creativecommons.org/licenses/by/4.0/
In robotics, an end effector is a device at the end of a robotic arm that is designed to physically interact with objects in the environment or with the environment itself. Effectively, it serves as the hand of the robot, carrying out tasks on behalf of humans. But could we turn this concept on its head and consider using living organisms themselves as end effectors? This paper introduces a novel idea of using whole living organisms as end effectors for robotics. We showcase this by demonstrating that pill bugs and chitons -- types of small, harmless creatures -- can be utilized as functional grippers. Crucially, this method does not harm these creatures, enabling their release back into nature after use. How this concept may be expanded to other organisms and applications is also discussed.
[ { "created": "Tue, 6 Jun 2023 17:59:29 GMT", "version": "v1" }, { "created": "Mon, 12 Jun 2023 15:22:02 GMT", "version": "v2" } ]
2023-06-13
[ [ "Galipon", "Josephine", "" ], [ "Shimizu", "Shoya", "" ], [ "Tadakuma", "Kenjiro", "" ] ]
In robotics, an end effector is a device at the end of a robotic arm that is designed to physically interact with objects in the environment or with the environment itself. Effectively, it serves as the hand of the robot, carrying out tasks on behalf of humans. But could we turn this concept on its head and consider using living organisms themselves as end effectors? This paper introduces a novel idea of using whole living organisms as end effectors for robotics. We showcase this by demonstrating that pill bugs and chitons -- types of small, harmless creatures -- can be utilized as functional grippers. Crucially, this method does not harm these creatures, enabling their release back into nature after use. How this concept may be expanded to other organisms and applications is also discussed.
2211.09064
Gecheng Chen
Gecheng Chen, Yu Zhou, Xudong Zhang, Rui Tuo
Renewing Iterative Self-labeling Domain Adaptation with Application to the Spine Motion Prediction
null
null
null
null
cs.LG stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The area of transfer learning comprises supervised machine learning methods that cope with settings in which the training and testing data have different input feature spaces or distributions. In this work, we propose a novel transfer learning algorithm called Renewing Iterative Self-labeling Domain Adaptation (Re-ISDA).
[ { "created": "Mon, 14 Nov 2022 21:06:02 GMT", "version": "v1" } ]
2022-11-17
[ [ "Chen", "Gecheng", "" ], [ "Zhou", "Yu", "" ], [ "Zhang", "Xudong", "" ], [ "Tuo", "Rui", "" ] ]
The area of transfer learning comprises supervised machine learning methods that cope with settings in which the training and testing data have different input feature spaces or distributions. In this work, we propose a novel transfer learning algorithm called Renewing Iterative Self-labeling Domain Adaptation (Re-ISDA).
2210.11787
Prafulla Kumar Choubey
Prafulla Kumar Choubey and Ruihong Huang
Modeling Document-level Temporal Structures for Building Temporal Dependency Graphs
AACL 2022
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
We propose to leverage news discourse profiling to model document-level temporal structures for building temporal dependency graphs. Our key observation is that the functional roles of sentences used for profiling news discourse signify different time frames relevant to a news story and can, therefore, help to recover the global temporal structure of a document. Our analyses and experiments with the widely used knowledge distillation technique show that discourse profiling effectively identifies distant inter-sentence event and (or) time expression pairs that are temporally related and otherwise difficult to locate.
[ { "created": "Fri, 21 Oct 2022 07:45:17 GMT", "version": "v1" } ]
2022-10-24
[ [ "Choubey", "Prafulla Kumar", "" ], [ "Huang", "Ruihong", "" ] ]
We propose to leverage news discourse profiling to model document-level temporal structures for building temporal dependency graphs. Our key observation is that the functional roles of sentences used for profiling news discourse signify different time frames relevant to a news story and can, therefore, help to recover the global temporal structure of a document. Our analyses and experiments with the widely used knowledge distillation technique show that discourse profiling effectively identifies distant inter-sentence event and (or) time expression pairs that are temporally related and otherwise difficult to locate.
2403.19432
Song Wang
Song Wang, Yiliang Zhou, Ziqiang Han, Cui Tao, Yunyu Xiao, Ying Ding, Joydeep Ghosh, Yifan Peng
Uncovering Misattributed Suicide Causes through Annotation Inconsistency Detection in Death Investigation Notes
19 pages, 6 figures
null
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by/4.0/
Data accuracy is essential for scientific research and policy development. The National Violent Death Reporting System (NVDRS) data is widely used for discovering the patterns and causes of death. Recent studies have suggested that annotation inconsistencies exist within the NVDRS and can lead to erroneous suicide-cause attributions. We present an empirical Natural Language Processing (NLP) approach to detect annotation inconsistencies and adopt a cross-validation-like paradigm to identify problematic instances. We analyzed 267,804 suicide death incidents between 2003 and 2020 from the NVDRS. Our results showed that incorporating the target state's data into training the suicide-crisis classifier brought a 5.4% increase in the F-1 score on the target state's test set and a 1.1% decrease on other states' test sets. To conclude, we demonstrated the annotation inconsistencies in the NVDRS's death investigation notes, identified problematic instances, evaluated the effectiveness of correcting those instances, and proposed an NLP-based improvement solution.
[ { "created": "Thu, 28 Mar 2024 14:03:12 GMT", "version": "v1" }, { "created": "Fri, 29 Mar 2024 17:21:02 GMT", "version": "v2" } ]
2024-04-01
[ [ "Wang", "Song", "" ], [ "Zhou", "Yiliang", "" ], [ "Han", "Ziqiang", "" ], [ "Tao", "Cui", "" ], [ "Xiao", "Yunyu", "" ], [ "Ding", "Ying", "" ], [ "Ghosh", "Joydeep", "" ], [ "Peng", "Yifan", "" ] ]
Data accuracy is essential for scientific research and policy development. The National Violent Death Reporting System (NVDRS) data is widely used for discovering the patterns and causes of death. Recent studies have suggested that annotation inconsistencies exist within the NVDRS and can lead to erroneous suicide-cause attributions. We present an empirical Natural Language Processing (NLP) approach to detect annotation inconsistencies and adopt a cross-validation-like paradigm to identify problematic instances. We analyzed 267,804 suicide death incidents between 2003 and 2020 from the NVDRS. Our results showed that incorporating the target state's data into training the suicide-crisis classifier brought a 5.4% increase in the F-1 score on the target state's test set and a 1.1% decrease on other states' test sets. To conclude, we demonstrated the annotation inconsistencies in the NVDRS's death investigation notes, identified problematic instances, evaluated the effectiveness of correcting those instances, and proposed an NLP-based improvement solution.
2010.04927
Qiansheng Wang
Qiansheng Wang, Yuxin Liu, Chengguo Lv, Zhen Wang and Guohong Fu
Cue-word Driven Neural Response Generation with a Shrinking Vocabulary
null
null
null
null
cs.CL cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Open-domain response generation is the task of generating sensible and informative responses to the source sentence. However, neural models tend to generate safe and meaningless responses. While cue-word introducing approaches encourage responses with concrete semantics and have shown tremendous potential, they still fail to explore diverse responses during decoding. In this paper, we propose a novel but natural approach that can produce multiple cue-words during decoding, and then uses the produced cue-words to drive decoding and shrinks the decoding vocabulary. Thus the neural generation model can explore the full space of responses and discover informative ones with efficiency. Experimental results show that our approach significantly outperforms several strong baseline models with much lower decoding complexity. Especially, our approach can converge to concrete semantics more efficiently during decoding.
[ { "created": "Sat, 10 Oct 2020 07:13:32 GMT", "version": "v1" } ]
2020-10-13
[ [ "Wang", "Qiansheng", "" ], [ "Liu", "Yuxin", "" ], [ "Lv", "Chengguo", "" ], [ "Wang", "Zhen", "" ], [ "Fu", "Guohong", "" ] ]
Open-domain response generation is the task of generating sensible and informative responses to the source sentence. However, neural models tend to generate safe and meaningless responses. While cue-word introducing approaches encourage responses with concrete semantics and have shown tremendous potential, they still fail to explore diverse responses during decoding. In this paper, we propose a novel but natural approach that can produce multiple cue-words during decoding, and then uses the produced cue-words to drive decoding and shrinks the decoding vocabulary. Thus the neural generation model can explore the full space of responses and discover informative ones with efficiency. Experimental results show that our approach significantly outperforms several strong baseline models with much lower decoding complexity. Especially, our approach can converge to concrete semantics more efficiently during decoding.
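The shrinking-vocabulary idea above lends itself to a compact illustration. The sketch below is not the paper's model: it only shows the generic mechanism of restricting next-token decoding to a cue-word-driven subset by masking logits, with the vocabulary, scores, and allowed set invented for the example.

```python
import numpy as np

# Hedged sketch: restrict decoding to an allowed (cue-word-driven) subset
# by masking next-token logits; everything here is toy data.
vocab = ["<eos>", "the", "movie", "was", "boring", "thrilling", "plot"]
logits = np.array([0.1, 1.2, 0.7, 0.9, 0.3, 0.2, 0.5])
allowed = {"<eos>", "movie", "thrilling", "plot"}  # the shrunken vocabulary

mask = np.array([w in allowed for w in vocab])
masked = np.where(mask, logits, -np.inf)           # forbid all other tokens
probs = np.exp(masked - masked.max())              # stable softmax
probs /= probs.sum()
print(dict(zip(vocab, probs.round(3))))            # only allowed words get mass
```

Because the disallowed logits become negative infinity, the renormalized distribution places all probability mass on the shrunken vocabulary, which is what makes the search over responses cheaper.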
1304.7854
Leopoldo Bertossi
Leopoldo Bertossi and Jaffer Gardezi
On the Complexity of Query Answering under Matching Dependencies for Entity Resolution
To appear in Proc. of the Alberto Mendelzon International Workshop on Foundations of Data Management (AMW 2013)
null
null
null
cs.DB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Matching Dependencies (MDs) are a relatively recent proposal for declarative entity resolution. They are rules that specify, given the similarities satisfied by values in a database, what values should be considered duplicates, and have to be matched. On the basis of a chase-like procedure for MD enforcement, we can obtain clean (duplicate-free) instances; actually possibly several of them. The resolved answers to queries are those that are invariant under the resulting class of resolved instances. In previous work we identified some tractable cases (i.e. for certain classes of queries and MDs) of resolved query answering. In this paper we further investigate the complexity of this problem, identifying some intractable cases. For a special case we obtain a dichotomy complexity result.
[ { "created": "Tue, 30 Apr 2013 04:05:44 GMT", "version": "v1" }, { "created": "Sun, 26 May 2013 21:34:35 GMT", "version": "v2" } ]
2013-05-28
[ [ "Bertossi", "Leopoldo", "" ], [ "Gardezi", "Jaffer", "" ] ]
Matching Dependencies (MDs) are a relatively recent proposal for declarative entity resolution. They are rules that specify, given the similarities satisfied by values in a database, what values should be considered duplicates, and have to be matched. On the basis of a chase-like procedure for MD enforcement, we can obtain clean (duplicate-free) instances; actually possibly several of them. The resolved answers to queries are those that are invariant under the resulting class of resolved instances. In previous work we identified some tractable cases (i.e. for certain classes of queries and MDs) of resolved query answering. In this paper we further investigate the complexity of this problem, identifying some intractable cases. For a special case we obtain a dichotomy complexity result.
1908.08005
No\"elie Cherrier
No\"elie Cherrier, Jean-Philippe Poli, Maxime Defurne and Franck Sabati\'e
Consistent Feature Construction with Constrained Genetic Programming for Experimental Physics
Accepted in this version to CEC 2019
Proceedings of 2019 IEEE Congress on Evolutionary Computation (CEC), Wellington, New Zealand, 2019, pp. 1650-1658
10.1109/CEC.2019.8789937
null
cs.NE cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A good feature representation is a determining factor in achieving high classification performance for many machine learning algorithms. This is especially true for techniques that do not build complex internal representations of data (e.g. decision trees, in contrast to deep neural networks). To transform the feature space, feature construction techniques build new high-level features from the original ones. Among these techniques, Genetic Programming is a good candidate to provide interpretable features required for data analysis in high energy physics. Classically, original features or higher-level features based on physics first principles are used as inputs for training. However, physicists would benefit from an automatic and interpretable feature construction for the classification of particle collision events. Our main contribution consists in combining different aspects of Genetic Programming and applying them to feature construction for experimental physics. In particular, to be applicable to physics, dimensional consistency is enforced using grammars. Results of experiments on three physics datasets show that the constructed features can bring a significant gain to the classification accuracy. To the best of our knowledge, this is the first time a method has been proposed for interpretable feature construction with units of measurement, and experts in high-energy physics have validated the overall approach as well as the interpretability of the built features.
[ { "created": "Sat, 17 Aug 2019 10:55:15 GMT", "version": "v1" } ]
2019-08-22
[ [ "Cherrier", "Noëlie", "" ], [ "Poli", "Jean-Philippe", "" ], [ "Defurne", "Maxime", "" ], [ "Sabatié", "Franck", "" ] ]
A good feature representation is a determining factor in achieving high classification performance for many machine learning algorithms. This is especially true for techniques that do not build complex internal representations of data (e.g. decision trees, in contrast to deep neural networks). To transform the feature space, feature construction techniques build new high-level features from the original ones. Among these techniques, Genetic Programming is a good candidate to provide interpretable features required for data analysis in high energy physics. Classically, original features or higher-level features based on physics first principles are used as inputs for training. However, physicists would benefit from an automatic and interpretable feature construction for the classification of particle collision events. Our main contribution consists in combining different aspects of Genetic Programming and applying them to feature construction for experimental physics. In particular, to be applicable to physics, dimensional consistency is enforced using grammars. Results of experiments on three physics datasets show that the constructed features can bring a significant gain to the classification accuracy. To the best of our knowledge, this is the first time a method has been proposed for interpretable feature construction with units of measurement, and experts in high-energy physics have validated the overall approach as well as the interpretability of the built features.
2102.05700
Johannes Knittel
Johannes Knittel, Steffen Koch, Thomas Ertl
ELSKE: Efficient Large-Scale Keyphrase Extraction
null
null
10.1145/3469096.3474930
null
cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Keyphrase extraction methods can provide insights into large collections of documents such as social media posts. Existing methods, however, are less suited for the real-time analysis of streaming data, because they are computationally too expensive or require restrictive constraints regarding the structure of keyphrases. We propose an efficient approach to extract keyphrases from large document collections and show that the method also performs competitively on individual documents.
[ { "created": "Wed, 10 Feb 2021 19:14:01 GMT", "version": "v1" } ]
2021-09-16
[ [ "Knittel", "Johannes", "" ], [ "Koch", "Steffen", "" ], [ "Ertl", "Thomas", "" ] ]
Keyphrase extraction methods can provide insights into large collections of documents such as social media posts. Existing methods, however, are less suited for the real-time analysis of streaming data, because they are computationally too expensive or require restrictive constraints regarding the structure of keyphrases. We propose an efficient approach to extract keyphrases from large document collections and show that the method also performs competitively on individual documents.
2002.07948
Alireza Fallah
Alireza Fallah, Aryan Mokhtari, Asuman Ozdaglar
Personalized Federated Learning: A Meta-Learning Approach
To appear in 34th Conference on Neural Information Processing Systems (NeurIPS 2020)
null
null
null
cs.LG math.OC stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In Federated Learning, we aim to train models across multiple computing units (users), while users can only communicate with a common central server, without exchanging their data samples. This mechanism exploits the computational power of all users and allows users to obtain a richer model as their models are trained over a larger set of data points. However, this scheme only develops a common output for all the users, and, therefore, it does not adapt the model to each user. This is an important missing feature, especially given the heterogeneity of the underlying data distribution for various users. In this paper, we study a personalized variant of federated learning in which our goal is to find an initial shared model that current or new users can easily adapt to their local dataset by performing one or a few steps of gradient descent with respect to their own data. This approach keeps all the benefits of the federated learning architecture, and, by structure, leads to a more personalized model for each user. We show this problem can be studied within the Model-Agnostic Meta-Learning (MAML) framework. Inspired by this connection, we study a personalized variant of the well-known Federated Averaging algorithm and evaluate its performance in terms of gradient norm for non-convex loss functions. Further, we characterize how this performance is affected by the closeness of underlying distributions of user data, measured in terms of distribution distances such as Total Variation and the 1-Wasserstein metric.
[ { "created": "Wed, 19 Feb 2020 01:08:46 GMT", "version": "v1" }, { "created": "Tue, 23 Jun 2020 04:16:11 GMT", "version": "v2" }, { "created": "Sat, 27 Jun 2020 02:52:01 GMT", "version": "v3" }, { "created": "Fri, 23 Oct 2020 03:04:01 GMT", "version": "v4" } ]
2020-10-26
[ [ "Fallah", "Alireza", "" ], [ "Mokhtari", "Aryan", "" ], [ "Ozdaglar", "Asuman", "" ] ]
In Federated Learning, we aim to train models across multiple computing units (users), while users can only communicate with a common central server, without exchanging their data samples. This mechanism exploits the computational power of all users and allows users to obtain a richer model as their models are trained over a larger set of data points. However, this scheme only develops a common output for all the users, and, therefore, it does not adapt the model to each user. This is an important missing feature, especially given the heterogeneity of the underlying data distribution for various users. In this paper, we study a personalized variant of federated learning in which our goal is to find an initial shared model that current or new users can easily adapt to their local dataset by performing one or a few steps of gradient descent with respect to their own data. This approach keeps all the benefits of the federated learning architecture, and, by structure, leads to a more personalized model for each user. We show this problem can be studied within the Model-Agnostic Meta-Learning (MAML) framework. Inspired by this connection, we study a personalized variant of the well-known Federated Averaging algorithm and evaluate its performance in terms of gradient norm for non-convex loss functions. Further, we characterize how this performance is affected by the closeness of underlying distributions of user data, measured in terms of distribution distances such as Total Variation and the 1-Wasserstein metric.
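The MAML connection described in this abstract can be made concrete with a small meta-gradient loop. The sketch below is an assumption-laden toy, not the paper's analysis: it uses quadratic per-user losses f_i(w) = 0.5*||w - c_i||^2, for which the inner-step Jacobian (I - alpha*Hessian) reduces to the scalar (1 - alpha), with invented learning rates and dimensions.

```python
import numpy as np

# Toy sketch: learn a shared initialization w such that one local
# gradient step adapts it well to each user's quadratic loss.
rng = np.random.default_rng(0)
centers = rng.normal(size=(5, 3))        # one optimum c_i per user
alpha, beta, rounds = 0.1, 0.05, 500     # inner lr, meta lr, iterations

def grad_f(w, c):
    return w - c                         # gradient of 0.5 * ||w - c||^2

w = np.zeros(3)                          # shared initialization
for _ in range(rounds):
    meta_grad = np.zeros_like(w)
    for c in centers:
        w_adapted = w - alpha * grad_f(w, c)             # one local step
        meta_grad += (1 - alpha) * grad_f(w_adapted, c)  # chain rule; H = I here
    w -= beta * meta_grad / len(centers) # FedAvg-style averaged meta-update
print("shared initialization:", w)
```

For these losses the learned initialization converges toward the mean of the user optima, which is exactly the minimizer of the one-step-adapted objective.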
1807.05127
Patrick Verga
Shikhar Murty*, Patrick Verga*, Luke Vilnis, Irena Radovanovic, Andrew McCallum
Hierarchical Losses and New Resources for Fine-grained Entity Typing and Linking
ACL 2018
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Extraction from raw text to a knowledge base of entities and fine-grained types is often cast as prediction into a flat set of entity and type labels, neglecting the rich hierarchies over types and entities contained in curated ontologies. Previous attempts to incorporate hierarchical structure have yielded little benefit and are restricted to shallow ontologies. This paper presents new methods using real and complex bilinear mappings for integrating hierarchical information, yielding substantial improvement over flat predictions in entity linking and fine-grained entity typing, and achieving new state-of-the-art results for end-to-end models on the benchmark FIGER dataset. We also present two new human-annotated datasets containing wide and deep hierarchies which we will release to the community to encourage further research in this direction: MedMentions, a collection of PubMed abstracts in which 246k mentions have been mapped to the massive UMLS ontology; and TypeNet, which aligns Freebase types with the WordNet hierarchy to obtain nearly 2k entity types. In experiments on all three datasets we show substantial gains from hierarchy-aware training.
[ { "created": "Fri, 13 Jul 2018 15:15:41 GMT", "version": "v1" } ]
2018-07-16
[ [ "Murty*", "Shikhar", "" ], [ "Verga*", "Patrick", "" ], [ "Vilnis", "Luke", "" ], [ "Radovanovic", "Irena", "" ], [ "McCallum", "Andrew", "" ] ]
Extraction from raw text to a knowledge base of entities and fine-grained types is often cast as prediction into a flat set of entity and type labels, neglecting the rich hierarchies over types and entities contained in curated ontologies. Previous attempts to incorporate hierarchical structure have yielded little benefit and are restricted to shallow ontologies. This paper presents new methods using real and complex bilinear mappings for integrating hierarchical information, yielding substantial improvement over flat predictions in entity linking and fine-grained entity typing, and achieving new state-of-the-art results for end-to-end models on the benchmark FIGER dataset. We also present two new human-annotated datasets containing wide and deep hierarchies which we will release to the community to encourage further research in this direction: MedMentions, a collection of PubMed abstracts in which 246k mentions have been mapped to the massive UMLS ontology; and TypeNet, which aligns Freebase types with the WordNet hierarchy to obtain nearly 2k entity types. In experiments on all three datasets we show substantial gains from hierarchy-aware training.
1501.01327
Rama Krishna Bandi
Rama Krishna Bandi and Maheshanand Bhaintwal
Cyclic codes over $\mathbb{Z}_4+u\mathbb{Z}_4$
arXiv admin note: text overlap with arXiv:1412.3751
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we have studied cyclic codes over the ring $R=\mathbb{Z}_4+u\mathbb{Z}_4$, $u^2=0$. We have considered cyclic codes of odd lengths. A sufficient condition for a cyclic code over $R$ to be a $\mathbb{Z}_4$-free module is presented. We have provided the general form of the generators of a cyclic code over $R$ and determined a formula for the ranks of such codes. In this paper we have mainly focused on principally generated cyclic codes of odd length over $R$. We have determined a necessary condition and a sufficient condition for cyclic codes of odd lengths over $R$ to be $R$-free.
[ { "created": "Tue, 6 Jan 2015 22:19:02 GMT", "version": "v1" } ]
2015-01-08
[ [ "Bandi", "Rama Krishna", "" ], [ "Bhaintwal", "Maheshanand", "" ] ]
In this paper, we have studied cyclic codes over the ring $R=\mathbb{Z}_4+u\mathbb{Z}_4$, $u^2=0$. We have considered cyclic codes of odd lengths. A sufficient condition for a cyclic code over $R$ to be a $\mathbb{Z}_4$-free module is presented. We have provided the general form of the generators of a cyclic code over $R$ and determined a formula for the ranks of such codes. In this paper we have mainly focused on principally generated cyclic codes of odd length over $R$. We have determined a necessary condition and a sufficient condition for cyclic codes of odd lengths over $R$ to be $R$-free.
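For context, the standard algebraic identification underlying such studies (a textbook fact, not a result specific to this paper): a cyclic code of length $n$ over a ring $R$ is precisely an ideal of the quotient ring $R[x]/\langle x^n - 1 \rangle$, via the correspondence

```latex
\[
  (c_0, c_1, \ldots, c_{n-1}) \;\longleftrightarrow\;
  c_0 + c_1 x + \cdots + c_{n-1} x^{\,n-1} \pmod{x^n - 1},
\]
```

under which the cyclic shift of a codeword corresponds to multiplication by $x$.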
1709.08318
Ari Stern
Ari Stern and Alexander Tettenhorst
Hodge decomposition and the Shapley value of a cooperative game
21 pages; v2: rewrote Section 2.2 to be a more elementary introduction to the combinatorial Hodge decomposition, added Section 3.5 on explicit decomposition via discrete Green's functions, other minor edits
Games Econom. Behav., 113 (2019), 186-198
10.1016/j.geb.2018.09.006
null
cs.GT math.CO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We show that a cooperative game may be decomposed into a sum of component games, one for each player, using the combinatorial Hodge decomposition on a graph. This decomposition is shown to satisfy certain efficiency, null-player, symmetry, and linearity properties. Consequently, we obtain a new characterization of the classical Shapley value as the value of the grand coalition in each player's component game. We also relate this decomposition to a least-squares problem involving inessential games (in a similar spirit to previous work on least-squares and minimum-norm solution concepts) and to the graph Laplacian. Finally, we generalize this approach to games with weights and/or constraints on coalition formation.
[ { "created": "Mon, 25 Sep 2017 04:51:45 GMT", "version": "v1" }, { "created": "Tue, 18 Sep 2018 21:29:58 GMT", "version": "v2" } ]
2019-03-28
[ [ "Stern", "Ari", "" ], [ "Tettenhorst", "Alexander", "" ] ]
We show that a cooperative game may be decomposed into a sum of component games, one for each player, using the combinatorial Hodge decomposition on a graph. This decomposition is shown to satisfy certain efficiency, null-player, symmetry, and linearity properties. Consequently, we obtain a new characterization of the classical Shapley value as the value of the grand coalition in each player's component game. We also relate this decomposition to a least-squares problem involving inessential games (in a similar spirit to previous work on least-squares and minimum-norm solution concepts) and to the graph Laplacian. Finally, we generalize this approach to games with weights and/or constraints on coalition formation.
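For reference, the classical Shapley value that this decomposition re-characterizes is given by the standard formula (a well-known definition, not a contribution of the paper): for a game $v$ on player set $N$,

```latex
\[
  \phi_i(v) \;=\; \sum_{S \subseteq N \setminus \{i\}}
  \frac{|S|!\,\bigl(|N|-|S|-1\bigr)!}{|N|!}\,
  \bigl(v(S \cup \{i\}) - v(S)\bigr),
\]
```

i.e., player $i$'s average marginal contribution over all orders in which the grand coalition can form.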
2211.08371
Daniel Fried
Daniel Fried, Nicholas Tomlin, Jennifer Hu, Roma Patel, Aida Nematzadeh
Pragmatics in Language Grounding: Phenomena, Tasks, and Modeling Approaches
Findings of EMNLP 2023
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
People rely heavily on context to enrich meaning beyond what is literally said, enabling concise but effective communication. To interact successfully and naturally with people, user-facing artificial intelligence systems will require similar skills in pragmatics: relying on various types of context -- from shared linguistic goals and conventions, to the visual and embodied world -- to use language effectively. We survey existing grounded settings and pragmatic modeling approaches and analyze how the task goals, environmental contexts, and communicative affordances in each work enrich linguistic meaning. We present recommendations for future grounded task design to naturally elicit pragmatic phenomena, and suggest directions that focus on a broader range of communicative contexts and affordances.
[ { "created": "Tue, 15 Nov 2022 18:21:46 GMT", "version": "v1" }, { "created": "Sun, 21 May 2023 23:34:39 GMT", "version": "v2" }, { "created": "Tue, 21 Nov 2023 23:04:53 GMT", "version": "v3" } ]
2023-11-23
[ [ "Fried", "Daniel", "" ], [ "Tomlin", "Nicholas", "" ], [ "Hu", "Jennifer", "" ], [ "Patel", "Roma", "" ], [ "Nematzadeh", "Aida", "" ] ]
People rely heavily on context to enrich meaning beyond what is literally said, enabling concise but effective communication. To interact successfully and naturally with people, user-facing artificial intelligence systems will require similar skills in pragmatics: relying on various types of context -- from shared linguistic goals and conventions, to the visual and embodied world -- to use language effectively. We survey existing grounded settings and pragmatic modeling approaches and analyze how the task goals, environmental contexts, and communicative affordances in each work enrich linguistic meaning. We present recommendations for future grounded task design to naturally elicit pragmatic phenomena, and suggest directions that focus on a broader range of communicative contexts and affordances.
2010.00970
Paul Ferm\'e
Siddharth Barman, Omar Fawzi, Paul Ferm\'e
Tight Approximation Guarantees for Concave Coverage Problems
33 pages. v3 minor corrections and added FPT hardness
null
10.4230/LIPIcs.STACS.2021.9
null
cs.DS cs.CC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the maximum coverage problem, we are given subsets $T_1, \ldots, T_m$ of a universe $[n]$ along with an integer $k$ and the objective is to find a subset $S \subseteq [m]$ of size $k$ that maximizes $C(S) := \Big|\bigcup_{i \in S} T_i\Big|$. It is a classic result that the greedy algorithm for this problem achieves an optimal approximation ratio of $1-e^{-1}$. In this work we consider a generalization of this problem wherein an element $a$ can contribute by an amount that depends on the number of times it is covered. Given a concave, nondecreasing function $\varphi$, we define $C^{\varphi}(S) := \sum_{a \in [n]}w_a\varphi(|S|_a)$, where $|S|_a = |\{i \in S : a \in T_i\}|$. The standard maximum coverage problem corresponds to taking $\varphi(j) = \min\{j,1\}$. For any such $\varphi$, we provide an efficient algorithm that achieves an approximation ratio equal to the Poisson concavity ratio of $\varphi$, defined by $\alpha_{\varphi} := \min_{x \in \mathbb{N}^*} \frac{\mathbb{E}[\varphi(\text{Poi}(x))]}{\varphi(\mathbb{E}[\text{Poi}(x)])}$. Complementing this approximation guarantee, we establish a matching NP-hardness result when $\varphi$ grows in a sublinear way. As special cases, we improve the result of [Barman et al., IPCO, 2020] about maximum multi-coverage, that was based on the unique games conjecture, and we recover the result of [Dudycz et al., IJCAI, 2020] on multi-winner approval-based voting for geometrically dominant rules. Our result goes beyond these special cases and we illustrate it with applications to distributed resource allocation problems, welfare maximization problems and approval-based voting for general rules.
[ { "created": "Fri, 2 Oct 2020 13:03:04 GMT", "version": "v1" }, { "created": "Fri, 13 Nov 2020 13:19:35 GMT", "version": "v2" }, { "created": "Mon, 18 Jan 2021 10:36:21 GMT", "version": "v3" } ]
2021-05-04
[ [ "Barman", "Siddharth", "" ], [ "Fawzi", "Omar", "" ], [ "Fermé", "Paul", "" ] ]
In the maximum coverage problem, we are given subsets $T_1, \ldots, T_m$ of a universe $[n]$ along with an integer $k$ and the objective is to find a subset $S \subseteq [m]$ of size $k$ that maximizes $C(S) := \Big|\bigcup_{i \in S} T_i\Big|$. It is a classic result that the greedy algorithm for this problem achieves an optimal approximation ratio of $1-e^{-1}$. In this work we consider a generalization of this problem wherein an element $a$ can contribute by an amount that depends on the number of times it is covered. Given a concave, nondecreasing function $\varphi$, we define $C^{\varphi}(S) := \sum_{a \in [n]}w_a\varphi(|S|_a)$, where $|S|_a = |\{i \in S : a \in T_i\}|$. The standard maximum coverage problem corresponds to taking $\varphi(j) = \min\{j,1\}$. For any such $\varphi$, we provide an efficient algorithm that achieves an approximation ratio equal to the Poisson concavity ratio of $\varphi$, defined by $\alpha_{\varphi} := \min_{x \in \mathbb{N}^*} \frac{\mathbb{E}[\varphi(\text{Poi}(x))]}{\varphi(\mathbb{E}[\text{Poi}(x)])}$. Complementing this approximation guarantee, we establish a matching NP-hardness result when $\varphi$ grows in a sublinear way. As special cases, we improve the result of [Barman et al., IPCO, 2020] about maximum multi-coverage, that was based on the unique games conjecture, and we recover the result of [Dudycz et al., IJCAI, 2020] on multi-winner approval-based voting for geometrically dominant rules. Our result goes beyond these special cases and we illustrate it with applications to distributed resource allocation problems, welfare maximization problems and approval-based voting for general rules.
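As a quick sanity check on the Poisson concavity ratio defined above, taking $\varphi(j) = \min\{j,1\}$ recovers the classic guarantee for standard maximum coverage:

```latex
\[
  \alpha_{\varphi}
  = \min_{x \in \mathbb{N}^*}
    \frac{\mathbb{E}\bigl[\min\{\mathrm{Poi}(x),\,1\}\bigr]}{\min\{x,\,1\}}
  = \min_{x \ge 1}\,\bigl(1 - e^{-x}\bigr)
  = 1 - e^{-1},
\]
```

since $\mathbb{E}[\min\{\mathrm{Poi}(x),1\}] = \mathbb{P}(\mathrm{Poi}(x) \ge 1) = 1 - e^{-x}$ and $\min\{x,1\} = 1$ for every integer $x \ge 1$, with the minimum attained at $x = 1$.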
2403.07691
Jiwoo Hong
Jiwoo Hong, Noah Lee, James Thorne
ORPO: Monolithic Preference Optimization without Reference Model
Preprint
null
null
null
cs.CL cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
While recent preference alignment algorithms for language models have demonstrated promising results, supervised fine-tuning (SFT) remains imperative for achieving successful convergence. In this paper, we study the crucial role of SFT within the context of preference alignment, emphasizing that a minor penalty for the disfavored generation style is sufficient for preference-aligned SFT. Building on this foundation, we introduce a straightforward and innovative reference model-free monolithic odds ratio preference optimization algorithm, ORPO, eliminating the necessity for an additional preference alignment phase. We demonstrate, both empirically and theoretically, that the odds ratio is a sensible choice for contrasting favored and disfavored styles during SFT across diverse model sizes from 125M to 7B. Specifically, fine-tuning Phi-2 (2.7B), Llama-2 (7B), and Mistral (7B) with ORPO on the UltraFeedback alone surpasses the performance of state-of-the-art language models with more than 7B and 13B parameters: achieving up to 12.20% on $\text{AlpacaEval}_{2.0}$ (Figure 1), 66.19% on IFEval (instruction-level loose, Table 6), and 7.32 in MT-Bench (Figure 12). We release code and model checkpoints for Mistral-ORPO-$\alpha$ (7B) and Mistral-ORPO-$\beta$ (7B).
[ { "created": "Tue, 12 Mar 2024 14:34:08 GMT", "version": "v1" }, { "created": "Thu, 14 Mar 2024 07:47:08 GMT", "version": "v2" } ]
2024-03-15
[ [ "Hong", "Jiwoo", "" ], [ "Lee", "Noah", "" ], [ "Thorne", "James", "" ] ]
While recent preference alignment algorithms for language models have demonstrated promising results, supervised fine-tuning (SFT) remains imperative for achieving successful convergence. In this paper, we study the crucial role of SFT within the context of preference alignment, emphasizing that a minor penalty for the disfavored generation style is sufficient for preference-aligned SFT. Building on this foundation, we introduce a straightforward and innovative reference model-free monolithic odds ratio preference optimization algorithm, ORPO, eliminating the necessity for an additional preference alignment phase. We demonstrate, both empirically and theoretically, that the odds ratio is a sensible choice for contrasting favored and disfavored styles during SFT across diverse model sizes from 125M to 7B. Specifically, fine-tuning Phi-2 (2.7B), Llama-2 (7B), and Mistral (7B) with ORPO on the UltraFeedback alone surpasses the performance of state-of-the-art language models with more than 7B and 13B parameters: achieving up to 12.20% on $\text{AlpacaEval}_{2.0}$ (Figure 1), 66.19% on IFEval (instruction-level loose, Table 6), and 7.32 in MT-Bench (Figure 12). We release code and model checkpoints for Mistral-ORPO-$\alpha$ (7B) and Mistral-ORPO-$\beta$ (7B).
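The odds-ratio contrast at the heart of ORPO admits a short sketch. The fragment below assumes length-normalized sequence log-probabilities for the favored and disfavored responses and omits the coefficient that weights this term against the SFT negative log-likelihood; treat it as an illustration of the formula, not the released implementation.

```python
import math

def orpo_odds_ratio_loss(logp_chosen: float, logp_rejected: float) -> float:
    """Odds-ratio term for one preference pair.

    Inputs are assumed to be length-normalized log-probabilities of the
    favored (chosen) and disfavored (rejected) responses, so that
    odds(y|x) = P(y|x) / (1 - P(y|x)) is well defined.
    """
    def log_odds(logp: float) -> float:
        return logp - math.log(1.0 - math.exp(logp))  # log(p / (1 - p))

    z = log_odds(logp_chosen) - log_odds(logp_rejected)
    return math.log1p(math.exp(-z))  # -log(sigmoid(z)) via log1p

# Example: the chosen response is more likely, so the penalty is small.
print(orpo_odds_ratio_loss(math.log(0.6), math.log(0.2)))
```

In full training, this value would be added to the usual SFT loss with a small positive weight, which is what makes the method monolithic: one objective, no reference model.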
2404.18533
Meng Li
Meng Li, Haoran Jin, Ruixuan Huang, Zhihao Xu, Defu Lian, Zijia Lin, Di Zhang, Xiting Wang
Evaluating Concept-based Explanations of Language Models: A Study on Faithfulness and Readability
null
null
null
null
cs.AI cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Despite the surprisingly high intelligence exhibited by Large Language Models (LLMs), we remain hesitant to fully deploy them in real-life applications, given their black-box nature. Concept-based explanations arise as a promising avenue for explaining what LLMs have learned, making them more transparent to humans. However, current evaluations of concepts tend to be heuristic and non-deterministic, e.g., case studies or human evaluation, hindering the development of the field. To bridge this gap, we approach concept-based explanation evaluation via faithfulness and readability. We first introduce a formal definition of concept that is generalizable to diverse concept-based explanations. Based on this, we quantify faithfulness via the difference in the output upon perturbation. We then provide an automatic measure for readability, by measuring the coherence of patterns that maximally activate a concept. This measure serves as a cost-effective and reliable substitute for human evaluation. Finally, based on measurement theory, we describe a meta-evaluation method for evaluating the above measures via reliability and validity, which can be generalized to other tasks as well. Extensive experimental analysis has been conducted to validate and inform the selection of concept evaluation measures.
[ { "created": "Mon, 29 Apr 2024 09:20:25 GMT", "version": "v1" }, { "created": "Tue, 30 Apr 2024 03:31:51 GMT", "version": "v2" } ]
2024-05-01
[ [ "Li", "Meng", "" ], [ "Jin", "Haoran", "" ], [ "Huang", "Ruixuan", "" ], [ "Xu", "Zhihao", "" ], [ "Lian", "Defu", "" ], [ "Lin", "Zijia", "" ], [ "Zhang", "Di", "" ], [ "Wang", "Xiting", "" ] ]
Despite the surprisingly high intelligence exhibited by Large Language Models (LLMs), we remain hesitant to fully deploy them in real-life applications, given their black-box nature. Concept-based explanations arise as a promising avenue for explaining what LLMs have learned, making them more transparent to humans. However, current evaluations of concepts tend to be heuristic and non-deterministic, e.g., case studies or human evaluation, hindering the development of the field. To bridge this gap, we approach concept-based explanation evaluation via faithfulness and readability. We first introduce a formal definition of concept that is generalizable to diverse concept-based explanations. Based on this, we quantify faithfulness via the difference in the output upon perturbation. We then provide an automatic measure for readability, by measuring the coherence of patterns that maximally activate a concept. This measure serves as a cost-effective and reliable substitute for human evaluation. Finally, based on measurement theory, we describe a meta-evaluation method for evaluating the above measures via reliability and validity, which can be generalized to other tasks as well. Extensive experimental analysis has been conducted to validate and inform the selection of concept evaluation measures.
1502.01220
Tarek Lahlou
Tarek A. Lahlou and Alan V. Oppenheim
Unveiling The Tree: A Convex Framework for Sparse Problems
null
null
10.1109/ICASSP.2015.7178687
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents a general framework for generating greedy algorithms for solving convex constraint satisfaction problems for sparse solutions by mapping the satisfaction problem into one of graph traversal on a rooted tree of unknown topology. For every pre-walk of the tree, an initial set of generally dense feasible solutions is processed in such a way that the sparsity of each solution increases with each generation unveiled. The specific computation performed at any particular child node is shown to correspond to an embedding of a polytope into the polytope received from that node's parent. Several issues related to pre-walk order selection, computational complexity and tractability, and the use of heuristic and/or side information are discussed. An example of a single-path, depth-first algorithm on a tree with randomized vertex reduction and a run-time path selection algorithm is presented in the context of sparse lowpass filter design.
[ { "created": "Wed, 4 Feb 2015 14:54:37 GMT", "version": "v1" } ]
2015-09-16
[ [ "Lahlou", "Tarek A.", "" ], [ "Oppenheim", "Alan V.", "" ] ]
This paper presents a general framework for generating greedy algorithms for solving convex constraint satisfaction problems for sparse solutions by mapping the satisfaction problem into one of graph traversal on a rooted tree of unknown topology. For every pre-walk of the tree, an initial set of generally dense feasible solutions is processed in such a way that the sparsity of each solution increases with each generation unveiled. The specific computation performed at any particular child node is shown to correspond to an embedding of a polytope into the polytope received from that node's parent. Several issues related to pre-walk order selection, computational complexity and tractability, and the use of heuristic and/or side information are discussed. An example of a single-path, depth-first algorithm on a tree with randomized vertex reduction and a run-time path selection algorithm is presented in the context of sparse lowpass filter design.
1808.05500
Mostafa Mehdipour Ghazi
Mostafa Mehdipour Ghazi, Mads Nielsen, Akshay Pai, M. Jorge Cardoso, Marc Modat, Sebastien Ourselin, Lauge S{\o}rensen
Robust training of recurrent neural networks to handle missing data for disease progression modeling
9 pages, 1 figure, MIDL conference
null
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Disease progression modeling (DPM) using longitudinal data is a challenging task in machine learning for healthcare that can provide clinicians with better tools for diagnosis and monitoring of disease. Existing DPM algorithms neglect temporal dependencies among measurements and make parametric assumptions about biomarker trajectories. In addition, they do not model multiple biomarkers jointly and need to align subjects' trajectories. In this paper, recurrent neural networks (RNNs) are utilized to address these issues. However, in many cases, longitudinal cohorts contain incomplete data, which hinders the application of standard RNNs and requires a pre-processing step such as imputation of the missing values. We, therefore, propose a generalized training rule for the most widely used RNN architecture, long short-term memory (LSTM) networks, that can handle missing values in both target and predictor variables. This algorithm is applied for modeling the progression of Alzheimer's disease (AD) using magnetic resonance imaging (MRI) biomarkers. The results show that the proposed LSTM algorithm achieves a lower mean absolute error for prediction of measurements across all considered MRI biomarkers compared to using standard LSTM networks with data imputation or using a regression-based DPM method. Moreover, applying linear discriminant analysis to the biomarkers' values predicted by the proposed algorithm results in a larger area under the receiver operating characteristic curve (AUC) for clinical diagnosis of AD compared to the same alternatives, and the AUC is comparable to state-of-the-art AUCs from a recent cross-sectional medical image classification challenge. This paper shows that built-in handling of missing values in LSTM network training paves the way for application of RNNs in disease progression modeling.
[ { "created": "Thu, 16 Aug 2018 14:09:22 GMT", "version": "v1" } ]
2018-08-17
[ [ "Ghazi", "Mostafa Mehdipour", "" ], [ "Nielsen", "Mads", "" ], [ "Pai", "Akshay", "" ], [ "Cardoso", "M. Jorge", "" ], [ "Modat", "Marc", "" ], [ "Ourselin", "Sebastien", "" ], [ "Sørensen", "Lauge", "" ] ]
Disease progression modeling (DPM) using longitudinal data is a challenging task in machine learning for healthcare that can provide clinicians with better tools for diagnosis and monitoring of disease. Existing DPM algorithms neglect temporal dependencies among measurements and make parametric assumptions about biomarker trajectories. In addition, they do not model multiple biomarkers jointly and need to align subjects' trajectories. In this paper, recurrent neural networks (RNNs) are utilized to address these issues. However, in many cases, longitudinal cohorts contain incomplete data, which hinders the application of standard RNNs and requires a pre-processing step such as imputation of the missing values. We, therefore, propose a generalized training rule for the most widely used RNN architecture, long short-term memory (LSTM) networks, that can handle missing values in both target and predictor variables. This algorithm is applied for modeling the progression of Alzheimer's disease (AD) using magnetic resonance imaging (MRI) biomarkers. The results show that the proposed LSTM algorithm achieves a lower mean absolute error for prediction of measurements across all considered MRI biomarkers compared to using standard LSTM networks with data imputation or using a regression-based DPM method. Moreover, applying linear discriminant analysis to the biomarkers' values predicted by the proposed algorithm results in a larger area under the receiver operating characteristic curve (AUC) for clinical diagnosis of AD compared to the same alternatives, and the AUC is comparable to state-of-the-art AUCs from a recent cross-sectional medical image classification challenge. This paper shows that built-in handling of missing values in LSTM network training paves the way for application of RNNs in disease progression modeling.
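The paper's generalized LSTM training rule is not reproduced here; the sketch below only illustrates the masking idea that any built-in handling of missing values must embody -- scoring predictions on observed targets alone so that missing entries contribute nothing to the loss or its gradient. Array names and shapes are invented for the example.

```python
import numpy as np

# Toy masked error: 4 subjects, 10 biomarker measurements each, with
# roughly 30% of target values missing at random.
rng = np.random.default_rng(1)
y_true = rng.normal(size=(4, 10))      # ground-truth biomarker values
y_pred = rng.normal(size=(4, 10))      # model outputs
observed = rng.random((4, 10)) > 0.3   # True where a value was measured

# Mean absolute error over observed entries only; in a training loop,
# missing targets would then produce no gradient signal.
mae = np.abs(y_true - y_pred)[observed].mean()
print(f"masked MAE over {observed.sum()} observed values: {mae:.3f}")
```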
2311.18496
Tengjin Weng
Tengjin Weng, Yang Shen, Zhidong Zhao, Zhiming Cheng, Shuai Wang
Accurate Segmentation of Optic Disc And Cup from Multiple Pseudo-labels by Noise-aware Learning
CSCWD 2024
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Optic disc and cup segmentation plays a crucial role in automating the screening and diagnosis of glaucoma. While data-driven convolutional neural networks (CNNs) show promise in this area, the inherent ambiguity of object and background boundaries in optic disc and cup segmentation leads to noisy annotations that impact model performance. To address this, we propose an innovative label-denoising method, the Multiple Pseudo-labels Noise-aware Network (MPNN), for accurate optic disc and cup segmentation. Specifically, the Multiple Pseudo-labels Generation and Guided Denoising (MPGGD) module generates pseudo-labels with multiple differently initialized networks trained on true labels, and the pixel-level consensus information extracted from these pseudo-labels guides the differentiation of clean pixels from noisy pixels. The training framework of the MPNN is built on a teacher-student architecture to learn segmentation from clean pixels and noisy pixels. In particular, this framework adeptly leverages (i) reliable and fundamental insight from clean pixels and (ii) the supplementary knowledge within noisy pixels via multiple perturbation-based unsupervised consistency. Compared to other label-denoising methods, comprehensive experimental results on the RIGA dataset demonstrate our method's excellent performance. The code is available at https://github.com/wwwtttjjj/MPNN
[ { "created": "Thu, 30 Nov 2023 12:17:16 GMT", "version": "v1" }, { "created": "Fri, 15 Mar 2024 08:38:04 GMT", "version": "v2" } ]
2024-03-18
[ [ "Weng", "Tengjin", "" ], [ "Shen", "Yang", "" ], [ "Zhao", "Zhidong", "" ], [ "Cheng", "Zhiming", "" ], [ "Wang", "Shuai", "" ] ]
Optic disc and cup segmentation plays a crucial role in automating the screening and diagnosis of glaucoma. While data-driven convolutional neural networks (CNNs) show promise in this area, the inherent ambiguity of object and background boundaries in optic disc and cup segmentation leads to noisy annotations that impact model performance. To address this, we propose an innovative label-denoising method, the Multiple Pseudo-labels Noise-aware Network (MPNN), for accurate optic disc and cup segmentation. Specifically, the Multiple Pseudo-labels Generation and Guided Denoising (MPGGD) module generates pseudo-labels with multiple differently initialized networks trained on true labels, and the pixel-level consensus information extracted from these pseudo-labels guides the differentiation of clean pixels from noisy pixels. The training framework of the MPNN is built on a teacher-student architecture to learn segmentation from clean pixels and noisy pixels. In particular, this framework adeptly leverages (i) reliable and fundamental insight from clean pixels and (ii) the supplementary knowledge within noisy pixels via multiple perturbation-based unsupervised consistency. Compared to other label-denoising methods, comprehensive experimental results on the RIGA dataset demonstrate our method's excellent performance. The code is available at https://github.com/wwwtttjjj/MPNN
2008.08193
Sudip Poddar
Sudip Poddar, Anirban Mukhopadhyay
EXCLUVIS: A MATLAB GUI Software for Comparative Study of Clustering and Visualization of Gene Expression Data
19 pages, 18 figures
null
null
null
cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Clustering is a popular data mining technique that aims to partition an input space into multiple homogeneous regions. There exist several clustering algorithms in the literature. The performance of a clustering algorithm depends on its input parameters, which can substantially affect the behavior of the algorithm. Cluster validity indices determine the partitioning that best fits the underlying data. In bioinformatics, microarray gene expression technology has made it possible to measure the expression levels of thousands of genes simultaneously. Many genomic studies, which aim to analyze the functions of some genes, rely heavily on some clustering technique for grouping similarly expressed genes in one cluster or partitioning tissue samples based on similar expression values of genes. In this work, an application package called EXCLUVIS (gene EXpression data CLUstering and VISualization) has been developed using the MATLAB Graphical User Interface (GUI) environment for analyzing the performance of different clustering algorithms on gene expression datasets. In this application package, the user needs to select a number of parameters, such as internal validity indices, external validity indices, and the number of clusters, from the active windows for evaluating the performance of the clustering algorithms. EXCLUVIS compares the performances of K-means, fuzzy C-means, hierarchical clustering and multiobjective evolutionary clustering algorithms. Heatmap and cluster profile plots are used for visualizing the results. EXCLUVIS allows users to easily assess the goodness of clustering solutions and provides visual representations of the clustering outcomes.
[ { "created": "Tue, 18 Aug 2020 23:34:57 GMT", "version": "v1" } ]
2020-08-20
[ [ "Poddar", "Sudip", "" ], [ "Mukhopadhyay", "Anirban", "" ] ]
Clustering is a popular data mining technique that aims to partition an input space into multiple homogeneous regions. There exist several clustering algorithms in the literature. The performance of a clustering algorithm depends on its input parameters, which can substantially affect the behavior of the algorithm. Cluster validity indices determine the partitioning that best fits the underlying data. In bioinformatics, microarray gene expression technology has made it possible to measure the expression levels of thousands of genes simultaneously. Many genomic studies, which aim to analyze the functions of some genes, rely heavily on some clustering technique for grouping similarly expressed genes in one cluster or partitioning tissue samples based on similar expression values of genes. In this work, an application package called EXCLUVIS (gene EXpression data CLUstering and VISualization) has been developed using the MATLAB Graphical User Interface (GUI) environment for analyzing the performance of different clustering algorithms on gene expression datasets. In this application package, the user needs to select a number of parameters, such as internal validity indices, external validity indices, and the number of clusters, from the active windows for evaluating the performance of the clustering algorithms. EXCLUVIS compares the performances of K-means, fuzzy C-means, hierarchical clustering and multiobjective evolutionary clustering algorithms. Heatmap and cluster profile plots are used for visualizing the results. EXCLUVIS allows users to easily assess the goodness of clustering solutions and provides visual representations of the clustering outcomes.
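EXCLUVIS itself is a MATLAB GUI, so the Python sketch below is only a rough analogue of the comparison it automates: scoring clustering solutions with an internal validity index (silhouette) and an external one (adjusted Rand index) on a synthetic expression matrix. All data and parameter choices here are assumptions for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score, adjusted_rand_score

# Synthetic "expression matrix": 150 genes x 20 samples drawn from
# three shifted Gaussian groups, so the true number of clusters is 3.
rng = np.random.default_rng(0)
expr = np.vstack([rng.normal(loc=m, size=(50, 20)) for m in (0.0, 2.0, 4.0)])
true_labels = np.repeat([0, 1, 2], 50)

for k in (2, 3, 4):
    pred = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(expr)
    print(f"k={k}  silhouette={silhouette_score(expr, pred):.3f}  "
          f"ARI={adjusted_rand_score(true_labels, pred):.3f}")
```

With well-separated groups, both indices should peak at k=3, mirroring how a validity index is used to pick the partitioning that best fits the data.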
cs/0011038
Ming-Yang Kao
Miklos Csuros, Ming-Yang Kao
Provably Fast and Accurate Recovery of Evolutionary Trees through Harmonic Greedy Triplets
The paper will appear in SIAM Journal on Computing
null
null
null
cs.DS cs.LG
null
We give a greedy learning algorithm for reconstructing an evolutionary tree based on a certain harmonic average on triplets of terminal taxa. After the pairwise distances between terminal taxa are estimated from sequence data, the algorithm runs in O(n^2) time using O(n) work space, where n is the number of terminal taxa. These time and space complexities are optimal in the sense that the size of an input distance matrix is n^2 and the size of an output tree is n. Moreover, in the Jukes-Cantor model of evolution, the algorithm recovers the correct tree topology with high probability using sample sequences of length polynomial in (1) n, (2) the logarithm of the error probability, and (3) the inverses of two small parameters.
[ { "created": "Thu, 23 Nov 2000 14:48:53 GMT", "version": "v1" } ]
2007-05-23
[ [ "Csuros", "Miklos", "" ], [ "Kao", "Ming-Yang", "" ] ]
We give a greedy learning algorithm for reconstructing an evolutionary tree based on a certain harmonic average on triplets of terminal taxa. After the pairwise distances between terminal taxa are estimated from sequence data, the algorithm runs in O(n^2) time using O(n) work space, where n is the number of terminal taxa. These time and space complexities are optimal in the sense that the size of an input distance matrix is n^2 and the size of an output tree is n. Moreover, in the Jukes-Cantor model of evolution, the algorithm recovers the correct tree topology with high probability using sample sequences of length polynomial in (1) n, (2) the logarithm of the error probability, and (3) the inverses of two small parameters.
1207.1417
Michal Rosen-Zvi
Michal Rosen-Zvi, Michael I. Jordan, Alan Yuille
The DLR Hierarchy of Approximate Inference
Appears in Proceedings of the Twenty-First Conference on Uncertainty in Artificial Intelligence (UAI2005)
null
null
UAI-P-2005-PG-493-500
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a hierarchy for approximate inference based on the Dobrushin, Lanford, Ruelle (DLR) equations. This hierarchy includes existing algorithms, such as belief propagation, and also motivates novel algorithms such as factorized neighbors (FN) algorithms and variants of mean field (MF) algorithms. In particular, we show that extrema of the Bethe free energy correspond to approximate solutions of the DLR equations. In addition, we demonstrate a close connection between these approximate algorithms and Gibbs sampling. Finally, we compare and contrast various of the algorithms in the DLR hierarchy on spin-glass problems. The experiments show that algorithms higher up in the hierarchy give more accurate results when they converge but tend to be less stable.
[ { "created": "Wed, 4 Jul 2012 16:25:12 GMT", "version": "v1" } ]
2015-03-20
[ [ "Rosen-Zvi", "Michal", "" ], [ "Jordan", "Michael I.", "" ], [ "Yuille", "Alan", "" ] ]
We propose a hierarchy for approximate inference based on the Dobrushin, Lanford, Ruelle (DLR) equations. This hierarchy includes existing algorithms, such as belief propagation, and also motivates novel algorithms such as factorized neighbors (FN) algorithms and variants of mean field (MF) algorithms. In particular, we show that extrema of the Bethe free energy correspond to approximate solutions of the DLR equations. In addition, we demonstrate a close connection between these approximate algorithms and Gibbs sampling. Finally, we compare and contrast various of the algorithms in the DLR hierarchy on spin-glass problems. The experiments show that algorithms higher up in the hierarchy give more accurate results when they converge but tend to be less stable.