id stringlengths 9 10 | submitter stringlengths 1 64 ⌀ | authors stringlengths 4 20.7k | title stringlengths 4 246 | comments stringlengths 1 523 ⌀ | journal-ref stringlengths 4 404 ⌀ | doi stringlengths 11 153 ⌀ | report-no stringlengths 2 254 ⌀ | categories stringlengths 5 98 | license stringclasses 9 values | orig_abstract stringlengths 14 3.35k | versions listlengths 1 60 | update_date stringlengths 10 10 | authors_parsed listlengths 1 1.35k | abstract stringlengths 11 3.34k |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2310.17218 | Jiachen Li | Jiachen Li and Xiaojin Gong | Prototypical Contrastive Learning-based CLIP Fine-tuning for Object
Re-identification | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | This work aims to adapt large-scale pre-trained vision-language models, such
as contrastive language-image pretraining (CLIP), to enhance the performance of
object reidentification (Re-ID) across various supervision settings. Although
prompt learning has enabled a recent work named CLIP-ReID to achieve promising
performance, the underlying mechanisms and the necessity of prompt learning
remain unclear due to the absence of semantic labels in ReID tasks. In this
work, we first analyze the role of prompt learning in CLIP-ReID and identify its
limitations. Based on our investigations, we propose a simple yet effective
approach to adapt CLIP for supervised object Re-ID. Our approach directly
fine-tunes the image encoder of CLIP using a prototypical contrastive learning
(PCL) loss, eliminating the need for prompt learning. Experimental results on
both person and vehicle Re-ID datasets demonstrate the competitiveness of our
method compared to CLIP-ReID. Furthermore, we extend our PCL-based CLIP
fine-tuning approach to unsupervised scenarios, where we achieve
state-of-the-art performance.
| [
{
"created": "Thu, 26 Oct 2023 08:12:53 GMT",
"version": "v1"
}
] | 2023-10-27 | [
[
"Li",
"Jiachen",
""
],
[
"Gong",
"Xiaojin",
""
]
] | This work aims to adapt large-scale pre-trained vision-language models, such as contrastive language-image pretraining (CLIP), to enhance the performance of object reidentification (Re-ID) across various supervision settings. Although prompt learning has enabled a recent work named CLIP-ReID to achieve promising performance, the underlying mechanisms and the necessity of prompt learning remain unclear due to the absence of semantic labels in ReID tasks. In this work, we first analyze the role of prompt learning in CLIP-ReID and identify its limitations. Based on our investigations, we propose a simple yet effective approach to adapt CLIP for supervised object Re-ID. Our approach directly fine-tunes the image encoder of CLIP using a prototypical contrastive learning (PCL) loss, eliminating the need for prompt learning. Experimental results on both person and vehicle Re-ID datasets demonstrate the competitiveness of our method compared to CLIP-ReID. Furthermore, we extend our PCL-based CLIP fine-tuning approach to unsupervised scenarios, where we achieve state-of-the-art performance. |
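The PCL fine-tuning idea in the abstract above lends itself to a compact sketch. The following is a minimal illustration, not the authors' implementation: class prototypes are taken as the mean of the L2-normalized embeddings per identity, and each embedding is scored against all prototypes with a temperature-scaled softmax; the temperature value is an assumption.

```python
import numpy as np

def pcl_loss(feats, labels, tau=0.07):
    """Prototypical contrastive loss: each embedding is pulled toward its
    class prototype (the mean embedding of its class) and pushed away from
    the other prototypes via a softmax over prototype similarities."""
    feats = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    classes = np.unique(labels)
    protos = np.stack([feats[labels == c].mean(axis=0) for c in classes])
    protos = protos / np.linalg.norm(protos, axis=1, keepdims=True)
    logits = feats @ protos.T / tau                # (N, C) similarities
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    idx = np.searchsorted(classes, labels)         # label -> prototype row
    return -log_probs[np.arange(len(labels)), idx].mean()
```

Tightly clustered embeddings (each close to its own prototype) should yield a lower loss than loosely clustered ones, which is the signal the fine-tuning exploits.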
2402.13254 | Jianrui Zhang | Jianrui Zhang, Mu Cai, Tengyang Xie, Yong Jae Lee | CounterCurate: Enhancing Physical and Semantic Visio-Linguistic
Compositional Reasoning via Counterfactual Examples | 15 pages, 6 figures, 12 tables, Project Page:
https://countercurate.github.io/ | null | null | null | cs.CV cs.AI cs.CL cs.LG | http://creativecommons.org/licenses/by/4.0/ | We propose CounterCurate, a framework to comprehensively improve the
visio-linguistic compositional reasoning capability for both contrastive and
generative multimodal models. In particular, we identify two critical
under-explored problems: the neglect of the physically grounded reasoning
(counting and position understanding) and the potential of using highly capable
text and image generation models for semantic counterfactual fine-tuning. Our
work pioneers an approach that addresses these gaps. We first spotlight the
near-chance performance of multimodal models like CLIP and LLaVA in physically
grounded compositional reasoning. We then apply simple data augmentation using
grounded image generation model GLIGEN to generate fine-tuning data, resulting
in significant performance improvements: +33% and +37% for CLIP and LLaVA,
respectively, on our newly curated Flickr30k-Positions benchmark. Moreover, we
exploit the capabilities of high-performing text generation and image
generation models, specifically GPT-4V and DALLE-3, to curate challenging
semantic counterfactuals, thereby further enhancing compositional reasoning
capabilities on benchmarks such as SugarCrepe, where CounterCurate outperforms
GPT-4V. To facilitate future research, we release our code, dataset, benchmark,
and checkpoints at https://countercurate.github.io.
| [
{
"created": "Tue, 20 Feb 2024 18:59:55 GMT",
"version": "v1"
},
{
"created": "Tue, 12 Mar 2024 17:59:56 GMT",
"version": "v2"
},
{
"created": "Mon, 10 Jun 2024 17:59:55 GMT",
"version": "v3"
},
{
"created": "Wed, 12 Jun 2024 17:59:55 GMT",
"version": "v4"
}
] | 2024-06-13 | [
[
"Zhang",
"Jianrui",
""
],
[
"Cai",
"Mu",
""
],
[
"Xie",
"Tengyang",
""
],
[
"Lee",
"Yong Jae",
""
]
] | We propose CounterCurate, a framework to comprehensively improve the visio-linguistic compositional reasoning capability for both contrastive and generative multimodal models. In particular, we identify two critical under-explored problems: the neglect of the physically grounded reasoning (counting and position understanding) and the potential of using highly capable text and image generation models for semantic counterfactual fine-tuning. Our work pioneers an approach that addresses these gaps. We first spotlight the near-chance performance of multimodal models like CLIP and LLaVA in physically grounded compositional reasoning. We then apply simple data augmentation using grounded image generation model GLIGEN to generate fine-tuning data, resulting in significant performance improvements: +33% and +37% for CLIP and LLaVA, respectively, on our newly curated Flickr30k-Positions benchmark. Moreover, we exploit the capabilities of high-performing text generation and image generation models, specifically GPT-4V and DALLE-3, to curate challenging semantic counterfactuals, thereby further enhancing compositional reasoning capabilities on benchmarks such as SugarCrepe, where CounterCurate outperforms GPT-4V. To facilitate future research, we release our code, dataset, benchmark, and checkpoints at https://countercurate.github.io. |
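The positional hard negatives described above can be illustrated with a toy caption transform. This is a generic sketch, not the CounterCurate pipeline: the word list is an assumption, and the real framework also regenerates images with GLIGEN rather than only editing text.

```python
import re

# Illustrative spatial-word swaps for building counterfactual captions.
SWAP = {"left": "right", "right": "left", "above": "below", "below": "above"}

def positional_counterfactual(caption):
    """Swap spatial relation words to produce a hard negative caption,
    preserving the capitalization of each replaced word."""
    def repl(m):
        w = m.group(0)
        s = SWAP[w.lower()]
        return s.capitalize() if w[0].isupper() else s
    return re.sub(r"\b(left|right|above|below)\b", repl, caption,
                  flags=re.IGNORECASE)
```

Pairing each original caption with such a counterfactual gives contrastive or generative models a minimal-edit negative that targets positional understanding.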
2404.17340 | Chengliang Liu | Chengliang Liu, Jie Wen, Yabo Liu, Chao Huang, Zhihao Wu, Xiaoling
Luo, Yong Xu | Masked Two-channel Decoupling Framework for Incomplete Multi-view Weak
Multi-label Learning | Accepted at NeurIPS 2023. Email: liucl1996@163.com | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multi-view learning has become a popular research topic in recent years, but
research on the cross-application of classic multi-label classification and
multi-view learning is still in its early stages. In this paper, we focus on
the complex yet highly realistic task of incomplete multi-view weak multi-label
learning and propose a masked two-channel decoupling framework based on deep
neural networks to solve this problem. The core innovation of our method lies
in decoupling the single-channel view-level representation, which is common in
deep multi-view learning methods, into a shared representation and a
view-proprietary representation. We also design a cross-channel contrastive
loss to enhance the semantic property of the two channels. Additionally, we
exploit supervised information to design a label-guided graph regularization
loss, helping the extracted embedding features preserve the geometric structure
among samples. Inspired by the success of masking mechanisms in image and text
analysis, we develop a random fragment masking strategy for vector features to
improve the learning ability of encoders. Finally, it is important to emphasize
that our model is fully adaptable to arbitrary view and label absences while
also performing well on the ideal full data. We have conducted sufficient and
convincing experiments to confirm the effectiveness and advancement of our
model.
| [
{
"created": "Fri, 26 Apr 2024 11:39:50 GMT",
"version": "v1"
}
] | 2024-04-29 | [
[
"Liu",
"Chengliang",
""
],
[
"Wen",
"Jie",
""
],
[
"Liu",
"Yabo",
""
],
[
"Huang",
"Chao",
""
],
[
"Wu",
"Zhihao",
""
],
[
"Luo",
"Xiaoling",
""
],
[
"Xu",
"Yong",
""
]
] | Multi-view learning has become a popular research topic in recent years, but research on the cross-application of classic multi-label classification and multi-view learning is still in its early stages. In this paper, we focus on the complex yet highly realistic task of incomplete multi-view weak multi-label learning and propose a masked two-channel decoupling framework based on deep neural networks to solve this problem. The core innovation of our method lies in decoupling the single-channel view-level representation, which is common in deep multi-view learning methods, into a shared representation and a view-proprietary representation. We also design a cross-channel contrastive loss to enhance the semantic property of the two channels. Additionally, we exploit supervised information to design a label-guided graph regularization loss, helping the extracted embedding features preserve the geometric structure among samples. Inspired by the success of masking mechanisms in image and text analysis, we develop a random fragment masking strategy for vector features to improve the learning ability of encoders. Finally, it is important to emphasize that our model is fully adaptable to arbitrary view and label absences while also performing well on the ideal full data. We have conducted sufficient and convincing experiments to confirm the effectiveness and advancement of our model. |
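The random fragment masking strategy for vector features mentioned above can be sketched generically; the fragment length and count here are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def mask_random_fragments(x, frag_len=4, n_frags=2, rng=None):
    """Zero out a few contiguous fragments of a 1-D feature vector, an
    analogue of token masking for non-sequential vector features."""
    rng = np.random.default_rng() if rng is None else rng
    x = x.copy()                                   # leave the input intact
    for _ in range(n_frags):
        start = rng.integers(0, len(x) - frag_len + 1)
        x[start:start + frag_len] = 0.0
    return x
```

Training an encoder to produce useful representations from such partially masked inputs is what the strategy aims to encourage.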
2005.12743 | Arushi Gupta | Arushi Gupta | Inherent Noise in Gradient Based Methods | null | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Previous work has examined the ability of larger capacity neural networks to
generalize better than smaller ones, even without explicit regularizers, by
analyzing gradient based algorithms such as GD and SGD. The presence of noise
and its effect on robustness to parameter perturbations has been linked to
generalization. We examine a property of GD and SGD, namely that instead of
iterating through all scalar weights in the network and updating them one by
one, GD (and SGD) updates all the parameters at the same time. As a result,
each parameter $w^i$ calculates its partial derivative at the stale parameter
$\mathbf{w_t}$, but then suffers loss $\hat{L}(\mathbf{w_{t+1}})$. We show that
this causes noise to be introduced into the optimization. We find that this
noise penalizes models that are sensitive to perturbations in the weights. We
find that penalties are most pronounced for batches that are currently being
used to update, and are higher for larger models.
| [
{
"created": "Tue, 26 May 2020 14:12:22 GMT",
"version": "v1"
}
] | 2020-05-27 | [
[
"Gupta",
"Arushi",
""
]
] | Previous work has examined the ability of larger capacity neural networks to generalize better than smaller ones, even without explicit regularizers, by analyzing gradient based algorithms such as GD and SGD. The presence of noise and its effect on robustness to parameter perturbations has been linked to generalization. We examine a property of GD and SGD, namely that instead of iterating through all scalar weights in the network and updating them one by one, GD (and SGD) updates all the parameters at the same time. As a result, each parameter $w^i$ calculates its partial derivative at the stale parameter $\mathbf{w_t}$, but then suffers loss $\hat{L}(\mathbf{w_{t+1}})$. We show that this causes noise to be introduced into the optimization. We find that this noise penalizes models that are sensitive to perturbations in the weights. We find that penalties are most pronounced for batches that are currently being used to update, and are higher for larger models. |
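The stale-gradient property described above (every coordinate differentiates at $\mathbf{w_t}$ while suffering loss at $\mathbf{w_{t+1}}$) can be made concrete on a toy coupled quadratic. This numerical sketch, with an arbitrary matrix and step size of my choosing, contrasts a simultaneous GD step with a coordinate-by-coordinate sweep in which later coordinates see already-updated values:

```python
import numpy as np

def loss(w, A):
    return 0.5 * w @ A @ w

def grad(w, A):
    return A @ w

def simultaneous_step(w, A, lr):
    # GD/SGD style: every parameter uses the gradient at the stale point w.
    return w - lr * grad(w, A)

def coordinate_sweep(w, A, lr):
    # Hypothetical alternative: update one scalar weight at a time, so each
    # later coordinate differentiates at the current (fresh) point.
    w = w.copy()
    for i in range(len(w)):
        w[i] -= lr * (A @ w)[i]
    return w

A = np.array([[2.0, 0.9], [0.9, 2.0]])  # off-diagonal coupling -> they differ
w0 = np.array([1.0, 1.0])
lr = 0.1
w_sim = simultaneous_step(w0, A, lr)
w_coord = coordinate_sweep(w0, A, lr)
```

The two iterates differ exactly because of the coupling term; that gap is the kind of implicit noise the abstract attributes to simultaneous updates.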
2110.01686 | Lam Duc Nguyen | Beatriz Soret, Lam D. Nguyen, Jan Seeger, Arne Br\"oring, Chaouki Ben
Issaid, Sumudu Samarakoon, Anis El Gabli, Vivek Kulkarni, Mehdi Bennis, and
Petar Popovski | Learning, Computing, and Trustworthiness in Intelligent IoT
Environments: Performance-Energy Tradeoffs | Accepted for publication in IEEE Transactions on Green Communication
and Networking | IEEE Transactions on Green Communications and Networking 2021 | 10.1109/TGCN.2021.3138792 | null | cs.DC cs.AI cs.NI | http://creativecommons.org/licenses/by/4.0/ | An Intelligent IoT Environment (iIoTe) is comprised of heterogeneous devices
that can collaboratively execute semi-autonomous IoT applications, examples of
which include highly automated manufacturing cells or autonomously interacting
harvesting machines. Energy efficiency is key in such edge environments, since
they are often based on an infrastructure that consists of wireless and
battery-run devices, e.g., e-tractors, drones, Automated Guided Vehicles (AGVs)
and robots. The total energy consumption draws contributions from multiple iIoTe
technologies that enable edge computing and communication, distributed
learning, as well as distributed ledgers and smart contracts. This paper
provides a state-of-the-art overview of these technologies and illustrates
their functionality and performance, with special attention to the tradeoff
among resources, latency, privacy and energy consumption. Finally, the paper
provides a vision for integrating these enabling technologies in
energy-efficient iIoTe and a roadmap to address the open research challenges.
| [
{
"created": "Mon, 4 Oct 2021 19:41:42 GMT",
"version": "v1"
},
{
"created": "Fri, 24 Dec 2021 08:40:23 GMT",
"version": "v2"
}
] | 2022-02-23 | [
[
"Soret",
"Beatriz",
""
],
[
"Nguyen",
"Lam D.",
""
],
[
"Seeger",
"Jan",
""
],
[
"Bröring",
"Arne",
""
],
[
"Issaid",
"Chaouki Ben",
""
],
[
"Samarakoon",
"Sumudu",
""
],
[
"Gabli",
"Anis El",
""
],
[
"Kulkarni",
"Vivek",
""
],
[
"Bennis",
"Mehdi",
""
],
[
"Popovski",
"Petar",
""
]
] | An Intelligent IoT Environment (iIoTe) is comprised of heterogeneous devices that can collaboratively execute semi-autonomous IoT applications, examples of which include highly automated manufacturing cells or autonomously interacting harvesting machines. Energy efficiency is key in such edge environments, since they are often based on an infrastructure that consists of wireless and battery-run devices, e.g., e-tractors, drones, Automated Guided Vehicles (AGVs) and robots. The total energy consumption draws contributions from multiple iIoTe technologies that enable edge computing and communication, distributed learning, as well as distributed ledgers and smart contracts. This paper provides a state-of-the-art overview of these technologies and illustrates their functionality and performance, with special attention to the tradeoff among resources, latency, privacy and energy consumption. Finally, the paper provides a vision for integrating these enabling technologies in energy-efficient iIoTe and a roadmap to address the open research challenges. |
2303.12079 | Dongdong Chen | Junke Wang and Dongdong Chen and Zuxuan Wu and Chong Luo and Xiyang
Dai and Lu Yuan and Yu-Gang Jiang | OmniTracker: Unifying Object Tracking by Tracking-with-Detection | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Object tracking (OT) aims to estimate the positions of target objects in a
video sequence. Depending on whether the initial states of target objects are
specified by provided annotations in the first frame or the categories, OT
could be classified as instance tracking (e.g., SOT and VOS) and category
tracking (e.g., MOT, MOTS, and VIS) tasks. Combining the advantages of the best
practices developed in both communities, we propose a novel
tracking-with-detection paradigm, where tracking supplements appearance priors
for detection and detection provides tracking with candidate bounding boxes for
association. Equipped with such a design, a unified tracking model,
OmniTracker, is further presented to resolve all the tracking tasks with a
fully shared network architecture, model weights, and inference pipeline.
Extensive experiments on 7 tracking datasets, including LaSOT, TrackingNet,
DAVIS16-17, MOT17, MOTS20, and YTVIS19, demonstrate that OmniTracker achieves
on-par or even better results than both task-specific and unified tracking
models.
| [
{
"created": "Tue, 21 Mar 2023 17:59:57 GMT",
"version": "v1"
}
] | 2023-03-22 | [
[
"Wang",
"Junke",
""
],
[
"Chen",
"Dongdong",
""
],
[
"Wu",
"Zuxuan",
""
],
[
"Luo",
"Chong",
""
],
[
"Dai",
"Xiyang",
""
],
[
"Yuan",
"Lu",
""
],
[
"Jiang",
"Yu-Gang",
""
]
] | Object tracking (OT) aims to estimate the positions of target objects in a video sequence. Depending on whether the initial states of target objects are specified by provided annotations in the first frame or the categories, OT could be classified as instance tracking (e.g., SOT and VOS) and category tracking (e.g., MOT, MOTS, and VIS) tasks. Combining the advantages of the best practices developed in both communities, we propose a novel tracking-with-detection paradigm, where tracking supplements appearance priors for detection and detection provides tracking with candidate bounding boxes for association. Equipped with such a design, a unified tracking model, OmniTracker, is further presented to resolve all the tracking tasks with a fully shared network architecture, model weights, and inference pipeline. Extensive experiments on 7 tracking datasets, including LaSOT, TrackingNet, DAVIS16-17, MOT17, MOTS20, and YTVIS19, demonstrate that OmniTracker achieves on-par or even better results than both task-specific and unified tracking models. |
2006.09347 | Paul Vicol | Jens Behrmann, Paul Vicol, Kuan-Chieh Wang, Roger Grosse,
J\"orn-Henrik Jacobsen | Understanding and Mitigating Exploding Inverses in Invertible Neural
Networks | AISTATS 2021 | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Invertible neural networks (INNs) have been used to design generative models,
implement memory-saving gradient computation, and solve inverse problems. In
this work, we show that commonly-used INN architectures suffer from exploding
inverses and are thus prone to becoming numerically non-invertible. Across a
wide range of INN use-cases, we reveal failures including the non-applicability
of the change-of-variables formula on in- and out-of-distribution (OOD) data,
incorrect gradients for memory-saving backprop, and the inability to sample
from normalizing flow models. We further derive bi-Lipschitz properties of
atomic building blocks of common architectures. These insights into the
stability of INNs then provide ways forward to remedy these failures. For tasks
where local invertibility is sufficient, like memory-saving backprop, we
propose a flexible and efficient regularizer. For problems where global
invertibility is necessary, such as applying normalizing flows on OOD data, we
show the importance of designing stable INN building blocks.
| [
{
"created": "Tue, 16 Jun 2020 17:44:28 GMT",
"version": "v1"
},
{
"created": "Fri, 24 Dec 2021 17:26:10 GMT",
"version": "v2"
}
] | 2021-12-28 | [
[
"Behrmann",
"Jens",
""
],
[
"Vicol",
"Paul",
""
],
[
"Wang",
"Kuan-Chieh",
""
],
[
"Grosse",
"Roger",
""
],
[
"Jacobsen",
"Jörn-Henrik",
""
]
] | Invertible neural networks (INNs) have been used to design generative models, implement memory-saving gradient computation, and solve inverse problems. In this work, we show that commonly-used INN architectures suffer from exploding inverses and are thus prone to becoming numerically non-invertible. Across a wide range of INN use-cases, we reveal failures including the non-applicability of the change-of-variables formula on in- and out-of-distribution (OOD) data, incorrect gradients for memory-saving backprop, and the inability to sample from normalizing flow models. We further derive bi-Lipschitz properties of atomic building blocks of common architectures. These insights into the stability of INNs then provide ways forward to remedy these failures. For tasks where local invertibility is sufficient, like memory-saving backprop, we propose a flexible and efficient regularizer. For problems where global invertibility is necessary, such as applying normalizing flows on OOD data, we show the importance of designing stable INN building blocks. |
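The exploding-inverse failure mode described above is easy to reproduce on a single affine coupling layer. In this hypothetical sketch, a fixed scalar scale and shift stand in for the learned networks s(·), t(·); an out-of-distribution input overflows the exponential scale in float32, and the analytic inverse returns non-finite values even though the layer is invertible on paper:

```python
import numpy as np

def coupling_forward(x, s, t):
    # Affine coupling: pass x1 through, scale/shift x2 conditioned on x1.
    x1, x2 = x[:1], x[1:]
    return np.concatenate([x1, x2 * np.exp(s * x1) + t * x1])

def coupling_inverse(y, s, t):
    # Analytic inverse of the coupling above.
    y1, y2 = y[:1], y[1:]
    return np.concatenate([y1, (y2 - t * y1) * np.exp(-s * y1)])

s, t = np.float32(5.0), np.float32(1.0)
x_in = np.array([0.1, 2.0], dtype=np.float32)    # in-distribution: fine
x_ood = np.array([30.0, 2.0], dtype=np.float32)  # OOD: exp(150) overflows

with np.errstate(over="ignore", invalid="ignore"):
    ok = coupling_inverse(coupling_forward(x_in, s, t), s, t)
    bad = coupling_inverse(coupling_forward(x_ood, s, t), s, t)
```

Bounding the scale (or designing bi-Lipschitz blocks, as the paper advocates) prevents exactly this kind of numerical non-invertibility.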
2402.01765 | Ivan P Yamshchikov | Aleksandra Sorokovikova, Natalia Fedorova, Sharwin Rezagholi, Ivan P.
Yamshchikov | LLMs Simulate Big Five Personality Traits: Further Evidence | null | null | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | An empirical investigation into the simulation of the Big Five personality
traits by large language models (LLMs), namely Llama2, GPT4, and Mixtral, is
presented. We analyze the personality traits simulated by these models and
their stability. This contributes to the broader understanding of the
capabilities of LLMs to simulate personality traits and the respective
implications for personalized human-computer interaction.
| [
{
"created": "Wed, 31 Jan 2024 13:45:25 GMT",
"version": "v1"
}
] | 2024-02-06 | [
[
"Sorokovikova",
"Aleksandra",
""
],
[
"Fedorova",
"Natalia",
""
],
[
"Rezagholi",
"Sharwin",
""
],
[
"Yamshchikov",
"Ivan P.",
""
]
] | An empirical investigation into the simulation of the Big Five personality traits by large language models (LLMs), namely Llama2, GPT4, and Mixtral, is presented. We analyze the personality traits simulated by these models and their stability. This contributes to the broader understanding of the capabilities of LLMs to simulate personality traits and the respective implications for personalized human-computer interaction. |
1905.03691 | Wei Yan | Wei Yan, Yiting shao, Shan Liu, Thomas H Li, Zhu Li, Ge Li | Deep AutoEncoder-based Lossy Geometry Compression for Point Clouds | null | null | null | null | cs.CV cs.MM eess.IV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Point cloud is a fundamental 3D representation which is widely used in real
world applications such as autonomous driving. As a newly-developed media
format which is characterized by complexity and irregularity, point cloud
creates a need for compression algorithms which are more flexible than existing
codecs. Recently, autoencoders (AEs) have shown their effectiveness in many
visual analysis tasks as well as image compression, which inspires us to employ
them in point cloud compression. In this paper, we propose a general
autoencoder-based architecture for lossy geometry point cloud compression. To
the best of our knowledge, it is the first autoencoder-based geometry
compression codec that directly takes point clouds as input rather than voxel
grids or collections of images. Compared with handcrafted codecs, this approach
adapts much more quickly to previously unseen media contents and media formats,
meanwhile achieving competitive performance. Our architecture consists of a
pointnet-based encoder, a uniform quantizer, an entropy estimation block and a
nonlinear synthesis transformation module. In lossy geometry compression of
point cloud, results show that the proposed method outperforms the test model
for categories 1 and 3 (TMC13) published by the MPEG-3DG group at the 125th
meeting, and on average a 73.15\% BD-rate gain is achieved.
| [
{
"created": "Thu, 18 Apr 2019 02:44:50 GMT",
"version": "v1"
}
] | 2019-05-10 | [
[
"Yan",
"Wei",
""
],
[
"shao",
"Yiting",
""
],
[
"Liu",
"Shan",
""
],
[
"Li",
"Thomas H",
""
],
[
"Li",
"Zhu",
""
],
[
"Li",
"Ge",
""
]
] | Point cloud is a fundamental 3D representation which is widely used in real world applications such as autonomous driving. As a newly-developed media format which is characterized by complexity and irregularity, point cloud creates a need for compression algorithms which are more flexible than existing codecs. Recently, autoencoders (AEs) have shown their effectiveness in many visual analysis tasks as well as image compression, which inspires us to employ them in point cloud compression. In this paper, we propose a general autoencoder-based architecture for lossy geometry point cloud compression. To the best of our knowledge, it is the first autoencoder-based geometry compression codec that directly takes point clouds as input rather than voxel grids or collections of images. Compared with handcrafted codecs, this approach adapts much more quickly to previously unseen media contents and media formats, meanwhile achieving competitive performance. Our architecture consists of a pointnet-based encoder, a uniform quantizer, an entropy estimation block and a nonlinear synthesis transformation module. In lossy geometry compression of point cloud, results show that the proposed method outperforms the test model for categories 1 and 3 (TMC13) published by the MPEG-3DG group at the 125th meeting, and on average a 73.15\% BD-rate gain is achieved. |
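The uniform quantizer and entropy-estimation stage in the architecture above can be sketched independently of the learned encoder. This is a generic illustration, not the paper's actual entropy model (which would typically be a learned, differentiable density): it uses a histogram-based bits-per-symbol estimate of the quantized latents.

```python
import numpy as np

def quantize_and_rate(z, step=1.0):
    """Uniformly quantize latent values and estimate the empirical entropy
    (bits per symbol) of the quantized alphabet from a histogram."""
    q = np.round(z / step).astype(np.int64)
    _, counts = np.unique(q, return_counts=True)
    p = counts / counts.sum()
    bits = float(-(p * np.log2(p)).sum())  # empirical entropy in bits/symbol
    return q * step, bits
```

A coarser step trades reconstruction error (bounded by step/2 per value) against a smaller bit rate, which is the lossy rate-distortion trade-off the codec optimizes.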
1907.12698 | Antonio Mora Dr. | A.M. Mora and A.I. Esparcia-Alc\'azar | EVO* 2019 -- Late-Breaking Abstracts Volume | LBAs accepted in EVO* 2019 | null | null | null | cs.NE cs.CY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This volume contains the Late-Breaking Abstracts submitted to the EVO* 2019
Conference, which took place in Leipzig from 24 to 26 April. These papers
were presented as short talks and also at the poster session of the conference
together with other regular submissions. All of them present ongoing research
and preliminary results investigating the application of different
approaches of Evolutionary Computation to different problems, most of them real
world ones.
| [
{
"created": "Tue, 30 Jul 2019 01:42:42 GMT",
"version": "v1"
}
] | 2019-07-31 | [
[
"Mora",
"A. M.",
""
],
[
"Esparcia-Alcázar",
"A. I.",
""
]
] | This volume contains the Late-Breaking Abstracts submitted to the EVO* 2019 Conference, which took place in Leipzig from 24 to 26 April. These papers were presented as short talks and also at the poster session of the conference together with other regular submissions. All of them present ongoing research and preliminary results investigating the application of different approaches of Evolutionary Computation to different problems, most of them real world ones. |
1003.4628 | Pedro Gonnet | Pedro Gonnet | Efficient Construction, Update and Downdate Of The Coefficients Of
Interpolants Based On Polynomials Satisfying A Three-Term Recurrence Relation | 18 pages, submitted to the Journal of Scientific Computing. | null | null | null | cs.NA cs.MS math.NA | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we consider methods to compute the coefficients of
interpolants relative to a basis of polynomials satisfying a three-term
recurrence relation. Two new algorithms are presented: the first constructs the
coefficients of the interpolation incrementally and can be used to update the
coefficients whenever a node is added to or removed from the interpolation.
The second algorithm, which constructs the interpolation coefficients by
decomposing the Vandermonde-like matrix iteratively, cannot be used to update
or downdate an interpolation, yet is more numerically stable than the first
algorithm and is more efficient when the coefficients of multiple
interpolations are to be computed over the same set of nodes.
| [
{
"created": "Wed, 24 Mar 2010 12:46:52 GMT",
"version": "v1"
},
{
"created": "Tue, 30 Mar 2010 14:49:29 GMT",
"version": "v2"
}
] | 2010-03-31 | [
[
"Gonnet",
"Pedro",
""
]
] | In this paper, we consider methods to compute the coefficients of interpolants relative to a basis of polynomials satisfying a three-term recurrence relation. Two new algorithms are presented: the first constructs the coefficients of the interpolation incrementally and can be used to update the coefficients whenever a node is added to or removed from the interpolation. The second algorithm, which constructs the interpolation coefficients by decomposing the Vandermonde-like matrix iteratively, cannot be used to update or downdate an interpolation, yet is more numerically stable than the first algorithm and is more efficient when the coefficients of multiple interpolations are to be computed over the same set of nodes. |
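The setting above can be illustrated with the Chebyshev basis, a standard example of polynomials satisfying a three-term recurrence. This sketch builds the Vandermonde-like matrix via the recurrence and recovers interpolation coefficients by a direct solve; the paper's contribution is precisely to replace this direct solve with incremental (update/downdate) or iterative-decomposition constructions, which are not shown here.

```python
import numpy as np

def recurrence_vandermonde(nodes, n):
    """Vandermonde-like matrix for the Chebyshev basis, which satisfies the
    three-term recurrence T_{k+1}(x) = 2x T_k(x) - T_{k-1}(x)."""
    V = np.zeros((len(nodes), n))
    V[:, 0] = 1.0
    if n > 1:
        V[:, 1] = nodes
    for k in range(2, n):
        V[:, k] = 2.0 * nodes * V[:, k - 1] - V[:, k - 2]
    return V

def interp_coeffs(nodes, values):
    # Direct solve for the coefficients c with V c = values.
    V = recurrence_vandermonde(nodes, len(nodes))
    return np.linalg.solve(V, values)
```

For example, since x^2 = (T_0 + T_2)/2, interpolating f(x) = x^2 at any three distinct nodes recovers the coefficients [0.5, 0, 0.5].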
2311.11328 | Emile Visser | E. Visser, C.E. van Daalen, J.C. Schoeman | LABCAT: Locally adaptive Bayesian optimization using
principal-component-aligned trust regions | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Bayesian optimization (BO) is a popular method for optimizing expensive
black-box functions. BO has several well-documented shortcomings, including
computational slowdown with longer optimization runs, poor suitability for
non-stationary or ill-conditioned objective functions, and poor convergence
characteristics. Several algorithms have been proposed that incorporate local
strategies, such as trust regions, into BO to mitigate these limitations;
however, none address all of them satisfactorily. To address these
shortcomings, we propose the LABCAT algorithm, which extends trust-region-based
BO by adding a rotation aligning the trust region with the weighted principal
components and an adaptive rescaling strategy based on the length-scales of a
local Gaussian process surrogate model with automatic relevance determination.
Through extensive numerical experiments using a set of synthetic test functions
and the well-known COCO benchmarking software, we show that the LABCAT
algorithm outperforms several state-of-the-art BO and other black-box
optimization algorithms.
| [
{
"created": "Sun, 19 Nov 2023 13:56:24 GMT",
"version": "v1"
},
{
"created": "Sun, 16 Jun 2024 10:22:52 GMT",
"version": "v2"
}
] | 2024-06-18 | [
[
"Visser",
"E.",
""
],
[
"van Daalen",
"C. E.",
""
],
[
"Schoeman",
"J. C.",
""
]
] | Bayesian optimization (BO) is a popular method for optimizing expensive black-box functions. BO has several well-documented shortcomings, including computational slowdown with longer optimization runs, poor suitability for non-stationary or ill-conditioned objective functions, and poor convergence characteristics. Several algorithms have been proposed that incorporate local strategies, such as trust regions, into BO to mitigate these limitations; however, none address all of them satisfactorily. To address these shortcomings, we propose the LABCAT algorithm, which extends trust-region-based BO by adding a rotation aligning the trust region with the weighted principal components and an adaptive rescaling strategy based on the length-scales of a local Gaussian process surrogate model with automatic relevance determination. Through extensive numerical experiments using a set of synthetic test functions and the well-known COCO benchmarking software, we show that the LABCAT algorithm outperforms several state-of-the-art BO and other black-box optimization algorithms. |
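The core geometric idea above (aligning a trust region with the weighted principal components of recently evaluated points) can be sketched as follows. The exponential weighting of function values is an illustrative assumption, not LABCAT's exact scheme, and the Gaussian-process length-scale rescaling is omitted:

```python
import numpy as np

def pca_aligned_axes(X, f_vals):
    """Given evaluated points X (n, d) and their objective values, return a
    weighted center and principal axes to orient/scale a trust region,
    weighting better (lower-f) points more heavily."""
    w = np.exp(-(f_vals - f_vals.min()))    # assumed weighting scheme
    w = w / w.sum()
    mu = w @ X                              # weighted center
    Xc = (X - mu) * np.sqrt(w)[:, None]     # weighted, centered cloud
    # Rows of Vt are the principal axes; s gives the per-axis spread.
    _, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return mu, Vt, s
```

A trust region expressed in these rotated axes adapts to ill-conditioned or ridge-like objectives, where an axis-aligned box would fit the local geometry poorly.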
2308.13505 | Jiaming Zhang | Jiaming Zhang, Yutao Cui, Gangshan Wu, Limin Wang | Joint Modeling of Feature, Correspondence, and a Compressed Memory for
Video Object Segmentation | 9 pages, 8 figures | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Current prevailing Video Object Segmentation (VOS) methods usually perform
dense matching between the current and reference frames after extracting their
features. On one hand, the decoupled modeling restricts target information
propagation to the high-level feature space. On the other hand, the pixel-wise
matching leads to a lack of holistic understanding of the targets. To overcome
these issues, we propose a unified VOS framework, coined as JointFormer, for
joint modeling the three elements of feature, correspondence, and a compressed
memory. The core design is the Joint Block, utilizing the flexibility of
attention to simultaneously extract features and propagate target
information to the current tokens and the compressed memory token. This scheme
allows us to perform extensive information propagation and discriminative
feature learning. To incorporate long-term temporal target information, we also
devise a customized online updating mechanism for the compressed memory token,
which can prompt the information flow along the temporal dimension and thus
improve the global modeling capability. With this design, our method achieves a
new state-of-the-art performance on DAVIS 2017 val/test-dev (89.7% and 87.6%) and
YouTube-VOS 2018/2019 val (87.0% and 87.0%) benchmarks, outperforming existing
works by a large margin.
| [
{
"created": "Fri, 25 Aug 2023 17:30:08 GMT",
"version": "v1"
}
] | 2023-08-28 | [
[
"Zhang",
"Jiaming",
""
],
[
"Cui",
"Yutao",
""
],
[
"Wu",
"Gangshan",
""
],
[
"Wang",
"Limin",
""
]
] | Current prevailing Video Object Segmentation (VOS) methods usually perform dense matching between the current and reference frames after extracting their features. On the one hand, the decoupled modeling restricts target information propagation to the high-level feature space. On the other hand, the pixel-wise matching leads to a lack of holistic understanding of the targets. To overcome these issues, we propose a unified VOS framework, coined as JointFormer, for jointly modeling the three elements of feature, correspondence, and a compressed memory. The core design is the Joint Block, utilizing the flexibility of attention to simultaneously extract features and propagate target information to the current tokens and the compressed memory token. This scheme allows us to perform extensive information propagation and discriminative feature learning. To incorporate long-term temporal target information, we also devise a customized online updating mechanism for the compressed memory token, which can prompt the information flow along the temporal dimension and thus improve the global modeling capability. With this design, our method achieves a new state-of-the-art performance on DAVIS 2017 val/test-dev (89.7% and 87.6%) and YouTube-VOS 2018/2019 val (87.0% and 87.0%) benchmarks, outperforming existing works by a large margin. |
2103.08193 | Masato Ishii | Masato Ishii | Semi-supervised learning by selective training with pseudo labels via
confidence estimation | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a novel semi-supervised learning (SSL) method that adopts
selective training with pseudo labels. In our method, we generate hard
pseudo-labels and also estimate their confidence, which represents how likely
each pseudo-label is to be correct. Then, we explicitly select which
pseudo-labeled data should be used to update the model. Specifically, assuming
that the loss on incorrectly pseudo-labeled data increases sensitively under data
augmentation, we select the data corresponding to relatively small loss after
applying data augmentation. The confidence is used not only for screening
candidates of pseudo-labeled data to be selected but also for automatically
deciding how many pseudo-labeled data should be selected within a mini-batch.
Since accurate estimation of the confidence is crucial in our method, we also
propose a new data augmentation method, called MixConf, that enables us to
obtain confidence-calibrated models even when the number of training data is
small. Experimental results with several benchmark datasets validate the
advantage of our SSL method as well as MixConf.
| [
{
"created": "Mon, 15 Mar 2021 08:00:33 GMT",
"version": "v1"
}
] | 2021-03-16 | [
[
"Ishii",
"Masato",
""
]
] | We propose a novel semi-supervised learning (SSL) method that adopts selective training with pseudo labels. In our method, we generate hard pseudo-labels and also estimate their confidence, which represents how likely each pseudo-label is to be correct. Then, we explicitly select which pseudo-labeled data should be used to update the model. Specifically, assuming that the loss on incorrectly pseudo-labeled data increases sensitively under data augmentation, we select the data corresponding to relatively small loss after applying data augmentation. The confidence is used not only for screening candidates of pseudo-labeled data to be selected but also for automatically deciding how many pseudo-labeled data should be selected within a mini-batch. Since accurate estimation of the confidence is crucial in our method, we also propose a new data augmentation method, called MixConf, that enables us to obtain confidence-calibrated models even when the number of training data is small. Experimental results with several benchmark datasets validate the advantage of our SSL method as well as MixConf. |
2007.03647 | Ardavan Bidgoli | Ardavan Bidgoli, Manuel Ladron De Guevara, Cinnie Hsiung, Jean Oh,
Eunsu Kang | Artistic Style in Robotic Painting; a Machine Learning Approach to
Learning Brushstroke from Human Artists | The 29th IEEE International Conference on Robot & Human Interactive
Communication | null | null | null | cs.RO cs.HC cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Robotic painting has been a subject of interest among both artists and
roboticists since the 1970s. Researchers and interdisciplinary artists have
employed various painting techniques and human-robot collaboration models to
create visual mediums on canvas. One of the challenges of robotic painting is
to apply a desired artistic style to the painting. Style transfer techniques
with machine learning models have helped us address this challenge with the
visual style of a specific painting. However, other manual elements of style,
i.e., painting techniques and brushstrokes of an artist, have not been fully
addressed. We propose a method to integrate an artistic style into the
brushstrokes and the painting process through collaboration with a human
artist. In this paper, we describe our approach to 1) collect brushstrokes and
hand-brush motion samples from an artist, 2) train a generative model to
generate brushstrokes that pertain to the artist's style, and 3) fine-tune a
stroke-based rendering model to work with our robotic painting setup. We will
report on the integration of these three steps in a separate publication. In a
preliminary study, 71% of human evaluators found that our reconstructed
brushstrokes pertain to the characteristics of the artist's style. Moreover, 58% of
participants could not distinguish a painting made by our method from a
visually similar painting created by a human artist.
| [
{
"created": "Tue, 7 Jul 2020 17:35:38 GMT",
"version": "v1"
},
{
"created": "Tue, 28 Jul 2020 04:05:51 GMT",
"version": "v2"
}
] | 2020-07-29 | [
[
"Bidgoli",
"Ardavan",
""
],
[
"De Guevara",
"Manuel Ladron",
""
],
[
"Hsiung",
"Cinnie",
""
],
[
"Oh",
"Jean",
""
],
[
"Kang",
"Eunsu",
""
]
] | Robotic painting has been a subject of interest among both artists and roboticists since the 1970s. Researchers and interdisciplinary artists have employed various painting techniques and human-robot collaboration models to create visual mediums on canvas. One of the challenges of robotic painting is to apply a desired artistic style to the painting. Style transfer techniques with machine learning models have helped us address this challenge with the visual style of a specific painting. However, other manual elements of style, i.e., painting techniques and brushstrokes of an artist, have not been fully addressed. We propose a method to integrate an artistic style into the brushstrokes and the painting process through collaboration with a human artist. In this paper, we describe our approach to 1) collect brushstrokes and hand-brush motion samples from an artist, 2) train a generative model to generate brushstrokes that pertain to the artist's style, and 3) fine-tune a stroke-based rendering model to work with our robotic painting setup. We will report on the integration of these three steps in a separate publication. In a preliminary study, 71% of human evaluators found that our reconstructed brushstrokes pertain to the characteristics of the artist's style. Moreover, 58% of participants could not distinguish a painting made by our method from a visually similar painting created by a human artist. |
2407.05377 | Eleni Nisioti | Eleni Nisioti, Sebastian Risi, Ida Momennejad, Pierre-Yves Oudeyer and
Cl\'ement Moulin-Frier | Collective Innovation in Groups of Large Language Models | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Human culture relies on collective innovation: our ability to continuously
explore how existing elements in our environment can be combined to create new
ones. Language is hypothesized to play a key role in human culture, driving
individual cognitive capacities and shaping communication. Yet the majority of
models of collective innovation assign no cognitive capacities or language
abilities to agents. Here, we contribute a computational study of collective
innovation where agents are Large Language Models (LLMs) that play Little
Alchemy 2, a creative video game originally developed for humans that, as we
argue, captures useful aspects of innovation landscapes not present in previous
test-beds. We, first, study an LLM in isolation and discover that it exhibits
both useful skills and crucial limitations. We, then, study groups of LLMs that
share information related to their behaviour and focus on the effect of social
connectivity on collective performance. In agreement with previous human and
computational studies, we observe that groups with dynamic connectivity
out-compete fully-connected groups. Our work reveals opportunities and
challenges for future studies of collective innovation that are becoming
increasingly relevant as Generative Artificial Intelligence algorithms and
humans innovate alongside each other.
| [
{
"created": "Sun, 7 Jul 2024 13:59:46 GMT",
"version": "v1"
}
] | 2024-07-09 | [
[
"Nisioti",
"Eleni",
""
],
[
"Risi",
"Sebastian",
""
],
[
"Momennejad",
"Ida",
""
],
[
"Oudeyer",
"Pierre-Yves",
""
],
[
"Moulin-Frier",
"Clément",
""
]
] | Human culture relies on collective innovation: our ability to continuously explore how existing elements in our environment can be combined to create new ones. Language is hypothesized to play a key role in human culture, driving individual cognitive capacities and shaping communication. Yet the majority of models of collective innovation assign no cognitive capacities or language abilities to agents. Here, we contribute a computational study of collective innovation where agents are Large Language Models (LLMs) that play Little Alchemy 2, a creative video game originally developed for humans that, as we argue, captures useful aspects of innovation landscapes not present in previous test-beds. We, first, study an LLM in isolation and discover that it exhibits both useful skills and crucial limitations. We, then, study groups of LLMs that share information related to their behaviour and focus on the effect of social connectivity on collective performance. In agreement with previous human and computational studies, we observe that groups with dynamic connectivity out-compete fully-connected groups. Our work reveals opportunities and challenges for future studies of collective innovation that are becoming increasingly relevant as Generative Artificial Intelligence algorithms and humans innovate alongside each other. |
2308.09082 | Rongfei Fan | Rongfei Fan, Xuming An, Shiyuan Zuo, and Han Hu | Over-the-Air Computation Aided Federated Learning with the Aggregation
of Normalized Gradient | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Over-the-air computation is a communication-efficient solution for federated
learning (FL). In such a system, an iterative procedure is performed: the local
gradient of a private loss function is updated, amplified, and then transmitted by
every mobile device; the server receives the aggregated gradient all-at-once,
generates and then broadcasts updated model parameters to every mobile device.
In terms of amplification factor selection, most related works assume the
local gradient always attains its maximal norm, although it actually fluctuates
over iterations, which may degrade convergence performance. To circumvent this
problem, we propose to normalize the local gradient before amplifying it. When
the loss function is smooth, we prove that our proposed method converges to a
stationary point at a sub-linear rate. In the case of a smooth and strongly
convex loss function, we prove that our method achieves the minimal training
loss at a linear rate for any small positive tolerance. Moreover, a tradeoff
between the convergence rate and the tolerance is discovered. To speed up
convergence, problems optimizing system parameters are also formulated for the
above two cases. Although these problems are non-convex, optimal solutions with
polynomial complexity are derived.
Experimental results show that our proposed method outperforms benchmark methods
on convergence performance.
| [
{
"created": "Thu, 17 Aug 2023 16:15:47 GMT",
"version": "v1"
},
{
"created": "Sun, 3 Sep 2023 03:59:53 GMT",
"version": "v2"
}
] | 2023-09-06 | [
[
"Fan",
"Rongfei",
""
],
[
"An",
"Xuming",
""
],
[
"Zuo",
"Shiyuan",
""
],
[
"Hu",
"Han",
""
]
] | Over-the-air computation is a communication-efficient solution for federated learning (FL). In such a system, an iterative procedure is performed: the local gradient of a private loss function is updated, amplified, and then transmitted by every mobile device; the server receives the aggregated gradient all-at-once, generates and then broadcasts updated model parameters to every mobile device. In terms of amplification factor selection, most related works assume the local gradient always attains its maximal norm, although it actually fluctuates over iterations, which may degrade convergence performance. To circumvent this problem, we propose to normalize the local gradient before amplifying it. When the loss function is smooth, we prove that our proposed method converges to a stationary point at a sub-linear rate. In the case of a smooth and strongly convex loss function, we prove that our method achieves the minimal training loss at a linear rate for any small positive tolerance. Moreover, a tradeoff between the convergence rate and the tolerance is discovered. To speed up convergence, problems optimizing system parameters are also formulated for the above two cases. Although these problems are non-convex, optimal solutions with polynomial complexity are derived. Experimental results show that our proposed method outperforms benchmark methods on convergence performance. |
2201.05698 | Abhishek Patange | Aditya M. Medhi, Abhishek D. Patange, Sujit S. Pardeshi, R.
Jegadeeshwaran, Mustafa Kuntoglu | Overview of contemporary systems driven by open-design movement | 27 pages, 10 Figures, 1 Table | null | null | null | cs.AR cs.CY cs.SE | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The movement for open-design focuses on the creation of machines, physical
systems, and products using design information shared publicly. It consists of
the development of systems incorporating open-source hardware and software
which can be easily/freely customized and implemented. Generally, this movement
is adopted through the Internet and usually executed without economic
recompense. The aim and idea of this movement are similar to those of the
open-source movement; however, it is employed for designing and developing
physical systems instead of software systems alone. This design necessitates co-creating the end
product, which is expected to be designed by the users, in place of an outside
investor, for example a private business. In line with this, a comprehensive
review is carried out wherein a variety of contemporary systems driven by the
open-design movement for diverse applications is discussed.
| [
{
"created": "Fri, 14 Jan 2022 22:56:38 GMT",
"version": "v1"
}
] | 2022-01-19 | [
[
"Medhi",
"Aditya M.",
""
],
[
"Patange",
"Abhishek D.",
""
],
[
"Pardeshi",
"Sujit S.",
""
],
[
"Jegadeeshwaran",
"R.",
""
],
[
"Kuntoglu",
"Mustafa",
""
]
] | The movement for open-design focuses on the creation of machines, physical systems, and products using design information shared publicly. It consists of the development of systems incorporating open-source hardware and software which can be easily/freely customized and implemented. Generally, this movement is adopted through the Internet and usually executed without economic recompense. The aim and idea of this movement are similar to those of the open-source movement; however, it is employed for designing and developing physical systems instead of software systems alone. This design necessitates co-creating the end product, which is expected to be designed by the users, in place of an outside investor, for example a private business. In line with this, a comprehensive review is carried out wherein a variety of contemporary systems driven by the open-design movement for diverse applications is discussed. |
2006.13321 | Gy\"orgy Csom\'os | Gyorgy Csomos | Introducing recalibrated academic performance indicators in the
evaluation of individuals' research performance: A case study from Eastern
Europe | null | null | null | null | cs.DL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In Hungary, the highest and most prestigious scientific qualification is
considered to be the Doctor of Science (DSc) title, awarded by the
Hungarian Academy of Sciences. The academic performance indicators of the DSc
title are of high importance in the evaluation of individuals' research
performance not only when a researcher applies for obtaining a DSc title, but
also during promotions and appointments at universities, and in the case of the
evaluation of applications for scientific titles and degrees, and the
assessment of applications for funding. In the Section of Earth Sciences
encompassing nine related disciplines, rather than carrying out a
straightforward bibliometric analysis, the performance indicators were designed
as a result of a consensual agreement between leading academicians, each of
whom represented a particular discipline. Therefore, the minimum values of the
indicators, required to be fulfilled if one is applying for a DSc title, do not
adequately reflect the actual discipline-specific performance of researchers.
This problem may generate tension between researchers during the evaluation
process. The main goal of this paper is to recalibrate the minimum values of
four major performance indicators by taking the actual discipline-specific
distance ratios into account. In addition, each minimum value will be defined
by employing integer and fractional counting methods as well. The research
outcome of this study can provide impetus for the Section of Earth Sciences to
optimize the minimum values of the DSc title performance indicators by taking
the specifics of each discipline into account. Because academic performance
indicators are also employed in other Eastern European countries in the
evaluation of individuals' research performance, the methods used in this paper
can be placed into a wider geographical context.
| [
{
"created": "Tue, 23 Jun 2020 20:40:35 GMT",
"version": "v1"
}
] | 2020-06-25 | [
[
"Csomos",
"Gyorgy",
""
]
] | In Hungary, the highest and most prestigious scientific qualification is considered to be the Doctor of Science (DSc) title, awarded by the Hungarian Academy of Sciences. The academic performance indicators of the DSc title are of high importance in the evaluation of individuals' research performance not only when a researcher applies for obtaining a DSc title, but also during promotions and appointments at universities, and in the case of the evaluation of applications for scientific titles and degrees, and the assessment of applications for funding. In the Section of Earth Sciences encompassing nine related disciplines, rather than carrying out a straightforward bibliometric analysis, the performance indicators were designed as a result of a consensual agreement between leading academicians, each of whom represented a particular discipline. Therefore, the minimum values of the indicators, required to be fulfilled if one is applying for a DSc title, do not adequately reflect the actual discipline-specific performance of researchers. This problem may generate tension between researchers during the evaluation process. The main goal of this paper is to recalibrate the minimum values of four major performance indicators by taking the actual discipline-specific distance ratios into account. In addition, each minimum value will be defined by employing integer and fractional counting methods as well. The research outcome of this study can provide impetus for the Section of Earth Sciences to optimize the minimum values of the DSc title performance indicators by taking the specifics of each discipline into account. Because academic performance indicators are also employed in other Eastern European countries in the evaluation of individuals' research performance, the methods used in this paper can be placed into a wider geographical context. |
2305.19421 | Ines Dutra | Mariana Pinto, In\^es Dutra and Joaquim Fonseca | Data and Knowledge for Overtaking Scenarios in Autonomous Driving | 24 pages | null | null | null | cs.RO cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | Autonomous driving has become one of the most popular research topics within
Artificial Intelligence. An autonomous vehicle is understood as a system that
combines perception, decision-making, planning, and control. All of those tasks
require that the vehicle collects surrounding data in order to make a good
decision and action. In particular, the overtaking maneuver is one of the most
critical actions of driving. The process involves lane changes, acceleration
and deceleration actions, and estimation of the speed and distance of the
vehicle in front or in the lane in which it is moving. Despite the amount of
work available in the literature, just a few handle overtaking maneuvers and,
because overtaking can be risky, no real-world dataset is available. This work
contributes in this area by presenting a new synthetic dataset whose focus is
the overtaking maneuver. We start by performing a thorough review of the state
of the art in autonomous driving and then explore the main datasets found in
the literature (public and private, synthetic and real), highlighting their
limitations, and suggesting a new set of features whose focus is the overtaking
maneuver.
| [
{
"created": "Tue, 30 May 2023 21:27:05 GMT",
"version": "v1"
}
] | 2023-06-01 | [
[
"Pinto",
"Mariana",
""
],
[
"Dutra",
"Inês",
""
],
[
"Fonseca",
"Joaquim",
""
]
] | Autonomous driving has become one of the most popular research topics within Artificial Intelligence. An autonomous vehicle is understood as a system that combines perception, decision-making, planning, and control. All of those tasks require that the vehicle collects surrounding data in order to make a good decision and action. In particular, the overtaking maneuver is one of the most critical actions of driving. The process involves lane changes, acceleration and deceleration actions, and estimation of the speed and distance of the vehicle in front or in the lane in which it is moving. Despite the amount of work available in the literature, just a few handle overtaking maneuvers and, because overtaking can be risky, no real-world dataset is available. This work contributes in this area by presenting a new synthetic dataset whose focus is the overtaking maneuver. We start by performing a thorough review of the state of the art in autonomous driving and then explore the main datasets found in the literature (public and private, synthetic and real), highlighting their limitations, and suggesting a new set of features whose focus is the overtaking maneuver. |
2110.04111 | Kwanyong Park | KwanYong Park, Sanghyun Woo, Inkyu Shin, In So Kweon | Discover, Hallucinate, and Adapt: Open Compound Domain Adaptation for
Semantic Segmentation | NeurIPS 2020 | null | null | null | cs.CV cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | Unsupervised domain adaptation (UDA) for semantic segmentation has been
attracting attention recently, as it could be beneficial for various
label-scarce real-world scenarios (e.g., robot control, autonomous driving,
medical imaging, etc.). Despite the significant progress in this field, current
works mainly focus on a single-source single-target setting, which cannot
handle more practical settings of multiple targets or even unseen targets. In
this paper, we investigate open compound domain adaptation (OCDA), which deals
with mixed and novel situations at the same time, for semantic segmentation. We
present a novel framework based on three main design principles: discover,
hallucinate, and adapt. The scheme first clusters compound target data based on
style, discovering multiple latent domains (discover). Then, it hallucinates
multiple latent target domains in the source by using image translation
(hallucinate). This step ensures that the latent domains in the source and the
target are paired. Finally, target-to-source alignment is learned separately
between domains (adapt). At a high level, our solution replaces a hard OCDA
problem with multiple much easier UDA problems. We evaluate our solution on the
standard GTA to C-driving benchmark and achieve new state-of-the-art results.
| [
{
"created": "Fri, 8 Oct 2021 13:20:09 GMT",
"version": "v1"
}
] | 2021-10-11 | [
[
"Park",
"KwanYong",
""
],
[
"Woo",
"Sanghyun",
""
],
[
"Shin",
"Inkyu",
""
],
[
"Kweon",
"In So",
""
]
] | Unsupervised domain adaptation (UDA) for semantic segmentation has been attracting attention recently, as it could be beneficial for various label-scarce real-world scenarios (e.g., robot control, autonomous driving, medical imaging, etc.). Despite the significant progress in this field, current works mainly focus on a single-source single-target setting, which cannot handle more practical settings of multiple targets or even unseen targets. In this paper, we investigate open compound domain adaptation (OCDA), which deals with mixed and novel situations at the same time, for semantic segmentation. We present a novel framework based on three main design principles: discover, hallucinate, and adapt. The scheme first clusters compound target data based on style, discovering multiple latent domains (discover). Then, it hallucinates multiple latent target domains in the source by using image translation (hallucinate). This step ensures that the latent domains in the source and the target are paired. Finally, target-to-source alignment is learned separately between domains (adapt). At a high level, our solution replaces a hard OCDA problem with multiple much easier UDA problems. We evaluate our solution on the standard GTA to C-driving benchmark and achieve new state-of-the-art results. |
1912.00271 | Shervin Minaee | Shervin Minaee, Amirali Abdolrashidi, Hang Su, Mohammed Bennamoun,
David Zhang | Biometrics Recognition Using Deep Learning: A Survey | Under Review | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep learning-based models have been very successful in achieving
state-of-the-art results in many of the computer vision, speech recognition,
and natural language processing tasks in the last few years. These models seem
a natural fit for handling the ever-increasing scale of biometric recognition
problems, from cellphone authentication to airport security systems. Deep
learning-based models have increasingly been leveraged to improve the accuracy
of different biometric recognition systems in recent years. In this work, we
provide a comprehensive survey of more than 120 promising works on biometric
recognition (including face, fingerprint, iris, palmprint, ear, voice,
signature, and gait recognition), which deploy deep learning models, and show
their strengths and potentials in different applications. For each biometric,
we first introduce the available datasets that are widely used in the
literature and their characteristics. We will then talk about several promising
deep learning works developed for that biometric, and show their performance on
popular public benchmarks. We will also discuss some of the main challenges
while using these models for biometric recognition, and possible future
directions to which research in this area is headed.
| [
{
"created": "Sat, 30 Nov 2019 22:00:57 GMT",
"version": "v1"
},
{
"created": "Wed, 8 Apr 2020 14:59:36 GMT",
"version": "v2"
},
{
"created": "Mon, 8 Feb 2021 19:24:49 GMT",
"version": "v3"
}
] | 2021-02-10 | [
[
"Minaee",
"Shervin",
""
],
[
"Abdolrashidi",
"Amirali",
""
],
[
"Su",
"Hang",
""
],
[
"Bennamoun",
"Mohammed",
""
],
[
"Zhang",
"David",
""
]
] | Deep learning-based models have been very successful in achieving state-of-the-art results in many of the computer vision, speech recognition, and natural language processing tasks in the last few years. These models seem a natural fit for handling the ever-increasing scale of biometric recognition problems, from cellphone authentication to airport security systems. Deep learning-based models have increasingly been leveraged to improve the accuracy of different biometric recognition systems in recent years. In this work, we provide a comprehensive survey of more than 120 promising works on biometric recognition (including face, fingerprint, iris, palmprint, ear, voice, signature, and gait recognition), which deploy deep learning models, and show their strengths and potentials in different applications. For each biometric, we first introduce the available datasets that are widely used in the literature and their characteristics. We will then talk about several promising deep learning works developed for that biometric, and show their performance on popular public benchmarks. We will also discuss some of the main challenges while using these models for biometric recognition, and possible future directions to which research in this area is headed. |
2210.01878 | Abhishek Kulkarni | Abhishek N. Kulkarni and Jie Fu | Opportunistic Qualitative Planning in Stochastic Systems with Incomplete
Preferences over Reachability Objectives | 7 pages, 3 figures, under review for IEEE ACC 2023 | null | null | null | cs.AI cs.GT cs.RO cs.SY eess.SY | http://creativecommons.org/licenses/by/4.0/ | Preferences play a key role in determining what goals/constraints to satisfy
when not all constraints can be satisfied simultaneously. In this paper, we
study how to synthesize preference satisfying plans in stochastic systems,
modeled as an MDP, given a (possibly incomplete) combinative preference model
over temporally extended goals. We start by introducing new semantics to
interpret preferences over infinite plays of the stochastic system. Then, we
introduce a new notion of improvement to enable comparison between two prefixes
of an infinite play. Based on this, we define two solution concepts called safe
and positively improving (SPI) and safe and almost-surely improving (SASI) that
enforce improvements with a positive probability and with probability one,
respectively. We construct a model called an improvement MDP, in which the
synthesis of SPI and SASI strategies that guarantee at least one improvement
reduces to computing positive and almost-sure winning strategies in an MDP. We
present an algorithm to synthesize the SPI and SASI strategies that induce
multiple sequential improvements. We demonstrate the proposed approach using a
robot motion planning problem.
| [
{
"created": "Tue, 4 Oct 2022 19:53:08 GMT",
"version": "v1"
}
] | 2022-10-06 | [
[
"Kulkarni",
"Abhishek N.",
""
],
[
"Fu",
"Jie",
""
]
] | Preferences play a key role in determining what goals/constraints to satisfy when not all constraints can be satisfied simultaneously. In this paper, we study how to synthesize preference satisfying plans in stochastic systems, modeled as an MDP, given a (possibly incomplete) combinative preference model over temporally extended goals. We start by introducing new semantics to interpret preferences over infinite plays of the stochastic system. Then, we introduce a new notion of improvement to enable comparison between two prefixes of an infinite play. Based on this, we define two solution concepts called safe and positively improving (SPI) and safe and almost-surely improving (SASI) that enforce improvements with a positive probability and with probability one, respectively. We construct a model called an improvement MDP, in which the synthesis of SPI and SASI strategies that guarantee at least one improvement reduces to computing positive and almost-sure winning strategies in an MDP. We present an algorithm to synthesize the SPI and SASI strategies that induce multiple sequential improvements. We demonstrate the proposed approach using a robot motion planning problem. |
2406.00942 | Max Kreminski | Max Kreminski | Cheap and Easy Open-Ended Text Input for Interactive Emergent Narrative | Presented as a demo at FDG 2024 | null | null | null | cs.HC | http://creativecommons.org/licenses/by-nc-sa/4.0/ | We present a demonstration of Play What I Mean (PWIM): a novel, AI-supported
interaction technique for interactive emergent narrative (IEN) games and play
experiences. By assisting players in translating high-level gameplay intents
(expressed as short, unstructured text strings) into concrete game actions,
PWIM aims to support open-ended player input while mitigating the overwhelm
that players sometimes feel when confronting the large action spaces that
characterize IEN gameplay. In matching player intents to game actions, PWIM
makes use of an off-the-shelf sentence embedding model that is lightweight
enough to run locally on a player's device, and wraps this model in a simple
user interface that allows the player to work around occasional classification
errors.
| [
{
"created": "Mon, 3 Jun 2024 02:41:20 GMT",
"version": "v1"
}
] | 2024-06-04 | [
[
"Kreminski",
"Max",
""
]
] | We present a demonstration of Play What I Mean (PWIM): a novel, AI-supported interaction technique for interactive emergent narrative (IEN) games and play experiences. By assisting players in translating high-level gameplay intents (expressed as short, unstructured text strings) into concrete game actions, PWIM aims to support open-ended player input while mitigating the overwhelm that players sometimes feel when confronting the large action spaces that characterize IEN gameplay. In matching player intents to game actions, PWIM makes use of an off-the-shelf sentence embedding model that is lightweight enough to run locally on a player's device, and wraps this model in a simple user interface that allows the player to work around occasional classification errors. |
1810.08578 | Luke Godfrey | Luke B. Godfrey and Michael S. Gashler | Leveraging Product as an Activation Function in Deep Networks | 6 pages, 3 figures, IEEE SMC 2018 | null | null | null | cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Product unit neural networks (PUNNs) are powerful representational models
with a strong theoretical basis, but have proven to be difficult to train with
gradient-based optimizers. We present windowed product unit neural networks
(WPUNNs), a simple method of leveraging product as a nonlinearity in a neural
network. Windowing the product tames the complex gradient surface and enables
WPUNNs to learn effectively, solving the problems faced by PUNNs. WPUNNs use
product layers between traditional sum layers, capturing the representational
power of product units and using the product itself as a nonlinearity. We find
that this method works as well as traditional nonlinearities like
ReLU on the MNIST dataset. We demonstrate that WPUNNs can also generalize gated
units in recurrent neural networks, yielding results comparable to LSTM
networks.
| [
{
"created": "Fri, 19 Oct 2018 16:43:26 GMT",
"version": "v1"
}
] | 2018-10-22 | [
[
"Godfrey",
"Luke B.",
""
],
[
"Gashler",
"Michael S.",
""
]
] | Product unit neural networks (PUNNs) are powerful representational models with a strong theoretical basis, but have proven to be difficult to train with gradient-based optimizers. We present windowed product unit neural networks (WPUNNs), a simple method of leveraging product as a nonlinearity in a neural network. Windowing the product tames the complex gradient surface and enables WPUNNs to learn effectively, solving the problems faced by PUNNs. WPUNNs use product layers between traditional sum layers, capturing the representational power of product units and using the product itself as a nonlinearity. We find that this method works as well as traditional nonlinearities like ReLU on the MNIST dataset. We demonstrate that WPUNNs can also generalize gated units in recurrent neural networks, yielding results comparable to LSTM networks. |
1907.13528 | Allyson Ettinger | Allyson Ettinger | What BERT is not: Lessons from a new suite of psycholinguistic
diagnostics for language models | null | null | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Pre-training by language modeling has become a popular and successful
approach to NLP tasks, but we have yet to understand exactly what linguistic
capacities these pre-training processes confer upon models. In this paper we
introduce a suite of diagnostics drawn from human language experiments, which
allow us to ask targeted questions about the information used by language
models for generating predictions in context. As a case study, we apply these
diagnostics to the popular BERT model, finding that it can generally
distinguish good from bad completions involving shared category or role
reversal, albeit with less sensitivity than humans, and it robustly retrieves
noun hypernyms, but it struggles with challenging inferences and role-based
event prediction -- and in particular, it shows clear insensitivity to the
contextual impacts of negation.
| [
{
"created": "Wed, 31 Jul 2019 14:37:32 GMT",
"version": "v1"
},
{
"created": "Mon, 13 Jul 2020 15:21:20 GMT",
"version": "v2"
}
] | 2020-07-14 | [
[
"Ettinger",
"Allyson",
""
]
] | Pre-training by language modeling has become a popular and successful approach to NLP tasks, but we have yet to understand exactly what linguistic capacities these pre-training processes confer upon models. In this paper we introduce a suite of diagnostics drawn from human language experiments, which allow us to ask targeted questions about the information used by language models for generating predictions in context. As a case study, we apply these diagnostics to the popular BERT model, finding that it can generally distinguish good from bad completions involving shared category or role reversal, albeit with less sensitivity than humans, and it robustly retrieves noun hypernyms, but it struggles with challenging inferences and role-based event prediction -- and in particular, it shows clear insensitivity to the contextual impacts of negation. |
1601.03004 | Rasika Lakmal Hettiarachchige Don | Rasika Lakmal Hettiarachchige Don and Jagath Samarabandu | Novel velocity model to improve indoor localization using inertial
navigation with sensors on a smartphone | null | null | null | null | cs.OH | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a generalized velocity model to improve localization when using an
Inertial Navigation System (INS). This algorithm was applied to correct the
velocity of a smart phone based indoor INS system to increase the accuracy by
counteracting the accumulation of large drift caused by sensor reading errors.
We investigated the accuracy of the algorithm with three different velocity
models which were derived from the actual velocity measured at the hip of
walking person. Our results show that the proposed method with Gaussian
velocity model achieves competitive accuracy with a 50\% less variance over
Step and Heading approach proving the accuracy and robustness of proposed
method. We also investigated the frequency of applying corrections and found
that a minimum of 5\% corrections per step is sufficient for improved accuracy.
The proposed method is applicable in indoor localization and tracking
applications based on smart phone where traditional approaches such as GNSS
suffers from many issues.
| [
{
"created": "Tue, 12 Jan 2016 19:20:21 GMT",
"version": "v1"
}
] | 2016-01-13 | [
[
"Don",
"Rasika Lakmal Hettiarachchige",
""
],
[
"Samarabandu",
"Jagath",
""
]
] | We present a generalized velocity model to improve localization when using an Inertial Navigation System (INS). This algorithm was applied to correct the velocity of a smartphone-based indoor INS system to increase accuracy by counteracting the accumulation of large drift caused by sensor reading errors. We investigated the accuracy of the algorithm with three different velocity models, which were derived from the actual velocity measured at the hip of a walking person. Our results show that the proposed method with a Gaussian velocity model achieves competitive accuracy with 50\% less variance than the Step and Heading approach, demonstrating the accuracy and robustness of the proposed method. We also investigated the frequency of applying corrections and found that a minimum of 5\% corrections per step is sufficient for improved accuracy. The proposed method is applicable to smartphone-based indoor localization and tracking applications, where traditional approaches such as GNSS suffer from many issues. |
1211.2265 | Yihong Wu | T. Tony Cai and Yihong Wu | Optimal Detection For Sparse Mixtures | submitted to IEEE Transactions on Information Theory | null | null | null | cs.IT math.IT math.ST stat.TH | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Detection of sparse signals arises in a wide range of modern scientific
studies. The focus so far has been mainly on Gaussian mixture models. In this
paper, we consider the detection problem under a general sparse mixture model
and obtain an explicit expression for the detection boundary. It is shown that
the fundamental limits of detection are governed by the behavior of the
log-likelihood ratio evaluated at an appropriate quantile of the null
distribution. We also establish the adaptive optimality of the higher criticism
procedure across all sparse mixtures satisfying certain mild regularity
conditions. In particular, the general results obtained in this paper recover
and extend in a unified manner the previously known results on sparse detection
far beyond the conventional Gaussian model and other exponential families.
| [
{
"created": "Fri, 9 Nov 2012 23:31:47 GMT",
"version": "v1"
}
] | 2012-11-13 | [
[
"Cai",
"T. Tony",
""
],
[
"Wu",
"Yihong",
""
]
] | Detection of sparse signals arises in a wide range of modern scientific studies. The focus so far has been mainly on Gaussian mixture models. In this paper, we consider the detection problem under a general sparse mixture model and obtain an explicit expression for the detection boundary. It is shown that the fundamental limits of detection are governed by the behavior of the log-likelihood ratio evaluated at an appropriate quantile of the null distribution. We also establish the adaptive optimality of the higher criticism procedure across all sparse mixtures satisfying certain mild regularity conditions. In particular, the general results obtained in this paper recover and extend in a unified manner the previously known results on sparse detection far beyond the conventional Gaussian model and other exponential families. |
2002.02058 | Takahiro Yabe | Toru Shimizu, Takahiro Yabe, Kota Tsubouchi | Learning Fine Grained Place Embeddings with Spatial Hierarchy from Human
Mobility Trajectories | submitted to IJCAI 2020 | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Place embeddings generated from human mobility trajectories have become a
popular method to understand the functionality of places. Place embeddings with
high spatial resolution are desirable for many applications; however,
downscaling the spatial resolution deteriorates the quality of embeddings due
to data sparsity, especially in less populated areas. We address this issue by
proposing a method that generates fine grained place embeddings, which
leverages spatial hierarchical information according to the local density of
observed data points. The effectiveness of our fine grained place embeddings
is compared to baseline methods via next place prediction tasks using real
world trajectory data from 3 cities in Japan. In addition, we demonstrate the
value of our fine grained place embeddings for land use classification
applications. We believe that our technique of incorporating spatial
hierarchical information can complement and reinforce various place embedding
generating methods.
| [
{
"created": "Thu, 6 Feb 2020 01:37:40 GMT",
"version": "v1"
}
] | 2020-02-07 | [
[
"Shimizu",
"Toru",
""
],
[
"Yabe",
"Takahiro",
""
],
[
"Tsubouchi",
"Kota",
""
]
] | Place embeddings generated from human mobility trajectories have become a popular method to understand the functionality of places. Place embeddings with high spatial resolution are desirable for many applications; however, downscaling the spatial resolution deteriorates the quality of embeddings due to data sparsity, especially in less populated areas. We address this issue by proposing a method that generates fine grained place embeddings, which leverages spatial hierarchical information according to the local density of observed data points. The effectiveness of our fine grained place embeddings is compared to baseline methods via next place prediction tasks using real world trajectory data from 3 cities in Japan. In addition, we demonstrate the value of our fine grained place embeddings for land use classification applications. We believe that our technique of incorporating spatial hierarchical information can complement and reinforce various place embedding generating methods. |
2308.04014 | Benjamin Th\'erien | Kshitij Gupta, Benjamin Th\'erien, Adam Ibrahim, Mats L. Richter,
Quentin Anthony, Eugene Belilovsky, Irina Rish, Timoth\'ee Lesort | Continual Pre-Training of Large Language Models: How to (re)warm your
model? | null | null | null | null | cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large language models (LLMs) are routinely pre-trained on billions of tokens,
only to restart the process over again once new data becomes available. A much
cheaper and more efficient solution would be to enable the continual
pre-training of these models, i.e. updating pre-trained models with new data
instead of re-training them from scratch. However, the distribution shift
induced by novel data typically results in degraded performance on past data.
Taking a step towards efficient continual pre-training, in this work, we
examine the effect of different warm-up strategies. Our hypothesis is that the
learning rate must be re-increased to improve compute efficiency when training
on a new dataset. We study the warmup phase of models pre-trained on the Pile
(upstream data, 300B tokens) as we continue to pre-train on SlimPajama
(downstream data, 297B tokens), following a linear warmup and cosine decay
schedule. We conduct all experiments on the Pythia 410M language model
architecture and evaluate performance through validation perplexity. We
experiment with different pre-training checkpoints, various maximum learning
rates, and various warmup lengths. Our results show that while rewarming models
first increases the loss on upstream and downstream data, in the longer run it
improves the downstream performance, outperforming models trained from
scratch$\unicode{x2013}$even for a large downstream dataset.
| [
{
"created": "Tue, 8 Aug 2023 03:18:18 GMT",
"version": "v1"
},
{
"created": "Wed, 6 Sep 2023 23:13:07 GMT",
"version": "v2"
}
] | 2023-09-08 | [
[
"Gupta",
"Kshitij",
""
],
[
"Thérien",
"Benjamin",
""
],
[
"Ibrahim",
"Adam",
""
],
[
"Richter",
"Mats L.",
""
],
[
"Anthony",
"Quentin",
""
],
[
"Belilovsky",
"Eugene",
""
],
[
"Rish",
"Irina",
""
],
[
"Lesort",
"Timothée",
""
]
] | Large language models (LLMs) are routinely pre-trained on billions of tokens, only to restart the process over again once new data becomes available. A much cheaper and more efficient solution would be to enable the continual pre-training of these models, i.e. updating pre-trained models with new data instead of re-training them from scratch. However, the distribution shift induced by novel data typically results in degraded performance on past data. Taking a step towards efficient continual pre-training, in this work, we examine the effect of different warm-up strategies. Our hypothesis is that the learning rate must be re-increased to improve compute efficiency when training on a new dataset. We study the warmup phase of models pre-trained on the Pile (upstream data, 300B tokens) as we continue to pre-train on SlimPajama (downstream data, 297B tokens), following a linear warmup and cosine decay schedule. We conduct all experiments on the Pythia 410M language model architecture and evaluate performance through validation perplexity. We experiment with different pre-training checkpoints, various maximum learning rates, and various warmup lengths. Our results show that while rewarming models first increases the loss on upstream and downstream data, in the longer run it improves the downstream performance, outperforming models trained from scratch$\unicode{x2013}$even for a large downstream dataset. |
2211.01963 | Milan Pavlovi\'c | Sr{\dj}an \v{S}obot, Vukan Ninkovi\'c, Dejan Vukobratovi\'c, Milan
Pavlovi\'c, Milo\v{s} Radovanovi\'c | Machine Learning Methods for Device Identification Using Wireless
Fingerprinting | 7 pages, 9 figures, 2 tables, preprint | null | null | null | cs.LG cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Industrial Internet of Things (IoT) systems increasingly rely on wireless
communication standards. In a common industrial scenario, indoor wireless IoT
devices communicate with access points to deliver data collected from
industrial sensors, robots and factory machines. Due to static or quasi-static
locations of IoT devices and access points, historical observations of IoT
device channel conditions provide a possibility to precisely identify the
device without observing its traditional identifiers (e.g., MAC or IP address).
Such device identification methods based on wireless fingerprinting have gained
increased attention lately as an additional cyber-security mechanism for
critical IoT infrastructures. In this paper, we perform a systematic study of a
large class of machine learning algorithms for device identification using
wireless fingerprints for the most popular cellular and Wi-Fi IoT technologies.
We design, implement, deploy, collect relevant data sets, train and test a
multitude of machine learning algorithms, as a part of the complete end-to-end
solution design for device identification via wireless fingerprinting. The
proposed solution is currently being deployed in a real-world industrial IoT
environment as part of H2020 project COLLABS.
| [
{
"created": "Thu, 3 Nov 2022 16:42:41 GMT",
"version": "v1"
}
] | 2022-11-04 | [
[
"Šobot",
"Srđan",
""
],
[
"Ninković",
"Vukan",
""
],
[
"Vukobratović",
"Dejan",
""
],
[
"Pavlović",
"Milan",
""
],
[
"Radovanović",
"Miloš",
""
]
] | Industrial Internet of Things (IoT) systems increasingly rely on wireless communication standards. In a common industrial scenario, indoor wireless IoT devices communicate with access points to deliver data collected from industrial sensors, robots and factory machines. Due to static or quasi-static locations of IoT devices and access points, historical observations of IoT device channel conditions provide a possibility to precisely identify the device without observing its traditional identifiers (e.g., MAC or IP address). Such device identification methods based on wireless fingerprinting have gained increased attention lately as an additional cyber-security mechanism for critical IoT infrastructures. In this paper, we perform a systematic study of a large class of machine learning algorithms for device identification using wireless fingerprints for the most popular cellular and Wi-Fi IoT technologies. We design, implement, deploy, collect relevant data sets, train and test a multitude of machine learning algorithms, as a part of the complete end-to-end solution design for device identification via wireless fingerprinting. The proposed solution is currently being deployed in a real-world industrial IoT environment as part of H2020 project COLLABS. |
2305.01457 | Giovanni Ballarin | Giovanni Ballarin, Lyudmila Grigoryeva, Juan-Pablo Ortega | Memory of recurrent networks: Do we compute it right? | 31 pages, 6 figures | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Numerical evaluations of the memory capacity (MC) of recurrent neural
networks reported in the literature often contradict well-established
theoretical bounds. In this paper, we study the case of linear echo state
networks, for which the total memory capacity has been proven to be equal to
the rank of the corresponding Kalman controllability matrix. We shed light on
various reasons for the inaccurate numerical estimations of the memory, and we
show that these issues, often overlooked in the recent literature, are of an
exclusively numerical nature. More explicitly, we prove that when the Krylov
structure of the linear MC is ignored, a gap between the theoretical MC and its
empirical counterpart is introduced. As a solution, we develop robust numerical
approaches by exploiting a result of MC neutrality with respect to the input
mask matrix. Simulations show that the memory curves that are recovered using
the proposed methods fully agree with the theory.
| [
{
"created": "Tue, 2 May 2023 14:37:52 GMT",
"version": "v1"
}
] | 2023-05-03 | [
[
"Ballarin",
"Giovanni",
""
],
[
"Grigoryeva",
"Lyudmila",
""
],
[
"Ortega",
"Juan-Pablo",
""
]
] | Numerical evaluations of the memory capacity (MC) of recurrent neural networks reported in the literature often contradict well-established theoretical bounds. In this paper, we study the case of linear echo state networks, for which the total memory capacity has been proven to be equal to the rank of the corresponding Kalman controllability matrix. We shed light on various reasons for the inaccurate numerical estimations of the memory, and we show that these issues, often overlooked in the recent literature, are of an exclusively numerical nature. More explicitly, we prove that when the Krylov structure of the linear MC is ignored, a gap between the theoretical MC and its empirical counterpart is introduced. As a solution, we develop robust numerical approaches by exploiting a result of MC neutrality with respect to the input mask matrix. Simulations show that the memory curves that are recovered using the proposed methods fully agree with the theory. |
2401.15935 | Viktor Moskvoretskii | Viktor Moskvoretskii, Dmitry Osin, Egor Shvetsov, Igor Udovichenko,
Maxim Zhelnin, Andrey Dukhovny, Anna Zhimerikina, Evgeny Burnaev | MLEM: Generative and Contrastive Learning as Distinct Modalities for
Event Sequences | 11 pages, 9 figures | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | This study explores the application of self-supervised learning techniques
for event sequences. Event sequences are a key modality in various applications such as
banking, e-commerce, and healthcare. However, there is limited research on
self-supervised learning for event sequences, and methods from other domains
like images, texts, and speech may not easily transfer. To determine the most
suitable approach, we conduct a detailed comparative analysis of previously
identified best-performing methods. We find that neither the contrastive nor
generative method is superior. Our assessment includes classifying event
sequences, predicting the next event, and evaluating embedding quality. These
results further highlight the potential benefits of combining both methods.
Given the lack of research on hybrid models in this domain, we initially adapt
the baseline model from another domain. However, upon observing its
underperformance, we develop a novel method called the Multimodal-Learning
Event Model (MLEM). MLEM treats contrastive learning and generative modeling as
distinct yet complementary modalities, aligning their embeddings. The results
of our study demonstrate that combining contrastive and generative approaches
into one procedure with MLEM achieves superior performance across multiple
metrics.
| [
{
"created": "Mon, 29 Jan 2024 07:50:28 GMT",
"version": "v1"
},
{
"created": "Tue, 30 Jan 2024 07:54:24 GMT",
"version": "v2"
},
{
"created": "Tue, 18 Jun 2024 09:26:41 GMT",
"version": "v3"
},
{
"created": "Wed, 3 Jul 2024 09:28:50 GMT",
"version": "v4"
}
] | 2024-07-04 | [
[
"Moskvoretskii",
"Viktor",
""
],
[
"Osin",
"Dmitry",
""
],
[
"Shvetsov",
"Egor",
""
],
[
"Udovichenko",
"Igor",
""
],
[
"Zhelnin",
"Maxim",
""
],
[
"Dukhovny",
"Andrey",
""
],
[
"Zhimerikina",
"Anna",
""
],
[
"Burnaev",
"Evgeny",
""
]
] | This study explores the application of self-supervised learning techniques for event sequences. Event sequences are a key modality in various applications such as banking, e-commerce, and healthcare. However, there is limited research on self-supervised learning for event sequences, and methods from other domains like images, texts, and speech may not easily transfer. To determine the most suitable approach, we conduct a detailed comparative analysis of previously identified best-performing methods. We find that neither the contrastive nor generative method is superior. Our assessment includes classifying event sequences, predicting the next event, and evaluating embedding quality. These results further highlight the potential benefits of combining both methods. Given the lack of research on hybrid models in this domain, we initially adapt the baseline model from another domain. However, upon observing its underperformance, we develop a novel method called the Multimodal-Learning Event Model (MLEM). MLEM treats contrastive learning and generative modeling as distinct yet complementary modalities, aligning their embeddings. The results of our study demonstrate that combining contrastive and generative approaches into one procedure with MLEM achieves superior performance across multiple metrics. |
1410.4074 | Sahasranand K. R. | Sahasranand K. R. and Vinod Sharma | Distributed Nonparametric Sequential Spectrum Sensing under
Electromagnetic Interference | 8 pages; 6 figures; Version 2 has the proofs for the theorems.
Version 3 contains a new section on approximation analysis | null | null | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A nonparametric distributed sequential algorithm for quick detection of
spectral holes in a Cognitive Radio setup is proposed. Two or more local nodes
make decisions and inform the fusion centre (FC) over a reporting Multiple
Access Channel (MAC), which then makes the final decision. The local nodes use
energy detection and the FC uses mean detection in the presence of fading,
heavy-tailed electromagnetic interference (EMI) and outliers. The statistics of
the primary signal, channel gain or the EMI is not known. Different
nonparametric sequential algorithms are compared to choose appropriate
algorithms to be used at the local nodes and the FC. Modification of a recently
developed random walk test is selected for the local nodes for energy detection
as well as at the fusion centre for mean detection. It is shown via simulations
and analysis that the nonparametric distributed algorithm developed performs
well in the presence of fading, EMI and is robust to outliers. The algorithm is
iterative in nature making the computation and storage requirements minimal.
| [
{
"created": "Tue, 14 Oct 2014 18:27:53 GMT",
"version": "v1"
},
{
"created": "Wed, 12 Nov 2014 20:30:49 GMT",
"version": "v2"
},
{
"created": "Thu, 30 Apr 2015 13:01:54 GMT",
"version": "v3"
}
] | 2015-05-01 | [
[
"R.",
"Sahasranand K.",
""
],
[
"Sharma",
"Vinod",
""
]
] | A nonparametric distributed sequential algorithm for quick detection of spectral holes in a Cognitive Radio setup is proposed. Two or more local nodes make decisions and inform the fusion centre (FC) over a reporting Multiple Access Channel (MAC), which then makes the final decision. The local nodes use energy detection and the FC uses mean detection in the presence of fading, heavy-tailed electromagnetic interference (EMI) and outliers. The statistics of the primary signal, channel gain or the EMI are not known. Different nonparametric sequential algorithms are compared to choose appropriate algorithms to be used at the local nodes and the FC. A modification of a recently developed random walk test is selected for the local nodes for energy detection as well as at the fusion centre for mean detection. It is shown via simulations and analysis that the nonparametric distributed algorithm developed performs well in the presence of fading, EMI and is robust to outliers. The algorithm is iterative in nature making the computation and storage requirements minimal. |
2206.04153 | Yunyi Zhang | Yunyi Zhang, Fang Guo, Jiaming Shen, Jiawei Han | Unsupervised Key Event Detection from Massive Text Corpora | Accepted to KDD 2022 Research Track | null | 10.1145/3534678.3539395 | null | cs.CL cs.IR | http://creativecommons.org/licenses/by/4.0/ | Automated event detection from news corpora is a crucial task towards mining
fast-evolving structured knowledge. As real-world events have different
granularities, from the top-level themes to key events and then to event
mentions corresponding to concrete actions, there are generally two lines of
research: (1) theme detection identifies from a news corpus major themes (e.g.,
"2019 Hong Kong Protests" vs. "2020 U.S. Presidential Election") that have very
distinct semantics; and (2) action extraction extracts from one document
mention-level actions (e.g., "the police hit the left arm of the protester")
that are too fine-grained for comprehending the event. In this paper, we
propose a new task, key event detection at the intermediate level, aiming to
detect from a news corpus key events (e.g., "HK Airport Protest on Aug.
12-14"), each happening at a particular time/location and focusing on the same
topic. This task can bridge event understanding and structuring and is
inherently challenging because of the thematic and temporal closeness of key
events and the scarcity of labeled data due to the fast-evolving nature of news
articles. To address these challenges, we develop an unsupervised key event
detection framework, EvMine, that (1) extracts temporally frequent peak phrases
using a novel ttf-itf score, (2) merges peak phrases into event-indicative
feature sets by detecting communities from our designed peak phrase graph that
captures document co-occurrences, semantic similarities, and temporal closeness
signals, and (3) iteratively retrieves documents related to each key event by
training a classifier with automatically generated pseudo labels from the
event-indicative feature sets and refining the detected key events using the
retrieved documents. Extensive experiments and case studies show EvMine
outperforms all the baseline methods and its ablations on two real-world news
corpora.
| [
{
"created": "Wed, 8 Jun 2022 20:31:02 GMT",
"version": "v1"
},
{
"created": "Sun, 3 Jul 2022 19:52:08 GMT",
"version": "v2"
}
] | 2022-07-05 | [
[
"Zhang",
"Yunyi",
""
],
[
"Guo",
"Fang",
""
],
[
"Shen",
"Jiaming",
""
],
[
"Han",
"Jiawei",
""
]
] | Automated event detection from news corpora is a crucial task towards mining fast-evolving structured knowledge. As real-world events have different granularities, from the top-level themes to key events and then to event mentions corresponding to concrete actions, there are generally two lines of research: (1) theme detection identifies from a news corpus major themes (e.g., "2019 Hong Kong Protests" vs. "2020 U.S. Presidential Election") that have very distinct semantics; and (2) action extraction extracts from one document mention-level actions (e.g., "the police hit the left arm of the protester") that are too fine-grained for comprehending the event. In this paper, we propose a new task, key event detection at the intermediate level, aiming to detect from a news corpus key events (e.g., "HK Airport Protest on Aug. 12-14"), each happening at a particular time/location and focusing on the same topic. This task can bridge event understanding and structuring and is inherently challenging because of the thematic and temporal closeness of key events and the scarcity of labeled data due to the fast-evolving nature of news articles. To address these challenges, we develop an unsupervised key event detection framework, EvMine, that (1) extracts temporally frequent peak phrases using a novel ttf-itf score, (2) merges peak phrases into event-indicative feature sets by detecting communities from our designed peak phrase graph that captures document co-occurrences, semantic similarities, and temporal closeness signals, and (3) iteratively retrieves documents related to each key event by training a classifier with automatically generated pseudo labels from the event-indicative feature sets and refining the detected key events using the retrieved documents. Extensive experiments and case studies show EvMine outperforms all the baseline methods and its ablations on two real-world news corpora. |
2202.03645 | Kaushik Rangadurai | Kaushik Rangadurai, Yiqun Liu, Siddarth Malreddy, Xiaoyi Liu, Piyush
Maheshwari, Vishwanath Sangale, Fedor Borisyuk | NxtPost: User to Post Recommendations in Facebook Groups | 9 pages | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we present NxtPost, a deployed user-to-post content-based
sequential recommender system for Facebook Groups. Inspired by recent advances
in NLP, we have adapted a Transformer-based model to the domain of sequential
recommendation. We explore causal masked multi-head attention that optimizes
both short and long-term user interests. From a user's past activities
validated by a defined safety process, NxtPost seeks to learn a representation
for the user's dynamic content preference and to predict the next post a user
may be interested in. In contrast to previous Transformer-based methods, we do not
assume that the recommendable posts have a fixed corpus. Accordingly, we use an
external item/token embedding to extend a sequence-based approach to a large
vocabulary. We achieve 49% abs. improvement in offline evaluation. As a result
of NxtPost deployment, 0.6% more users are meeting new people, engaging with
the community, sharing knowledge and getting support. The paper shares our
experience in developing a personalized sequential recommender system, lessons
deploying the model for cold start users, how to deal with freshness, and
tuning strategies to reach higher efficiency in online A/B experiments.
| [
{
"created": "Tue, 8 Feb 2022 04:59:56 GMT",
"version": "v1"
}
] | 2022-02-09 | [
[
"Rangadurai",
"Kaushik",
""
],
[
"Liu",
"Yiqun",
""
],
[
"Malreddy",
"Siddarth",
""
],
[
"Liu",
"Xiaoyi",
""
],
[
"Maheshwari",
"Piyush",
""
],
[
"Sangale",
"Vishwanath",
""
],
[
"Borisyuk",
"Fedor",
""
]
] | In this paper, we present NxtPost, a deployed user-to-post content-based sequential recommender system for Facebook Groups. Inspired by recent advances in NLP, we have adapted a Transformer-based model to the domain of sequential recommendation. We explore causal masked multi-head attention that optimizes both short and long-term user interests. From a user's past activities validated by a defined safety process, NxtPost seeks to learn a representation for the user's dynamic content preference and to predict the next post a user may be interested in. In contrast to previous Transformer-based methods, we do not assume that the recommendable posts have a fixed corpus. Accordingly, we use an external item/token embedding to extend a sequence-based approach to a large vocabulary. We achieve 49% abs. improvement in offline evaluation. As a result of NxtPost deployment, 0.6% more users are meeting new people, engaging with the community, sharing knowledge and getting support. The paper shares our experience in developing a personalized sequential recommender system, lessons deploying the model for cold start users, how to deal with freshness, and tuning strategies to reach higher efficiency in online A/B experiments. |
2106.15797 | Yong Guo | Yong Guo, Yaofo Chen, Mingkui Tan, Kui Jia, Jian Chen, Jingdong Wang | Content-Aware Convolutional Neural Networks | Accepted by Neural Networks | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Convolutional Neural Networks (CNNs) have achieved great success due to the
powerful feature learning ability of convolution layers. Specifically, the
standard convolution traverses the input images/features using a sliding window
scheme to extract features. However, not all the windows contribute equally to
the prediction results of CNNs. In practice, the convolutional operation on
some of the windows (e.g., smooth windows that contain very similar pixels) can
be very redundant and may introduce noise into the computation. Such
redundancy may not only deteriorate the performance but also incur
unnecessary computational cost. Thus, it is important to reduce the
computational redundancy of convolution to improve the performance. To this
end, we propose a Content-aware Convolution (CAC) that automatically detects
the smooth windows and applies a 1x1 convolutional kernel to replace the
original large kernel. In this sense, we are able to effectively avoid the
redundant computation on similar pixels. By replacing the standard convolution
in CNNs with our CAC, the resultant models yield significantly better
performance and lower computational cost than the baseline models with the
standard convolution. More critically, we are able to dynamically allocate
suitable computation resources according to the data smoothness of different
images, making it possible for content-aware computation. Extensive experiments
on various computer vision tasks demonstrate the superiority of our method over
existing methods.
| [
{
"created": "Wed, 30 Jun 2021 03:54:35 GMT",
"version": "v1"
},
{
"created": "Fri, 23 Jul 2021 07:32:54 GMT",
"version": "v2"
}
] | 2021-07-26 | [
[
"Guo",
"Yong",
""
],
[
"Chen",
"Yaofo",
""
],
[
"Tan",
"Mingkui",
""
],
[
"Jia",
"Kui",
""
],
[
"Chen",
"Jian",
""
],
[
"Wang",
"Jingdong",
""
]
] | Convolutional Neural Networks (CNNs) have achieved great success due to the powerful feature learning ability of convolution layers. Specifically, the standard convolution traverses the input images/features using a sliding window scheme to extract features. However, not all the windows contribute equally to the prediction results of CNNs. In practice, the convolutional operation on some of the windows (e.g., smooth windows that contain very similar pixels) can be very redundant and may introduce noise into the computation. Such redundancy may not only deteriorate the performance but also incur unnecessary computational cost. Thus, it is important to reduce the computational redundancy of convolution to improve the performance. To this end, we propose a Content-aware Convolution (CAC) that automatically detects the smooth windows and applies a 1x1 convolutional kernel to replace the original large kernel. In this sense, we are able to effectively avoid the redundant computation on similar pixels. By replacing the standard convolution in CNNs with our CAC, the resultant models yield significantly better performance and lower computational cost than the baseline models with the standard convolution. More critically, we are able to dynamically allocate suitable computation resources according to the data smoothness of different images, making it possible for content-aware computation. Extensive experiments on various computer vision tasks demonstrate the superiority of our method over existing methods. |
2301.01848 | Ilya Vorobyev | Ilya Vorobyev, Alexey Lebedev, Vladimir Lebedev, Christian Deppe | Correcting one error in channels with feedback | null | null | null | null | cs.IT math.CO math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We address the problem of correcting a single error in an arbitrary discrete
memoryless channel with error-free instantaneous feedback. For the case of a
one-time feedback, we propose a method for constructing optimal transmission
strategies. The obtained result allows us to prove that for a binary channel,
two feedbacks are sufficient to transmit the same number of messages as in the
case of complete feedback. We also apply the developed techniques to a binary
asymmetric channel to construct transmission strategies for small lengths.
| [
{
"created": "Wed, 4 Jan 2023 23:16:33 GMT",
"version": "v1"
}
] | 2023-01-06 | [
[
"Vorobyev",
"Ilya",
""
],
[
"Lebedev",
"Alexey",
""
],
[
"Lebedev",
"Vladimir",
""
],
[
"Deppe",
"Christian",
""
]
] | We address the problem of correcting a single error in an arbitrary discrete memoryless channel with error-free instantaneous feedback. For the case of a one-time feedback, we propose a method for constructing optimal transmission strategies. The obtained result allows us to prove that for a binary channel, two feedbacks are sufficient to transmit the same number of messages as in the case of complete feedback. We also apply the developed techniques to a binary asymmetric channel to construct transmission strategies for small lengths. |
2305.01454 | Ningyu He | Shangtong Cao, Ningyu He, Yao Guo, Haoyu Wang | A General Static Binary Rewriting Framework for WebAssembly | null | null | null | null | cs.SE | http://creativecommons.org/licenses/by/4.0/ | Binary rewriting is a widely adopted technique in software analysis.
WebAssembly (Wasm), as an emerging bytecode format, has attracted great
attention from our community. Unfortunately, there is no general-purpose binary
rewriting framework for Wasm, and existing effort on Wasm binary modification
is error-prone and tedious. In this paper, we present BREWasm, the first
general-purpose static binary rewriting framework for Wasm, which addresses
inherent challenges of Wasm rewriting, including the highly complicated binary
structure, strict static syntax verification, and coupling among sections. We
perform an extensive evaluation on diverse Wasm applications to show the
efficiency, correctness and effectiveness of BREWasm. We further show the
promising direction of implementing a diverse set of binary rewriting tasks
based on BREWasm in an effortless and user-friendly manner.
| [
{
"created": "Tue, 2 May 2023 14:34:20 GMT",
"version": "v1"
}
] | 2023-05-03 | [
[
"Cao",
"Shangtong",
""
],
[
"He",
"Ningyu",
""
],
[
"Guo",
"Yao",
""
],
[
"Wang",
"Haoyu",
""
]
] | Binary rewriting is a widely adopted technique in software analysis. WebAssembly (Wasm), as an emerging bytecode format, has attracted great attention from our community. Unfortunately, there is no general-purpose binary rewriting framework for Wasm, and existing effort on Wasm binary modification is error-prone and tedious. In this paper, we present BREWasm, the first general-purpose static binary rewriting framework for Wasm, which addresses inherent challenges of Wasm rewriting, including the highly complicated binary structure, strict static syntax verification, and coupling among sections. We perform an extensive evaluation on diverse Wasm applications to show the efficiency, correctness and effectiveness of BREWasm. We further show the promising direction of implementing a diverse set of binary rewriting tasks based on BREWasm in an effortless and user-friendly manner. |
2403.19856 | Alexandre Rademaker | Valeria de Paiva, Alexandre Rademaker | Towards a Brazilian History Knowledge Graph | null | null | null | null | cs.AI cs.DL | http://creativecommons.org/licenses/by/4.0/ | This short paper describes the first steps in a project to construct a
knowledge graph for Brazilian history based on the Brazilian Dictionary of
Historical Biographies (DHBB) and Wikipedia/Wikidata. We contend that large
repositories of Brazilian-named entities (people, places, organizations, and
political events and movements) would be beneficial for extracting information
from Portuguese texts. We show that many of the terms/entities described in the
DHBB do not have corresponding concepts (or Q items) in Wikidata, the largest
structured database of entities associated with Wikipedia. We describe previous
work on extracting information from the DHBB and outline the steps to construct
a Wikidata-based historical knowledge graph.
| [
{
"created": "Thu, 28 Mar 2024 22:05:32 GMT",
"version": "v1"
}
] | 2024-04-01 | [
[
"de Paiva",
"Valeria",
""
],
[
"Rademaker",
"Alexandre",
""
]
] | This short paper describes the first steps in a project to construct a knowledge graph for Brazilian history based on the Brazilian Dictionary of Historical Biographies (DHBB) and Wikipedia/Wikidata. We contend that large repositories of Brazilian-named entities (people, places, organizations, and political events and movements) would be beneficial for extracting information from Portuguese texts. We show that many of the terms/entities described in the DHBB do not have corresponding concepts (or Q items) in Wikidata, the largest structured database of entities associated with Wikipedia. We describe previous work on extracting information from the DHBB and outline the steps to construct a Wikidata-based historical knowledge graph. |
2011.12338 | Carlo Michaelis | Carlo Michaelis | PeleNet: A Reservoir Computing Framework for Loihi | null | null | null | null | cs.NE | http://creativecommons.org/licenses/by/4.0/ | High-level frameworks for spiking neural networks are a key factor for fast
prototyping and efficient development of complex algorithms. Such frameworks
have emerged in the last years for traditional computers, but programming
neuromorphic hardware is still a challenge. Often low level programming with
knowledge about the hardware of the neuromorphic chip is required. The PeleNet
framework aims to simplify reservoir computing for the neuromorphic hardware
Loihi. It is built on top of the NxSDK from Intel and is written in Python. The
framework manages weight matrices, parameters and probes. In particular, it
provides an automatic and efficient distribution of networks over several cores
and chips. With this, the user is not confronted with technical details and can
concentrate on experiments.
| [
{
"created": "Tue, 24 Nov 2020 19:33:08 GMT",
"version": "v1"
}
] | 2020-11-26 | [
[
"Michaelis",
"Carlo",
""
]
] | High-level frameworks for spiking neural networks are a key factor for fast prototyping and efficient development of complex algorithms. Such frameworks have emerged in the last years for traditional computers, but programming neuromorphic hardware is still a challenge. Often low level programming with knowledge about the hardware of the neuromorphic chip is required. The PeleNet framework aims to simplify reservoir computing for the neuromorphic hardware Loihi. It is built on top of the NxSDK from Intel and is written in Python. The framework manages weight matrices, parameters and probes. In particular, it provides an automatic and efficient distribution of networks over several cores and chips. With this, the user is not confronted with technical details and can concentrate on experiments. |
1601.05647 | Milos Cernak | Milos Cernak, Afsaneh Asaei, Herv\'e Bourlard | On Structured Sparsity of Phonological Posteriors for Linguistic Parsing | null | Speech Communication, Volume 84, November 2016, Pages 36-45 | 10.1016/j.specom.2016.08.004 | Idiap-RR-07-2016 | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The speech signal conveys information on different time scales from short
time scale or segmental, associated to phonological and phonetic information to
long time scale or supra segmental, associated to syllabic and prosodic
information. Linguistic and neurocognitive studies recognize the phonological
classes at segmental level as the essential and invariant representations used
in speech temporal organization. In the context of speech processing, a deep
neural network (DNN) is an effective computational method to infer the
probability of individual phonological classes from a short segment of speech
signal. A vector of all phonological class probabilities is referred to as
phonological posterior. There are only very few classes comprising a short term
speech signal; hence, the phonological posterior is a sparse vector. Although
the phonological posteriors are estimated at segmental level, we claim that
they convey supra-segmental information. Specifically, we demonstrate that
phonological posteriors are indicative of syllabic and prosodic events.
Building on findings from converging linguistic evidence on the gestural model
of Articulatory Phonology as well as the neural basis of speech perception, we
hypothesize that phonological posteriors convey properties of linguistic
classes at multiple time scales, and this information is embedded in their
support (index) of active coefficients. To verify this hypothesis, we obtain a
binary representation of phonological posteriors at the segmental level which
is referred to as first-order sparsity structure; the high-order structures are
obtained by the concatenation of first-order binary vectors. It is then
confirmed that the classification of supra-segmental linguistic events, the
problem known as linguistic parsing, can be achieved with high accuracy using
a simple binary pattern matching of first-order or high-order structures.
| [
{
"created": "Thu, 21 Jan 2016 14:15:41 GMT",
"version": "v1"
},
{
"created": "Wed, 18 May 2016 14:08:02 GMT",
"version": "v2"
},
{
"created": "Tue, 30 Aug 2016 09:23:58 GMT",
"version": "v3"
}
] | 2016-09-16 | [
[
"Cernak",
"Milos",
""
],
[
"Asaei",
"Afsaneh",
""
],
[
"Bourlard",
"Hervé",
""
]
] | The speech signal conveys information on different time scales from short time scale or segmental, associated to phonological and phonetic information to long time scale or supra segmental, associated to syllabic and prosodic information. Linguistic and neurocognitive studies recognize the phonological classes at segmental level as the essential and invariant representations used in speech temporal organization. In the context of speech processing, a deep neural network (DNN) is an effective computational method to infer the probability of individual phonological classes from a short segment of speech signal. A vector of all phonological class probabilities is referred to as phonological posterior. There are only very few classes comprising a short term speech signal; hence, the phonological posterior is a sparse vector. Although the phonological posteriors are estimated at segmental level, we claim that they convey supra-segmental information. Specifically, we demonstrate that phonological posteriors are indicative of syllabic and prosodic events. Building on findings from converging linguistic evidence on the gestural model of Articulatory Phonology as well as the neural basis of speech perception, we hypothesize that phonological posteriors convey properties of linguistic classes at multiple time scales, and this information is embedded in their support (index) of active coefficients. To verify this hypothesis, we obtain a binary representation of phonological posteriors at the segmental level which is referred to as first-order sparsity structure; the high-order structures are obtained by the concatenation of first-order binary vectors. It is then confirmed that the classification of supra-segmental linguistic events, the problem known as linguistic parsing, can be achieved with high accuracy using a simple binary pattern matching of first-order or high-order structures. |
2304.06758 | Vidya Sagar | Vidya Sagar, Ritumoni Sarma | Codes over the non-unital non-commutative ring $E$ using simplicial
complexes | 20 pages. arXiv admin note: substantial text overlap with 2211.15747 | null | null | null | cs.IT math.IT | http://creativecommons.org/licenses/by/4.0/ | There are exactly two non-commutative rings of size $4$, namely, $E = \langle
a, b ~\vert ~ 2a = 2b = 0, a^2 = a, b^2 = b, ab= a, ba = b\rangle$ and its
opposite ring $F$. These rings are non-unital. A subset $D$ of $E^m$ is defined
with the help of simplicial complexes, and utilized to construct linear
left-$E$-codes $C^L_D=\{(v\cdot d)_{d\in D} : v\in E^m\}$ and right-$E$-codes
$C^R_D=\{(d\cdot v)_{d\in D} : v\in E^m\}$. We study their corresponding binary
codes obtained via a Gray map. The weight distributions of all these codes are
computed. We achieve a couple of infinite families of optimal codes with
respect to the Griesmer bound. Ashikhmin-Barg's condition for minimality of a
linear code is satisfied by most of the binary codes we constructed here. All
the binary codes in this article are few-weight codes, and self-orthogonal
codes under certain mild conditions. This is the first attempt to study the
structure of linear codes over non-unital non-commutative rings using
simplicial complexes.
| [
{
"created": "Thu, 13 Apr 2023 18:01:41 GMT",
"version": "v1"
}
] | 2023-09-20 | [
[
"Sagar",
"Vidya",
""
],
[
"Sarma",
"Ritumoni",
""
]
] | There are exactly two non-commutative rings of size $4$, namely, $E = \langle a, b ~\vert ~ 2a = 2b = 0, a^2 = a, b^2 = b, ab= a, ba = b\rangle$ and its opposite ring $F$. These rings are non-unital. A subset $D$ of $E^m$ is defined with the help of simplicial complexes, and utilized to construct linear left-$E$-codes $C^L_D=\{(v\cdot d)_{d\in D} : v\in E^m\}$ and right-$E$-codes $C^R_D=\{(d\cdot v)_{d\in D} : v\in E^m\}$. We study their corresponding binary codes obtained via a Gray map. The weight distributions of all these codes are computed. We achieve a couple of infinite families of optimal codes with respect to the Griesmer bound. Ashikhmin-Barg's condition for minimality of a linear code is satisfied by most of the binary codes we constructed here. All the binary codes in this article are few-weight codes, and self-orthogonal codes under certain mild conditions. This is the first attempt to study the structure of linear codes over non-unital non-commutative rings using simplicial complexes. |
1305.4045 | Hiroki Kuzuno | Hiroki Kuzuno and Satoshi Tonami | Signature Generation for Sensitive Information Leakage in Android
Applications | 8 pages, 4 figures | 29th IEEE International Conference on Data Engineering (ICDE)
Workshops 2013 Mobile Data Analytics (MoDA) | null | null | cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In recent years, there has been rapid growth in mobile devices such as
smartphones, and a number of applications are developed specifically for the
smartphone market. In particular, there are many applications that are ``free''
to the user, but depend on advertisement services for their revenue. Such
applications include an advertisement module - a library provided by the
advertisement service - that can collect a user's sensitive information and
transmit it across the network. Users accept this business model, but in most
cases the applications do not require the user's acknowledgment in order to
transmit sensitive information. Therefore, such applications' behavior becomes
an invasion of privacy. In our analysis of 1,188 Android applications' network
traffic and permissions, 93% of the applications we analyzed connected to
multiple destinations when using the network. 61% required a permission
combination that included both access to sensitive information and use of
networking services. These applications have the potential to leak the user's
sensitive information. In an effort to enable users to control the transmission
of their private information, we propose a system which, using a novel
clustering method based on the HTTP packet destination and content distances,
generates signatures from the clustering result and uses them to detect
sensitive information leakage from Android applications. Our system does not
require an Android framework modification or any special privileges. Thus users
can easily introduce our system to their devices, and manage suspicious
applications' network behavior in a fine grained manner. Our system accurately
detected 94% of the sensitive information leakage from the applications
evaluated and produced only 5% false negative results, and less than 3% false
positive results.
| [
{
"created": "Fri, 17 May 2013 10:43:03 GMT",
"version": "v1"
}
] | 2013-05-20 | [
[
"Kuzuno",
"Hiroki",
""
],
[
"Tonami",
"Satoshi",
""
]
] | In recent years, there has been rapid growth in mobile devices such as smartphones, and a number of applications are developed specifically for the smartphone market. In particular, there are many applications that are ``free'' to the user, but depend on advertisement services for their revenue. Such applications include an advertisement module - a library provided by the advertisement service - that can collect a user's sensitive information and transmit it across the network. Users accept this business model, but in most cases the applications do not require the user's acknowledgment in order to transmit sensitive information. Therefore, such applications' behavior becomes an invasion of privacy. In our analysis of 1,188 Android applications' network traffic and permissions, 93% of the applications we analyzed connected to multiple destinations when using the network. 61% required a permission combination that included both access to sensitive information and use of networking services. These applications have the potential to leak the user's sensitive information. In an effort to enable users to control the transmission of their private information, we propose a system which, using a novel clustering method based on the HTTP packet destination and content distances, generates signatures from the clustering result and uses them to detect sensitive information leakage from Android applications. Our system does not require an Android framework modification or any special privileges. Thus users can easily introduce our system to their devices, and manage suspicious applications' network behavior in a fine grained manner. Our system accurately detected 94% of the sensitive information leakage from the applications evaluated and produced only 5% false negative results, and less than 3% false positive results. |
2108.11942 | Miguel Arana-Catania | M. Arana-Catania, F.A. Van Lier, Rob Procter | Machine Learning for Mediation in Armed Conflicts | 24 pages, 16 figures, 2 tables, to be presented in Data for Policy
conference | null | null | null | cs.CL cs.CY cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Today's conflicts are becoming increasingly complex, fluid and fragmented,
often involving a host of national and international actors with multiple and
often divergent interests. This development poses significant challenges for
conflict mediation, as mediators struggle to make sense of conflict dynamics,
such as the range of conflict parties and the evolution of their political
positions, the distinction between relevant and less relevant actors in peace
making, or the identification of key conflict issues and their interdependence.
International peace efforts appear increasingly ill-equipped to successfully
address these challenges. While technology is being increasingly used in a
range of conflict-related fields, such as conflict prediction or information
gathering, less attention has been given to how technology can contribute to
conflict mediation. This case study is the first to apply state-of-the-art
machine learning technologies to data from an ongoing mediation process. Using
dialogue transcripts from peace negotiations in Yemen, this study shows how
machine-learning tools can effectively support international mediators by
managing knowledge and offering additional conflict analysis tools to assess
complex information. Apart from illustrating the potential of machine learning
tools in conflict mediation, the paper also emphasises the importance of
interdisciplinary and participatory research design for the development of
context-sensitive and targeted tools and to ensure meaningful and responsible
implementation.
| [
{
"created": "Thu, 26 Aug 2021 17:53:37 GMT",
"version": "v1"
}
] | 2021-08-27 | [
[
"Arana-Catania",
"M.",
""
],
[
"Van Lier",
"F. A.",
""
],
[
"Procter",
"Rob",
""
]
] | Today's conflicts are becoming increasingly complex, fluid and fragmented, often involving a host of national and international actors with multiple and often divergent interests. This development poses significant challenges for conflict mediation, as mediators struggle to make sense of conflict dynamics, such as the range of conflict parties and the evolution of their political positions, the distinction between relevant and less relevant actors in peace making, or the identification of key conflict issues and their interdependence. International peace efforts appear increasingly ill-equipped to successfully address these challenges. While technology is being increasingly used in a range of conflict-related fields, such as conflict prediction or information gathering, less attention has been given to how technology can contribute to conflict mediation. This case study is the first to apply state-of-the-art machine learning technologies to data from an ongoing mediation process. Using dialogue transcripts from peace negotiations in Yemen, this study shows how machine-learning tools can effectively support international mediators by managing knowledge and offering additional conflict analysis tools to assess complex information. Apart from illustrating the potential of machine learning tools in conflict mediation, the paper also emphasises the importance of interdisciplinary and participatory research design for the development of context-sensitive and targeted tools and to ensure meaningful and responsible implementation. |
2111.01865 | Dogan Can Cicek | Dogan C. Cicek, Enes Duran, Baturay Saglam, Furkan B. Mutlu, Suleyman
S. Kozat | Off-Policy Correction for Deep Deterministic Policy Gradient Algorithms
via Batch Prioritized Experience Replay | Accepted at The 33rd IEEE International Conference on Tools with
Artificial Intelligence (ICTAI 2021) | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The experience replay mechanism allows agents to use the experiences multiple
times. In prior works, the sampling probability of the transitions was adjusted
according to their importance. Reassigning sampling probabilities for every
transition in the replay buffer after each iteration is highly inefficient.
Therefore, experience replay prioritization algorithms recalculate the
significance of a transition when the corresponding transition is sampled to
gain computational efficiency. However, the importance level of the transitions
changes dynamically as the policy and the value function of the agent are
updated. In addition, experience replay stores transitions generated by
previous policies of the agent, which may significantly deviate from the most
recent policy of the agent. Higher deviation from the most recent policy of the
agent leads to more off-policy updates, which is detrimental for the agent. In
this paper, we develop a novel algorithm, Batch Prioritizing Experience Replay
via KL Divergence (KLPER), which prioritizes batches of transitions rather than
directly prioritizing each transition. Moreover, to reduce the off-policyness
of the updates, our algorithm selects one batch among a certain number of
batches and forces the agent to learn through the batch that is most likely
generated by the most recent policy of the agent. We combine our algorithm with
Deep Deterministic Policy Gradient and Twin Delayed Deep Deterministic Policy
Gradient and evaluate it on various continuous control tasks. KLPER provides
promising improvements for deep deterministic continuous control algorithms in
terms of sample efficiency, final performance, and stability of the policy
during the training.
| [
{
"created": "Tue, 2 Nov 2021 19:51:59 GMT",
"version": "v1"
},
{
"created": "Fri, 12 Nov 2021 15:49:10 GMT",
"version": "v2"
}
] | 2021-11-15 | [
[
"Cicek",
"Dogan C.",
""
],
[
"Duran",
"Enes",
""
],
[
"Saglam",
"Baturay",
""
],
[
"Mutlu",
"Furkan B.",
""
],
[
"Kozat",
"Suleyman S.",
""
]
] | The experience replay mechanism allows agents to use the experiences multiple times. In prior works, the sampling probability of the transitions was adjusted according to their importance. Reassigning sampling probabilities for every transition in the replay buffer after each iteration is highly inefficient. Therefore, experience replay prioritization algorithms recalculate the significance of a transition when the corresponding transition is sampled to gain computational efficiency. However, the importance level of the transitions changes dynamically as the policy and the value function of the agent are updated. In addition, experience replay stores transitions generated by previous policies of the agent, which may significantly deviate from the most recent policy of the agent. Higher deviation from the most recent policy of the agent leads to more off-policy updates, which is detrimental for the agent. In this paper, we develop a novel algorithm, Batch Prioritizing Experience Replay via KL Divergence (KLPER), which prioritizes batches of transitions rather than directly prioritizing each transition. Moreover, to reduce the off-policyness of the updates, our algorithm selects one batch among a certain number of batches and forces the agent to learn through the batch that is most likely generated by the most recent policy of the agent. We combine our algorithm with Deep Deterministic Policy Gradient and Twin Delayed Deep Deterministic Policy Gradient and evaluate it on various continuous control tasks. KLPER provides promising improvements for deep deterministic continuous control algorithms in terms of sample efficiency, final performance, and stability of the policy during the training. |
1404.6620 | Daniel Lucani | Daniel E. Lucani, Morten V. Pedersen, Diego Ruano, Chres W.
S{\o}rensen, Frank H. P. Fitzek, Janus Heide, Olav Geil | Fulcrum Network Codes: A Code for Fluid Allocation of Complexity | 30 pages, 12 figures, Submitted to the IEEE Transactions on
Communications | null | null | null | cs.IT cs.NI math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper proposes Fulcrum network codes, a network coding framework that
achieves three seemingly conflicting objectives: (i) to reduce the coding
coefficient overhead to almost n bits per packet in a generation of n packets;
(ii) to operate the network using only GF(2) operations at intermediate nodes
if necessary, dramatically reducing complexity in the network; (iii) to deliver
an end-to-end performance that is close to that of a high-field network coding
system for high-end receivers while simultaneously catering to low-end
receivers that decode in GF(2). As a consequence of (ii) and (iii), Fulcrum
codes have a unique trait missing so far in the network coding literature: they
provide the network with the flexibility to spread computational complexity
over different devices depending on their current load, network conditions, or
even energy targets in a decentralized way. At the core of our framework lies
the idea of precoding at the sources using an expansion field GF(2^h) to
increase the number of dimensions seen by the network using a linear mapping.
Fulcrum codes can use any high-field linear code for precoding, e.g.,
Reed-Solomon, with the structure of the precode determining some of the key
features of the resulting code. For example, a systematic structure provides
the ability to manage heterogeneous receivers while using the same data stream.
Our analysis shows that the number of additional dimensions created during
precoding controls the trade-off between delay, overhead, and complexity. Our
implementation and measurements show that Fulcrum achieves similar decoding
probability as high field Random Linear Network Coding (RLNC) approaches but
with encoders/decoders that are an order of magnitude faster.
| [
{
"created": "Sat, 26 Apr 2014 07:31:52 GMT",
"version": "v1"
},
{
"created": "Wed, 18 Nov 2015 09:48:53 GMT",
"version": "v2"
}
] | 2015-11-19 | [
[
"Lucani",
"Daniel E.",
""
],
[
"Pedersen",
"Morten V.",
""
],
[
"Ruano",
"Diego",
""
],
[
"Sørensen",
"Chres W.",
""
],
[
"Fitzek",
"Frank H. P.",
""
],
[
"Heide",
"Janus",
""
],
[
"Geil",
"Olav",
""
]
] | This paper proposes Fulcrum network codes, a network coding framework that achieves three seemingly conflicting objectives: (i) to reduce the coding coefficient overhead to almost n bits per packet in a generation of n packets; (ii) to operate the network using only GF(2) operations at intermediate nodes if necessary, dramatically reducing complexity in the network; (iii) to deliver an end-to-end performance that is close to that of a high-field network coding system for high-end receivers while simultaneously catering to low-end receivers that decode in GF(2). As a consequence of (ii) and (iii), Fulcrum codes have a unique trait missing so far in the network coding literature: they provide the network with the flexibility to spread computational complexity over different devices depending on their current load, network conditions, or even energy targets in a decentralized way. At the core of our framework lies the idea of precoding at the sources using an expansion field GF(2^h) to increase the number of dimensions seen by the network using a linear mapping. Fulcrum codes can use any high-field linear code for precoding, e.g., Reed-Solomon, with the structure of the precode determining some of the key features of the resulting code. For example, a systematic structure provides the ability to manage heterogeneous receivers while using the same data stream. Our analysis shows that the number of additional dimensions created during precoding controls the trade-off between delay, overhead, and complexity. Our implementation and measurements show that Fulcrum achieves similar decoding probability as high field Random Linear Network Coding (RLNC) approaches but with encoders/decoders that are an order of magnitude faster. |
1311.6877 | Gaurav Vijayvargiya | Gaurav Vijayvargiya, Sanjay Silakari and Rajeev Pandey | A Survey: Various Techniques of Image Compression | 5 pages | null | null | null | cs.IT cs.MM math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper addresses various image compression techniques. On the basis
of an analysis of these techniques, it presents a survey of existing
research. We analyze different types of existing image compression methods.
Compressing an image is significantly different from compressing binary raw
data, so dedicated techniques are used for image compression. This raises the
question of how an image is compressed and which techniques are used. For this
purpose, two basic classes of methods have been introduced, namely lossless and
lossy image compression. More recently, other techniques have been added to
these basic methods; in some areas, neural networks and genetic algorithms are
used for image compression.
Keywords-Image Compression; Lossless; Lossy; Redundancy; Benefits of
Compression.
| [
{
"created": "Wed, 27 Nov 2013 06:40:45 GMT",
"version": "v1"
}
] | 2013-11-28 | [
[
"Vijayvargiya",
"Gaurav",
""
],
[
"Silakari",
"Sanjay",
""
],
[
"Pandey",
"Rajeev",
""
]
] | This paper addresses various image compression techniques. On the basis of an analysis of these techniques, it presents a survey of existing research. We analyze different types of existing image compression methods. Compressing an image is significantly different from compressing binary raw data, so dedicated techniques are used for image compression. This raises the question of how an image is compressed and which techniques are used. For this purpose, two basic classes of methods have been introduced, namely lossless and lossy image compression. More recently, other techniques have been added to these basic methods; in some areas, neural networks and genetic algorithms are used for image compression. Keywords-Image Compression; Lossless; Lossy; Redundancy; Benefits of Compression. |
1506.01930 | Benjamin Lucien Kaminski | Benjamin Lucien Kaminski, Joost-Pieter Katoen | On the Hardness of Almost-Sure Termination | MFCS 2015. arXiv admin note: text overlap with arXiv:1410.7225 | null | null | null | cs.LO cs.CC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper considers the computational hardness of computing expected
outcomes and deciding (universal) (positive) almost-sure termination of
probabilistic programs. It is shown that computing lower and upper bounds of
expected outcomes is $\Sigma_1^0$- and $\Sigma_2^0$-complete, respectively.
Deciding (universal) almost-sure termination as well as deciding whether the
expected outcome of a program equals a given rational value is shown to be
$\Pi^0_2$-complete. Finally, it is shown that deciding (universal) positive
almost-sure termination is $\Sigma_2^0$-complete ($\Pi_3^0$-complete).
| [
{
"created": "Fri, 5 Jun 2015 14:52:51 GMT",
"version": "v1"
}
] | 2015-06-08 | [
[
"Kaminski",
"Benjamin Lucien",
""
],
[
"Katoen",
"Joost-Pieter",
""
]
] | This paper considers the computational hardness of computing expected outcomes and deciding (universal) (positive) almost-sure termination of probabilistic programs. It is shown that computing lower and upper bounds of expected outcomes is $\Sigma_1^0$- and $\Sigma_2^0$-complete, respectively. Deciding (universal) almost-sure termination as well as deciding whether the expected outcome of a program equals a given rational value is shown to be $\Pi^0_2$-complete. Finally, it is shown that deciding (universal) positive almost-sure termination is $\Sigma_2^0$-complete ($\Pi_3^0$-complete). |
2303.09219 | Qiao Wu | Qiao Wu, Jiaqi Yang, Kun Sun, Chu'ai Zhang, Yanning Zhang, Mathieu
Salzmann | MixCycle: Mixup Assisted Semi-Supervised 3D Single Object Tracking with
Cycle Consistency | Accepted by ICCV23 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | 3D single object tracking (SOT) is an indispensable part of automated
driving. Existing approaches rely heavily on large, densely labeled datasets.
However, annotating point clouds is both costly and time-consuming. Inspired by
the great success of cycle tracking in unsupervised 2D SOT, we introduce the
first semi-supervised approach to 3D SOT. Specifically, we introduce two
cycle-consistency strategies for supervision: 1) Self tracking cycles, which
leverage labels to help the model converge better in the early stages of
training; 2) forward-backward cycles, which strengthen the tracker's robustness
to motion variations and the template noise caused by the template update
strategy. Furthermore, we propose a data augmentation strategy named SOTMixup
to improve the tracker's robustness to point cloud diversity. SOTMixup
generates training samples by sampling points in two point clouds with a mixing
rate and assigns a reasonable loss weight for training according to the mixing
rate. The resulting MixCycle approach generalizes to appearance matching-based
trackers. On the KITTI benchmark, based on the P2B tracker, MixCycle trained
with $\textbf{10\%}$ labels outperforms P2B trained with $\textbf{100\%}$
labels, and achieves a $\textbf{28.4\%}$ precision improvement when using
$\textbf{1\%}$ labels. Our code will be released at
\url{https://github.com/Mumuqiao/MixCycle}.
| [
{
"created": "Thu, 16 Mar 2023 10:48:59 GMT",
"version": "v1"
},
{
"created": "Wed, 16 Aug 2023 14:12:42 GMT",
"version": "v2"
}
] | 2023-08-17 | [
[
"Wu",
"Qiao",
""
],
[
"Yang",
"Jiaqi",
""
],
[
"Sun",
"Kun",
""
],
[
"Zhang",
"Chu'ai",
""
],
[
"Zhang",
"Yanning",
""
],
[
"Salzmann",
"Mathieu",
""
]
] | 3D single object tracking (SOT) is an indispensable part of automated driving. Existing approaches rely heavily on large, densely labeled datasets. However, annotating point clouds is both costly and time-consuming. Inspired by the great success of cycle tracking in unsupervised 2D SOT, we introduce the first semi-supervised approach to 3D SOT. Specifically, we introduce two cycle-consistency strategies for supervision: 1) Self tracking cycles, which leverage labels to help the model converge better in the early stages of training; 2) forward-backward cycles, which strengthen the tracker's robustness to motion variations and the template noise caused by the template update strategy. Furthermore, we propose a data augmentation strategy named SOTMixup to improve the tracker's robustness to point cloud diversity. SOTMixup generates training samples by sampling points in two point clouds with a mixing rate and assigns a reasonable loss weight for training according to the mixing rate. The resulting MixCycle approach generalizes to appearance matching-based trackers. On the KITTI benchmark, based on the P2B tracker, MixCycle trained with $\textbf{10\%}$ labels outperforms P2B trained with $\textbf{100\%}$ labels, and achieves a $\textbf{28.4\%}$ precision improvement when using $\textbf{1\%}$ labels. Our code will be released at \url{https://github.com/Mumuqiao/MixCycle}. |
0811.4257 | Juan Tapiador | Julio C. Hernandez-Castro, Juan M. E. Tapiador, Pedro Peris-Lopez,
Jean-Jacques Quisquater | Cryptanalysis of the SASI Ultralightweight RFID Authentication Protocol
with Modular Rotations | null | null | null | null | cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work we present the first passive attack over the SASI lightweight
authentication protocol with modular rotations. This can be used to fully
recover the secret $ID$ of the RFID tag, which is the value the protocol is
designed to conceal. The attack is described initially for recovering $\lfloor
log_2(96) \rfloor=6$ bits of the secret value $ID$, a result that by itself
allows one to mount traceability attacks on any given tag. However, the proposed
scheme can be extended to obtain any number of bits of the secret $ID$,
provided a sufficiently large number of successful consecutive sessions are
eavesdropped. We also present results on the attack's efficiency, and some
ideas to secure this version of the SASI protocol.
| [
{
"created": "Wed, 26 Nov 2008 09:40:18 GMT",
"version": "v1"
}
] | 2008-11-27 | [
[
"Hernandez-Castro",
"Julio C.",
""
],
[
"Tapiador",
"Juan M. E.",
""
],
[
"Peris-Lopez",
"Pedro",
""
],
[
"Quisquater",
"Jean-Jacques",
""
]
] | In this work we present the first passive attack over the SASI lightweight authentication protocol with modular rotations. This can be used to fully recover the secret $ID$ of the RFID tag, which is the value the protocol is designed to conceal. The attack is described initially for recovering $\lfloor log_2(96) \rfloor=6$ bits of the secret value $ID$, a result that by itself allows one to mount traceability attacks on any given tag. However, the proposed scheme can be extended to obtain any number of bits of the secret $ID$, provided a sufficiently large number of successful consecutive sessions are eavesdropped. We also present results on the attack's efficiency, and some ideas to secure this version of the SASI protocol. |
2306.09104 | Zhanke Zhou | Zhanke Zhou, Chenyu Zhou, Xuan Li, Jiangchao Yao, Quanming Yao, Bo Han | On Strengthening and Defending Graph Reconstruction Attack with Markov
Chain Approximation | Accepted by ICML 2023 | null | null | null | cs.LG cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Although powerful graph neural networks (GNNs) have boosted numerous
real-world applications, the potential privacy risk is still underexplored. To
close this gap, we perform the first comprehensive study of graph
reconstruction attack that aims to reconstruct the adjacency of nodes. We show
that a range of factors in GNNs can lead to the surprising leakage of private
links. Especially by taking GNNs as a Markov chain and attacking GNNs via a
flexible chain approximation, we systematically explore the underlying
principles of graph reconstruction attack, and propose two information
theory-guided mechanisms: (1) the chain-based attack method with adaptive
designs for extracting more private information; (2) the chain-based defense
method that sharply reduces the attack fidelity with moderate accuracy loss.
Such two objectives disclose a critical belief that to recover better in
attack, you must extract more multi-aspect knowledge from the trained GNN;
while to learn safer for defense, you must forget more link-sensitive
information in training GNNs. Empirically, we achieve state-of-the-art results
on six datasets and three common GNNs. The code is publicly available at:
https://github.com/tmlr-group/MC-GRA.
| [
{
"created": "Thu, 15 Jun 2023 13:00:56 GMT",
"version": "v1"
}
] | 2023-06-16 | [
[
"Zhou",
"Zhanke",
""
],
[
"Zhou",
"Chenyu",
""
],
[
"Li",
"Xuan",
""
],
[
"Yao",
"Jiangchao",
""
],
[
"Yao",
"Quanming",
""
],
[
"Han",
"Bo",
""
]
] | Although powerful graph neural networks (GNNs) have boosted numerous real-world applications, the potential privacy risk is still underexplored. To close this gap, we perform the first comprehensive study of graph reconstruction attack that aims to reconstruct the adjacency of nodes. We show that a range of factors in GNNs can lead to the surprising leakage of private links. Especially by taking GNNs as a Markov chain and attacking GNNs via a flexible chain approximation, we systematically explore the underlying principles of graph reconstruction attack, and propose two information theory-guided mechanisms: (1) the chain-based attack method with adaptive designs for extracting more private information; (2) the chain-based defense method that sharply reduces the attack fidelity with moderate accuracy loss. Such two objectives disclose a critical belief that to recover better in attack, you must extract more multi-aspect knowledge from the trained GNN; while to learn safer for defense, you must forget more link-sensitive information in training GNNs. Empirically, we achieve state-of-the-art results on six datasets and three common GNNs. The code is publicly available at: https://github.com/tmlr-group/MC-GRA. |
1805.07252 | Xiaojian Ma | Mingxuan Jing, Xiaojian Ma, Fuchun Sun, Huaping Liu | Learning and Inferring Movement with Deep Generative Model | Mingxuan Jing and Xiaojian Ma contributed equally to this work | null | null | null | cs.LG cs.RO stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Learning and inference movement is a very challenging problem due to its high
dimensionality and dependency to varied environments or tasks. In this paper,
we propose an effective probabilistic method for learning and inference of
basic movements. The motion planning problem is formulated as learning on a
directed graphic model and deep generative model is used to perform learning
and inference from demonstrations. An important characteristic of this method
is that it flexibly incorporates the task descriptors and context information
for long-term planning and it can be combined with dynamic systems for robot
control. The experimental validations on robotic approaching path planning
tasks show the advantages over the base methods with limited training data.
| [
{
"created": "Fri, 18 May 2018 14:50:26 GMT",
"version": "v1"
},
{
"created": "Mon, 29 Oct 2018 01:25:57 GMT",
"version": "v2"
}
] | 2018-10-30 | [
[
"Jing",
"Mingxuan",
""
],
[
"Ma",
"Xiaojian",
""
],
[
"Sun",
"Fuchun",
""
],
[
"Liu",
"Huaping",
""
]
] | Learning and inferring movement is a very challenging problem due to its high dimensionality and its dependence on varied environments or tasks. In this paper, we propose an effective probabilistic method for learning and inference of basic movements. The motion planning problem is formulated as learning on a directed graphical model, and a deep generative model is used to perform learning and inference from demonstrations. An important characteristic of this method is that it flexibly incorporates the task descriptors and context information for long-term planning and it can be combined with dynamic systems for robot control. The experimental validations on robotic approaching path planning tasks show the advantages over the base methods with limited training data. |
1912.11211 | Anna Zaitsev | Mark Ledwich, Anna Zaitsev | Algorithmic Extremism: Examining YouTube's Rabbit Hole of Radicalization | null | null | null | null | cs.SI cs.IR | http://creativecommons.org/licenses/by-nc-sa/4.0/ | The role that YouTube and its behind-the-scenes recommendation algorithm
plays in encouraging online radicalization has been suggested by both
journalists and academics alike. This study directly quantifies these claims by
examining the role that YouTube's algorithm plays in suggesting radicalized
content. After categorizing nearly 800 political channels, we were able to
differentiate between political schemas in order to analyze the algorithm
traffic flows out and between each group. After conducting a detailed analysis
of recommendations received by each channel type, we refute the popular
radicalization claims. To the contrary, these data suggest that YouTube's
recommendation algorithm actively discourages viewers from visiting
radicalizing or extremist content. Instead, the algorithm is shown to favor
mainstream media and cable news content over independent YouTube channels with
slant towards left-leaning or politically neutral channels. Our study thus
suggests that YouTube's recommendation algorithm fails to promote inflammatory
or radicalized content, as previously claimed by several outlets.
| [
{
"created": "Tue, 24 Dec 2019 05:09:01 GMT",
"version": "v1"
}
] | 2019-12-25 | [
[
"Ledwich",
"Mark",
""
],
[
"Zaitsev",
"Anna",
""
]
] | The role that YouTube and its behind-the-scenes recommendation algorithm plays in encouraging online radicalization has been suggested by both journalists and academics alike. This study directly quantifies these claims by examining the role that YouTube's algorithm plays in suggesting radicalized content. After categorizing nearly 800 political channels, we were able to differentiate between political schemas in order to analyze the algorithm traffic flows out and between each group. After conducting a detailed analysis of recommendations received by each channel type, we refute the popular radicalization claims. To the contrary, these data suggest that YouTube's recommendation algorithm actively discourages viewers from visiting radicalizing or extremist content. Instead, the algorithm is shown to favor mainstream media and cable news content over independent YouTube channels with slant towards left-leaning or politically neutral channels. Our study thus suggests that YouTube's recommendation algorithm fails to promote inflammatory or radicalized content, as previously claimed by several outlets. |
2407.02329 | Dewei Zhou | Dewei Zhou, You Li, Fan Ma, Zongxin Yang, Yi Yang | MIGC++: Advanced Multi-Instance Generation Controller for Image
Synthesis | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | We introduce the Multi-Instance Generation (MIG) task, which focuses on
generating multiple instances within a single image, each accurately placed at
predefined positions with attributes such as category, color, and shape,
strictly following user specifications. MIG faces three main challenges:
avoiding attribute leakage between instances, supporting diverse instance
descriptions, and maintaining consistency in iterative generation. To address
attribute leakage, we propose the Multi-Instance Generation Controller (MIGC).
MIGC generates multiple instances through a divide-and-conquer strategy,
breaking down multi-instance shading into single-instance tasks with singular
attributes, later integrated. To provide more types of instance descriptions,
we developed MIGC++. MIGC++ allows attribute control through text \& images and
position control through boxes \& masks. Lastly, we introduced the
Consistent-MIG algorithm to enhance the iterative MIG ability of MIGC and
MIGC++. This algorithm ensures consistency in unmodified regions during the
addition, deletion, or modification of instances, and preserves the identity of
instances when their attributes are changed. We introduce the COCO-MIG and
Multimodal-MIG benchmarks to evaluate these methods. Extensive experiments on
these benchmarks, along with the COCO-Position benchmark and DrawBench,
demonstrate that our methods substantially outperform existing techniques,
maintaining precise control over aspects including position, attribute, and
quantity. Project page: https://github.com/limuloo/MIGC.
| [
{
"created": "Tue, 2 Jul 2024 14:59:37 GMT",
"version": "v1"
}
] | 2024-07-03 | [
[
"Zhou",
"Dewei",
""
],
[
"Li",
"You",
""
],
[
"Ma",
"Fan",
""
],
[
"Yang",
"Zongxin",
""
],
[
"Yang",
"Yi",
""
]
] | We introduce the Multi-Instance Generation (MIG) task, which focuses on generating multiple instances within a single image, each accurately placed at predefined positions with attributes such as category, color, and shape, strictly following user specifications. MIG faces three main challenges: avoiding attribute leakage between instances, supporting diverse instance descriptions, and maintaining consistency in iterative generation. To address attribute leakage, we propose the Multi-Instance Generation Controller (MIGC). MIGC generates multiple instances through a divide-and-conquer strategy, breaking down multi-instance shading into single-instance tasks with singular attributes, later integrated. To provide more types of instance descriptions, we developed MIGC++. MIGC++ allows attribute control through text \& images and position control through boxes \& masks. Lastly, we introduced the Consistent-MIG algorithm to enhance the iterative MIG ability of MIGC and MIGC++. This algorithm ensures consistency in unmodified regions during the addition, deletion, or modification of instances, and preserves the identity of instances when their attributes are changed. We introduce the COCO-MIG and Multimodal-MIG benchmarks to evaluate these methods. Extensive experiments on these benchmarks, along with the COCO-Position benchmark and DrawBench, demonstrate that our methods substantially outperform existing techniques, maintaining precise control over aspects including position, attribute, and quantity. Project page: https://github.com/limuloo/MIGC. |
1712.10232 | Vikram Krishnamurthy | Vikram Krishnamurthy and Yan Duan | Dependence Structure Analysis Of Meta-level Metrics in YouTube Videos: A
Vine Copula Approach | null | null | null | null | cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper uses vine copula to analyze the multivariate statistical
dependence in a massive YouTube dataset consisting of 6 million videos over 25
thousand channels. Specifically we study the statistical dependency of 7
YouTube meta-level metrics: view count, number of likes, number of comments,
length of video title, number of subscribers, click rates, and average
percentage watching. Dependency parameters such as the Kendall's tau and tail
dependence coefficients are computed to evaluate the pair-wise dependence of
these meta-level metrics. The vine copula model yields several interesting
dependency structures. We show that view count and number of likes are in the
central position of the dependence structure. Conditioned on these two metrics,
the other five meta-level metrics are virtually independent of each other.
Also, Sports, Gaming, Fashion, Comedy videos have similar dependence structure
to each other, while the News category exhibits a strong tail dependence. We
also study Granger causality effects and upload dynamics and their impact on
view count. Our findings provide a useful understanding of user engagement in
YouTube.
| [
{
"created": "Fri, 29 Dec 2017 13:56:16 GMT",
"version": "v1"
}
] | 2018-01-01 | [
[
"Krishnamurthy",
"Vikram",
""
],
[
"Duan",
"Yan",
""
]
] | This paper uses vine copula to analyze the multivariate statistical dependence in a massive YouTube dataset consisting of 6 million videos over 25 thousand channels. Specifically we study the statistical dependency of 7 YouTube meta-level metrics: view count, number of likes, number of comments, length of video title, number of subscribers, click rates, and average percentage watching. Dependency parameters such as the Kendall's tau and tail dependence coefficients are computed to evaluate the pair-wise dependence of these meta-level metrics. The vine copula model yields several interesting dependency structures. We show that view count and number of likes are in the central position of the dependence structure. Conditioned on these two metrics, the other five meta-level metrics are virtually independent of each other. Also, Sports, Gaming, Fashion, Comedy videos have similar dependence structure to each other, while the News category exhibits a strong tail dependence. We also study Granger causality effects and upload dynamics and their impact on view count. Our findings provide a useful understanding of user engagement in YouTube. |
2205.08910 | Mustapha Hamad | Mustapha Hamad, Mich\`ele Wigger, Mireille Sarkiss | Strong Converses using Change of Measure and Asymptotic Markov Chains | null | null | null | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The main contribution of this paper is a strong converse result for $K$-hop
distributed hypothesis testing against independence with multiple
(intermediate) decision centers under a Markov condition. Our result shows that
the set of type-II error exponents that can simultaneously be achieved at all
the terminals does not depend on the maximum permissible type-I error
probabilities. Our strong converse proof is based on a change of measure
argument and on the asymptotic proof of specific Markov chains. This proof
method can also be used for other converse proofs, and is appealing because it
does not require resorting to variational characterizations or blowing-up
methods as in previous related proofs.
| [
{
"created": "Wed, 18 May 2022 13:04:28 GMT",
"version": "v1"
}
] | 2022-05-19 | [
[
"Hamad",
"Mustapha",
""
],
[
"Wigger",
"Michèle",
""
],
[
"Sarkiss",
"Mireille",
""
]
] | The main contribution of this paper is a strong converse result for $K$-hop distributed hypothesis testing against independence with multiple (intermediate) decision centers under a Markov condition. Our result shows that the set of type-II error exponents that can simultaneously be achieved at all the terminals does not depend on the maximum permissible type-I error probabilities. Our strong converse proof is based on a change of measure argument and on the asymptotic proof of specific Markov chains. This proof method can also be used for other converse proofs, and is appealing because it does not require resorting to variational characterizations or blowing-up methods as in previous related proofs. |
1601.06426 | Kousha Kalantari | Kousha Kalantari, Lalitha Sankar, Anand Sarwate | Robust Privacy-Utility Tradeoffs under Differential Privacy and Hamming
Distortion | Extended abstract of ISIT 2016 submission | K. Kalantari, L. Sankar and A. D. Sarwate, "Robust Privacy-Utility
Tradeoffs Under Differential Privacy and Hamming Distortion," in IEEE
Transactions on Information Forensics and Security, vol. 13, no. 11, pp.
2816-2830, Nov. 2018 | 10.1109/TIFS.2018.2831619 | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A privacy-utility tradeoff is developed for an arbitrary set of
finite-alphabet source distributions. Privacy is quantified using differential
privacy (DP), and utility is quantified using expected Hamming distortion
maximized over the set of distributions. The family of source distribution sets
(source sets) is categorized into three classes, based on different levels of
prior knowledge they capture. For source sets whose convex hull includes the
uniform distribution, symmetric DP mechanisms are optimal. For source sets
whose probability values have a fixed monotonic ordering, asymmetric DP
mechanisms are optimal. For all other source sets, general upper and lower
bounds on the optimal privacy leakage are developed and a necessary and
sufficient condition for tightness is established. Differentially private
leakage is an upper bound on mutual information (MI) leakage: the two criteria
are compared analytically and numerically to illustrate the effect of adopting
a stronger privacy criterion.
| [
{
"created": "Sun, 24 Jan 2016 20:12:03 GMT",
"version": "v1"
},
{
"created": "Thu, 17 Aug 2017 23:58:12 GMT",
"version": "v2"
},
{
"created": "Wed, 1 Aug 2018 05:19:27 GMT",
"version": "v3"
}
] | 2018-08-02 | [
[
"Kalantari",
"Kousha",
""
],
[
"Sankar",
"Lalitha",
""
],
[
"Sarwate",
"Anand",
""
]
] | A privacy-utility tradeoff is developed for an arbitrary set of finite-alphabet source distributions. Privacy is quantified using differential privacy (DP), and utility is quantified using expected Hamming distortion maximized over the set of distributions. The family of source distribution sets (source sets) is categorized into three classes, based on different levels of prior knowledge they capture. For source sets whose convex hull includes the uniform distribution, symmetric DP mechanisms are optimal. For source sets whose probability values have a fixed monotonic ordering, asymmetric DP mechanisms are optimal. For all other source sets, general upper and lower bounds on the optimal privacy leakage are developed and a necessary and sufficient condition for tightness is established. Differentially private leakage is an upper bound on mutual information (MI) leakage: the two criteria are compared analytically and numerically to illustrate the effect of adopting a stronger privacy criterion. |
2404.12208 | Yuying Man | Yuying Man, Nian Li, Zhen Liu, Xiangyong Zeng | The Explicit values of the UBCT, the LBCT and the DBCT of the inverse
function | This manuscript was submitted to Finite Fields and Their Application
on April 8, 2024. arXiv admin note: text overlap with arXiv:2309.01881 | null | null | null | cs.CR cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Substitution boxes (S-boxes) play a significant role in ensuring the
resistance of block ciphers against various attacks. The Upper Boomerang
Connectivity Table (UBCT), the Lower Boomerang Connectivity Table (LBCT) and
the Double Boomerang Connectivity Table (DBCT) of a given S-box are crucial
tools to analyze its security concerning specific attacks. However, there are
currently no related results for this research. The inverse function is crucial
for constructing S-boxes of block ciphers with good cryptographic properties in
symmetric cryptography. Therefore, extensive research has been conducted on the
inverse function, exploring various properties related to standard attacks.
Thanks to the recent advancements in boomerang cryptanalysis, particularly the
introduction of concepts such as UBCT, LBCT, and DBCT, this paper aims to
further investigate the properties of the inverse function $F(x)=x^{2^n-2}$
over $\gf_{2^n}$ for arbitrary $n$. As a consequence, by carrying out certain
finer manipulations of solving specific equations over $\gf_{2^n}$, we give all
entries of the UBCT, LBCT of $F(x)$ over $\gf_{2^n}$ for arbitrary $n$.
Besides, based on the results of the UBCT and LBCT for the inverse function, we
determine that $F(x)$ is hard when $n$ is odd. Furthermore, we completely
compute all entries of the DBCT of $F(x)$ over $\gf_{2^n}$ for arbitrary $n$.
Additionally, we provide the precise number of elements with a given entry by
means of the values of some Kloosterman sums. Further, we determine the double
boomerang uniformity of $F(x)$ over $\gf_{2^n}$ for arbitrary $n$. Our in-depth
analysis of the DBCT of $F(x)$ contributes to a better evaluation of the
S-box's resistance against boomerang attacks.
| [
{
"created": "Thu, 18 Apr 2024 14:13:40 GMT",
"version": "v1"
}
] | 2024-04-19 | [
[
"Man",
"Yuying",
""
],
[
"Li",
"Nian",
""
],
[
"Liu",
"Zhen",
""
],
[
"Zeng",
"Xiangyong",
""
]
] | Substitution boxes (S-boxes) play a significant role in ensuring the resistance of block ciphers against various attacks. The Upper Boomerang Connectivity Table (UBCT), the Lower Boomerang Connectivity Table (LBCT) and the Double Boomerang Connectivity Table (DBCT) of a given S-box are crucial tools to analyze its security concerning specific attacks. However, there are currently no related results for this research. The inverse function is crucial for constructing S-boxes of block ciphers with good cryptographic properties in symmetric cryptography. Therefore, extensive research has been conducted on the inverse function, exploring various properties related to standard attacks. Thanks to the recent advancements in boomerang cryptanalysis, particularly the introduction of concepts such as UBCT, LBCT, and DBCT, this paper aims to further investigate the properties of the inverse function $F(x)=x^{2^n-2}$ over $\gf_{2^n}$ for arbitrary $n$. As a consequence, by carrying out certain finer manipulations of solving specific equations over $\gf_{2^n}$, we give all entries of the UBCT, LBCT of $F(x)$ over $\gf_{2^n}$ for arbitrary $n$. Besides, based on the results of the UBCT and LBCT for the inverse function, we determine that $F(x)$ is hard when $n$ is odd. Furthermore, we completely compute all entries of the DBCT of $F(x)$ over $\gf_{2^n}$ for arbitrary $n$. Additionally, we provide the precise number of elements with a given entry by means of the values of some Kloosterman sums. Further, we determine the double boomerang uniformity of $F(x)$ over $\gf_{2^n}$ for arbitrary $n$. Our in-depth analysis of the DBCT of $F(x)$ contributes to a better evaluation of the S-box's resistance against boomerang attacks. |
2402.16815 | Alexandre Yip Gon\c{c}alves Dias | Alexandre Yip Gon\c{c}alves Dias, Marcelo Kn\"orich Zuffo | 2+2D Texture for Full Positive Parallax Effect | null | null | null | null | cs.GR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The representation of parallax on virtual environment is still a problem to
be studied. Common algorithms, such as Bump Mapping, Parallax Mapping and
Displacement Mapping, treat this problem for small disparities between a real
object and a simplified model. This work introduces a new texture structure
and one possible rendering algorithm able to display parallax for large
disparities; it is an approach based on the four-dimensional representation of
the Light Field and was designed for positive parallax and for displaying the
surfaces on the inside of our simplified model. These conditions are imposed to
allow the free movement of an observer; if its movement is restricted, these
conditions may be loosened. It is a high-storage, low-process approach suitable
for use in real-time systems. As an example we develop a scene with several
objects and simplify them by a unique sphere that encloses them all; our
system was able to run this scene at about 180 fps.
| [
{
"created": "Mon, 26 Feb 2024 18:39:19 GMT",
"version": "v1"
}
] | 2024-02-27 | [
[
"Dias",
"Alexandre Yip Gonçalves",
""
],
[
"Zuffo",
"Marcelo Knörich",
""
]
] | The representation of parallax on virtual environment is still a problem to be studied. Common algorithms, such as Bump Mapping, Parallax Mapping and Displacement Mapping, treat this problem for small disparities between a real object and a simplified model. This work introduces a new texture structure and one possible rendering algorithm able to display parallax for large disparities; it is an approach based on the four-dimensional representation of the Light Field and was designed for positive parallax and for displaying the surfaces on the inside of our simplified model. These conditions are imposed to allow the free movement of an observer; if its movement is restricted, these conditions may be loosened. It is a high-storage, low-process approach suitable for use in real-time systems. As an example we develop a scene with several objects and simplify them by a unique sphere that encloses them all; our system was able to run this scene at about 180 fps. |
1309.7817 | Yeon-geun Lim | Yeon-Geun Lim, Chan-Byoung Chae, Giuseppe Caire | Performance Analysis of Massive MIMO for Cell-Boundary Users | accepted at IEEE Transaction on Wireless Communication | null | 10.1109/TWC.2015.2460751 | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we consider massive multiple-input multiple-output (MIMO)
systems for both downlink and uplink scenarios, where three radio units (RUs)
connected via one digital unit (DU) support multiple user equipments (UEs) at
the cell-boundary through the same radio resource, i.e., the same
time-frequency slot. For downlink transmitter options, the study considers
zero-forcing (ZF) and maximum ratio transmission (MRT), while for uplink
receiver options it considers ZF and maximum ratio combining (MRC). For the sum
rate of each of these, we derive simple closed-form formulas. In the simple but
practically relevant case where uniform power is allocated to all downlink data
streams, we observe that, for the downlink, vector normalization is better for
ZF while matrix normalization is better for MRT. For a given antenna and user
configuration, we also derive analytically the signal-to-noise-ratio (SNR)
level below which MRC should be used instead of ZF. Numerical simulations
confirm our analytical results.
| [
{
"created": "Mon, 30 Sep 2013 12:19:13 GMT",
"version": "v1"
},
{
"created": "Fri, 10 Jul 2015 19:37:45 GMT",
"version": "v2"
}
] | 2016-11-17 | [
[
"Lim",
"Yeon-Geun",
""
],
[
"Chae",
"Chan-Byoung",
""
],
[
"Caire",
"Giuseppe",
""
]
] | In this paper, we consider massive multiple-input multiple-output (MIMO) systems for both downlink and uplink scenarios, where three radio units (RUs) connected via one digital unit (DU) support multiple user equipments (UEs) at the cell-boundary through the same radio resource, i.e., the same time-frequency slot. For downlink transmitter options, the study considers zero-forcing (ZF) and maximum ratio transmission (MRT), while for uplink receiver options it considers ZF and maximum ratio combining (MRC). For the sum rate of each of these, we derive simple closed-form formulas. In the simple but practically relevant case where uniform power is allocated to all downlink data streams, we observe that, for the downlink, vector normalization is better for ZF while matrix normalization is better for MRT. For a given antenna and user configuration, we also derive analytically the signal-to-noise-ratio (SNR) level below which MRC should be used instead of ZF. Numerical simulations confirm our analytical results. |
2407.08108 | Hossein Entezari Zarch | Hossein Entezari Zarch, Abdulla Alshabanah, Chaoyi Jiang and Murali
Annavaram | CADC: Encoding User-Item Interactions for Compressing Recommendation
Model Training Data | null | null | null | null | cs.IR cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | Deep learning recommendation models (DLRMs) are at the heart of the current
e-commerce industry. However, the amount of training data used to train these
large models is growing exponentially, leading to substantial training hurdles.
The training dataset contains two primary types of information: content-based
information (features of users and items) and collaborative information
(interactions between users and items). One approach to reduce the training
dataset is to remove user-item interactions. But that significantly diminishes
collaborative information, which is crucial for maintaining accuracy due to its
inclusion of interaction histories. This loss profoundly impacts DLRM
performance.
This paper makes an important observation that if one can capture the
user-item interaction history to enrich the user and item embeddings, then the
interaction history can be compressed without losing model accuracy. Thus, this
work, Collaborative Aware Data Compression (CADC), takes a two-step approach to
training dataset compression. In the first step, we use matrix factorization of
the user-item interaction matrix to create a novel embedding representation for
both the users and items. Once the user and item embeddings are enriched by the
interaction history information, the approach then applies uniform random
sampling of the training dataset to drastically reduce the training dataset
size while minimizing model accuracy drop. The source code of CADC is available
at
\href{https://anonymous.4open.science/r/DSS-RM-8C1D/README.md}{https://anonymous.4open.science/r/DSS-RM-8C1D/README.md}.
| [
{
"created": "Thu, 11 Jul 2024 00:54:56 GMT",
"version": "v1"
},
{
"created": "Wed, 24 Jul 2024 03:37:17 GMT",
"version": "v2"
}
] | 2024-07-25 | [
[
"Zarch",
"Hossein Entezari",
""
],
[
"Alshabanah",
"Abdulla",
""
],
[
"Jiang",
"Chaoyi",
""
],
[
"Annavaram",
"Murali",
""
]
] | Deep learning recommendation models (DLRMs) are at the heart of the current e-commerce industry. However, the amount of training data used to train these large models is growing exponentially, leading to substantial training hurdles. The training dataset contains two primary types of information: content-based information (features of users and items) and collaborative information (interactions between users and items). One approach to reduce the training dataset is to remove user-item interactions. But that significantly diminishes collaborative information, which is crucial for maintaining accuracy due to its inclusion of interaction histories. This loss profoundly impacts DLRM performance. This paper makes an important observation that if one can capture the user-item interaction history to enrich the user and item embeddings, then the interaction history can be compressed without losing model accuracy. Thus, this work, Collaborative Aware Data Compression (CADC), takes a two-step approach to training dataset compression. In the first step, we use matrix factorization of the user-item interaction matrix to create a novel embedding representation for both the users and items. Once the user and item embeddings are enriched by the interaction history information the approach then applies uniform random sampling of the training dataset to drastically reduce the training dataset size while minimizing model accuracy drop. The source code of CADC is available at \href{https://anonymous.4open.science/r/DSS-RM-8C1D/README.md}{https://anonymous.4open.science/r/DSS-RM-8C1D/README.md}. |
1509.02289 | Bernhard Rumpe | Carsten Kolassa, Holger Rendel, Bernhard Rumpe | Evaluation of Variability Concepts for Simulink in the Automotive Domain | 10 pages, 7 figures, 6 tables, Proceedings of 48th Hawaii
International Conference on System Sciences (HICSS), pp. 5373-5382, Kauai,
Hawaii, USA, IEEE Computer Society, 2015 | Proceedings of 48th Hawaii International Conference on System
Sciences (HICSS), pp. 5373-5382, Kauai, Hawaii, USA, IEEE Computer Society,
2015 | 10.1109/HICSS.2015.632 | null | cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Modeling variability in Matlab/Simulink becomes more and more important. We
took the two variability modeling concepts already included in Matlab/Simulink
and our own one and evaluated them to find out which one is suited best for
modeling variability in the automotive domain. We conducted a controlled
experiment with developers at Volkswagen AG to decide which concept is
preferred by developers and if their preference aligns with measurable
performance factors. We found out that all existing concepts are viable
approaches and that the delta approach is both the preferred concept as well as
the objectively most efficient one, which makes Delta-Simulink a good solution
to model variability in the automotive domain.
| [
{
"created": "Tue, 8 Sep 2015 09:13:53 GMT",
"version": "v1"
}
] | 2016-11-18 | [
[
"Kolassa",
"Carsten",
""
],
[
"Rendel",
"Holger",
""
],
[
"Rumpe",
"Bernhard",
""
]
] | Modeling variability in Matlab/Simulink becomes more and more important. We took the two variability modeling concepts already included in Matlab/Simulink and our own one and evaluated them to find out which one is suited best for modeling variability in the automotive domain. We conducted a controlled experiment with developers at Volkswagen AG to decide which concept is preferred by developers and if their preference aligns with measurable performance factors. We found out that all existing concepts are viable approaches and that the delta approach is both the preferred concept as well as the objectively most efficient one, which makes Delta-Simulink a good solution to model variability in the automotive domain. |
2406.10989 | Abbas Heydarnoori | Mojtaba Mostafavi Ghahfarokhi, Alireza Asadi, Arash Asgari, Bardia
Mohammadi, Masih Beigi Rizi, Abbas Heydarnoori | Predicting the Understandability of Computational Notebooks through Code
Metrics Analysis | null | null | null | null | cs.SE cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Computational notebooks have become the primary coding environment for data
scientists. However, research on their code quality is still emerging, and the
code shared is often of poor quality. Given the importance of maintenance and
reusability, understanding the metrics that affect notebook code
comprehensibility is crucial. Code understandability, a qualitative variable,
is closely tied to user opinions. Traditional approaches to measuring it either
use limited questionnaires to review a few code pieces or rely on metadata such
as likes and votes in software repositories. Our approach enhances the
measurement of Jupyter notebook understandability by leveraging user comments
related to code understandability. As a case study, we used 542,051 Kaggle
Jupyter notebooks from our previous research, named DistilKaggle. We employed a
fine-tuned DistilBERT transformer to identify user comments associated with
code understandability. We established a criterion called User Opinion Code
Understandability (UOCU), which considers the number of relevant comments,
upvotes on those comments, total notebook views, and total notebook upvotes.
UOCU proved to be more effective than previous methods. Furthermore, we trained
machine learning models to predict notebook code understandability based solely
on their metrics. We collected 34 metrics for 132,723 final notebooks as
features in our dataset, using UOCU as the label. Our predictive model, using
the Random Forest classifier, achieved 89% accuracy in predicting the
understandability levels of computational notebooks.
| [
{
"created": "Sun, 16 Jun 2024 15:58:40 GMT",
"version": "v1"
}
] | 2024-06-18 | [
[
"Ghahfarokhi",
"Mojtaba Mostafavi",
""
],
[
"Asadi",
"Alireza",
""
],
[
"Asgari",
"Arash",
""
],
[
"Mohammadi",
"Bardia",
""
],
[
"Rizi",
"Masih Beigi",
""
],
[
"Heydarnoori",
"Abbas",
""
]
] | Computational notebooks have become the primary coding environment for data scientists. However, research on their code quality is still emerging, and the code shared is often of poor quality. Given the importance of maintenance and reusability, understanding the metrics that affect notebook code comprehensibility is crucial. Code understandability, a qualitative variable, is closely tied to user opinions. Traditional approaches to measuring it either use limited questionnaires to review a few code pieces or rely on metadata such as likes and votes in software repositories. Our approach enhances the measurement of Jupyter notebook understandability by leveraging user comments related to code understandability. As a case study, we used 542,051 Kaggle Jupyter notebooks from our previous research, named DistilKaggle. We employed a fine-tuned DistilBERT transformer to identify user comments associated with code understandability. We established a criterion called User Opinion Code Understandability (UOCU), which considers the number of relevant comments, upvotes on those comments, total notebook views, and total notebook upvotes. UOCU proved to be more effective than previous methods. Furthermore, we trained machine learning models to predict notebook code understandability based solely on their metrics. We collected 34 metrics for 132,723 final notebooks as features in our dataset, using UOCU as the label. Our predictive model, using the Random Forest classifier, achieved 89% accuracy in predicting the understandability levels of computational notebooks. |
2312.03911 | Pablo Lemos | Pablo Lemos, Nikolay Malkin, Will Handley, Yoshua Bengio, Yashar
Hezaveh, Laurence Perreault-Levasseur | Improving Gradient-guided Nested Sampling for Posterior Inference | 10 pages, 5 figures. Code available at
https://github.com/Pablo-Lemos/GGNS | null | null | null | cs.LG stat.CO stat.ME stat.ML | http://creativecommons.org/licenses/by/4.0/ | We present a performant, general-purpose gradient-guided nested sampling
algorithm, ${\tt GGNS}$, combining the state of the art in differentiable
programming, Hamiltonian slice sampling, clustering, mode separation, dynamic
nested sampling, and parallelization. This unique combination allows ${\tt
GGNS}$ to scale well with dimensionality and perform competitively on a variety
of synthetic and real-world problems. We also show the potential of combining
nested sampling with generative flow networks to obtain large amounts of
high-quality samples from the posterior distribution. This combination leads to
faster mode discovery and more accurate estimates of the partition function.
| [
{
"created": "Wed, 6 Dec 2023 21:09:18 GMT",
"version": "v1"
}
] | 2023-12-08 | [
[
"Lemos",
"Pablo",
""
],
[
"Malkin",
"Nikolay",
""
],
[
"Handley",
"Will",
""
],
[
"Bengio",
"Yoshua",
""
],
[
"Hezaveh",
"Yashar",
""
],
[
"Perreault-Levasseur",
"Laurence",
""
]
] | We present a performant, general-purpose gradient-guided nested sampling algorithm, ${\tt GGNS}$, combining the state of the art in differentiable programming, Hamiltonian slice sampling, clustering, mode separation, dynamic nested sampling, and parallelization. This unique combination allows ${\tt GGNS}$ to scale well with dimensionality and perform competitively on a variety of synthetic and real-world problems. We also show the potential of combining nested sampling with generative flow networks to obtain large amounts of high-quality samples from the posterior distribution. This combination leads to faster mode discovery and more accurate estimates of the partition function. |
2102.07119 | Stefan Hochwarter | Stefan Hochwarter | Sociotechnical Challenges of eHealth Technology for Patient
Self-Management: A Systematic Review | Volume 5 HEALTHINF | Proceedings of the 14th International Joint Conference on
Biomedical Engineering Systems and Technologies (2021) | null | null | cs.CY | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Ageing of society and increase of time spent with chronic conditions
challenge the traditional long-term care model. Assistive technology and
eHealth are seen to play an important role when addressing these challenges.
One prominent example is patient self-management systems. These systems not
only transform the way patients with chronic conditions interact with the
healthcare system, but also change work practices of care providers. This
literature review addresses sociotechnical challenges of eHealth technologies
with a strong collaborative component. As a result, four themes are identified
and discussed.
| [
{
"created": "Sun, 14 Feb 2021 10:17:26 GMT",
"version": "v1"
}
] | 2021-02-16 | [
[
"Hochwarter",
"Stefan",
""
]
] | Ageing of society and increase of time spent with chronic conditions challenge the traditional long-term care model. Assistive technology and eHealth are seen to play an important role when addressing these challenges. One prominent example is patient self-management systems. These systems not only transform the way patients with chronic conditions interact with the healthcare system, but also change work practices of care providers. This literature review addresses sociotechnical challenges of eHealth technologies with a strong collaborative component. As a result, four themes are identified and discussed. |
2104.10213 | George Papakostas Prof. | N.-I. Galanis, P. Vafiadis, K.-G. Mirzaev, G.A. Papakostas | Machine Learning Meets Natural Language Processing -- The story so far | 13 pages, 5 figures | null | null | null | cs.CL cs.AI cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Natural Language Processing (NLP) has evolved significantly over the last
decade. This paper highlights the most important milestones of this period
while trying to pinpoint the contribution of each individual model and
algorithm to the overall progress. Furthermore, it focuses on issues still
remaining to be solved, emphasizing the groundbreaking proposals of
Transformers, BERT, and all the similar attention-based models.
| [
{
"created": "Sat, 27 Mar 2021 16:41:34 GMT",
"version": "v1"
}
] | 2021-04-22 | [
[
"Galanis",
"N. -I.",
""
],
[
"Vafiadis",
"P.",
""
],
[
"Mirzaev",
"K. -G.",
""
],
[
"Papakostas",
"G. A.",
""
]
] | Natural Language Processing (NLP) has evolved significantly over the last decade. This paper highlights the most important milestones of this period while trying to pinpoint the contribution of each individual model and algorithm to the overall progress. Furthermore, it focuses on issues still remaining to be solved, emphasizing the groundbreaking proposals of Transformers, BERT, and all the similar attention-based models. |
2106.08617 | Jianhua Yang | Jianhua Yang, Yan Huang, Zhanyu Ma, Liang Wang | CMF: Cascaded Multi-model Fusion for Referring Image Segmentation | Accepted by ICIP 2021 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | In this work, we address the task of referring image segmentation (RIS),
which aims at predicting a segmentation mask for the object described by a
natural language expression. Most existing methods focus on establishing
unidirectional or directional relationships between visual and linguistic
features to associate two modalities together, while the multi-scale context is
ignored or insufficiently modeled. Multi-scale context is crucial to localize
and segment those objects that have large scale variations during the
multi-modal fusion process. To solve this problem, we propose a simple yet
effective Cascaded Multi-modal Fusion (CMF) module, which stacks multiple
atrous convolutional layers in parallel and further introduces a cascaded
branch to fuse visual and linguistic features. The cascaded branch can
progressively integrate multi-scale contextual information and facilitate the
alignment of two modalities during the multi-modal fusion process. Experimental
results on four benchmark datasets demonstrate that our method outperforms most
state-of-the-art methods. Code is available at
https://github.com/jianhua2022/CMF-Refseg.
| [
{
"created": "Wed, 16 Jun 2021 08:18:39 GMT",
"version": "v1"
}
] | 2021-06-17 | [
[
"Yang",
"Jianhua",
""
],
[
"Huang",
"Yan",
""
],
[
"Ma",
"Zhanyu",
""
],
[
"Wang",
"Liang",
""
]
] | In this work, we address the task of referring image segmentation (RIS), which aims at predicting a segmentation mask for the object described by a natural language expression. Most existing methods focus on establishing unidirectional or directional relationships between visual and linguistic features to associate two modalities together, while the multi-scale context is ignored or insufficiently modeled. Multi-scale context is crucial to localize and segment those objects that have large scale variations during the multi-modal fusion process. To solve this problem, we propose a simple yet effective Cascaded Multi-modal Fusion (CMF) module, which stacks multiple atrous convolutional layers in parallel and further introduces a cascaded branch to fuse visual and linguistic features. The cascaded branch can progressively integrate multi-scale contextual information and facilitate the alignment of two modalities during the multi-modal fusion process. Experimental results on four benchmark datasets demonstrate that our method outperforms most state-of-the-art methods. Code is available at https://github.com/jianhua2022/CMF-Refseg. |
1502.02465 | Luigi Palmieri | Luigi Palmieri | A behavioural approach to obstacle avoidance for mobile manipulators
based on distributed sensing | Master's Thesis | null | null | null | cs.RO | http://creativecommons.org/licenses/by-nc-sa/3.0/ | A reactive obstacle avoidance method for mobile manipulators is presented.
The objectives of the developed algorithm are twofold. The first one is to find
a trajectory in the configuration space of a mobile manipulator so as to follow
a given trajectory in the task space. The second objective consists in locally
adjusting the trajectory in the configuration space in order to avoid
collisions with potentially moving obstacles and self-collisions in
unstructured and dynamic environments. The perception is exclusively based on a
set of proximity sensors distributed on the robot mechanical structure and
visual information are not required. Thanks to the adoption of this kind of
proximity distributed perception, the approach does not require a 3D model of
the robot and allows the real-time collision avoidance without the need of a
sensorized environment. To achieve the features cited above, a behaviour-based
technique known as Null-Space-Based (NSB) approach has been adopted with some
modifications.On one hand, the concept of a total pseudo-energy based on the
information from the distributed sensors has been introduced. On the other
hand, a method to combine different tasks has been proposed to guarantee the
smoothness of the realtime trajectory adjustments. Another significant feature
of the method is the strict coordination between the base and the arm
exploiting the redundant degrees of freedom, that is a relevant topic in mobile
manipulation.
| [
{
"created": "Mon, 9 Feb 2015 12:49:35 GMT",
"version": "v1"
}
] | 2015-02-10 | [
[
"Palmieri",
"Luigi",
""
]
] | A reactive obstacle avoidance method for mobile manipulators is presented. The objectives of the developed algorithm are twofold. The first one is to find a trajectory in the configuration space of a mobile manipulator so as to follow a given trajectory in the task space. The second objective consists in locally adjusting the trajectory in the configuration space in order to avoid collisions with potentially moving obstacles and self-collisions in unstructured and dynamic environments. The perception is exclusively based on a set of proximity sensors distributed on the robot mechanical structure and visual information is not required. Thanks to the adoption of this kind of proximity distributed perception, the approach does not require a 3D model of the robot and allows the real-time collision avoidance without the need of a sensorized environment. To achieve the features cited above, a behaviour-based technique known as the Null-Space-Based (NSB) approach has been adopted with some modifications. On one hand, the concept of a total pseudo-energy based on the information from the distributed sensors has been introduced. On the other hand, a method to combine different tasks has been proposed to guarantee the smoothness of the realtime trajectory adjustments. Another significant feature of the method is the strict coordination between the base and the arm exploiting the redundant degrees of freedom, that is a relevant topic in mobile manipulation. |
1902.10755 | Fazle Karim | Fazle Karim, Somshubra Majumdar, and Houshang Darabi | Adversarial Attacks on Time Series | 13 pages, 7 figures, 6 tables | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Time series classification models have been garnering significant importance
in the research community. However, not much research has been done on
generating adversarial samples for these models. These adversarial samples can
become a security concern. In this paper, we propose utilizing an adversarial
transformation network (ATN) on a distilled model to attack various time series
classification models. The proposed attack on the classification model utilizes
a distilled model as a surrogate that mimics the behavior of the attacked
classical time series classification models. Our proposed methodology is
applied to 1-Nearest Neighbor Dynamic Time Warping (1-NN DTW), a Fully
Connected Network and a Fully Convolutional Network (FCN), all of which are
trained on 42 University of California Riverside (UCR) datasets. In this paper,
we show that all of these models were susceptible to attacks on all 42
datasets. To the best of our knowledge, such an attack on time series
classification models has never been done before. Finally, we recommend that
future researchers who develop time series classification models incorporate
adversarial data samples into their training data sets to improve resilience
against adversarial samples and to consider model robustness as an evaluative
metric.
| [
{
"created": "Wed, 27 Feb 2019 19:55:44 GMT",
"version": "v1"
},
{
"created": "Fri, 1 Mar 2019 01:07:12 GMT",
"version": "v2"
}
] | 2019-03-04 | [
[
"Karim",
"Fazle",
""
],
[
"Majumdar",
"Somshubra",
""
],
[
"Darabi",
"Houshang",
""
]
] | Time series classification models have been garnering significant importance in the research community. However, not much research has been done on generating adversarial samples for these models. These adversarial samples can become a security concern. In this paper, we propose utilizing an adversarial transformation network (ATN) on a distilled model to attack various time series classification models. The proposed attack on the classification model utilizes a distilled model as a surrogate that mimics the behavior of the attacked classical time series classification models. Our proposed methodology is applied to 1-Nearest Neighbor Dynamic Time Warping (1-NN DTW), a Fully Connected Network and a Fully Convolutional Network (FCN), all of which are trained on 42 University of California Riverside (UCR) datasets. In this paper, we show that all of these models were susceptible to attacks on all 42 datasets. To the best of our knowledge, such an attack on time series classification models has never been done before. Finally, we recommend that future researchers who develop time series classification models incorporate adversarial data samples into their training data sets to improve resilience against adversarial samples and to consider model robustness as an evaluative metric. |
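The surrogate idea in the abstract above, attacking a non-differentiable classifier through a distilled, differentiable stand-in, can be illustrated on a toy 1-D problem. This sketch substitutes a simple gradient-sign perturbation for the paper's adversarial transformation network; the data, step size, and training loop are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(1)
# toy 1-D data: class 0 near -1, class 1 near +1; the victim is 1-NN
X_train = np.r_[rng.normal(-1, 0.2, 50), rng.normal(1, 0.2, 50)]
y_train = np.r_[np.zeros(50), np.ones(50)]

def knn_predict(x):
    """The non-differentiable victim: 1-nearest-neighbour classification."""
    return y_train[np.argmin(np.abs(X_train - x))]

# distill a differentiable surrogate (logistic regression) on the victim's outputs
soft_labels = np.array([knn_predict(x) for x in X_train])
w, b = 0.0, 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(w * X_train + b)))
    w -= 0.1 * np.mean((p - soft_labels) * X_train)
    b -= 0.1 * np.mean(p - soft_labels)

# a gradient-sign step against the surrogate transfers to the 1-NN victim
x = 0.3                       # classified as class 1 by the victim
x_adv = x - 0.8 * np.sign(w)  # push the input towards the class-0 side
```

The perturbation direction comes entirely from the surrogate's gradient, yet it flips the prediction of the non-differentiable 1-NN victim, which is the transfer property the attack relies on.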
1703.03717 | Andrew Ross | Andrew Slavin Ross, Michael C. Hughes, Finale Doshi-Velez | Right for the Right Reasons: Training Differentiable Models by
Constraining their Explanations | null | null | null | null | cs.LG cs.AI stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Neural networks are among the most accurate supervised learning methods in
use today, but their opacity makes them difficult to trust in critical
applications, especially when conditions in training differ from those in test.
Recent work on explanations for black-box models has produced tools (e.g. LIME)
to show the implicit rules behind predictions, which can help us identify when
models are right for the wrong reasons. However, these methods do not scale to
explaining entire datasets and cannot correct the problems they reveal. We
introduce a method for efficiently explaining and regularizing differentiable
models by examining and selectively penalizing their input gradients, which
provide a normal to the decision boundary. We apply these penalties both based
on expert annotation and in an unsupervised fashion that encourages diverse
models with qualitatively different decision boundaries for the same
classification problem. On multiple datasets, we show our approach generates
faithful explanations and models that generalize much better when conditions
differ between training and test.
| [
{
"created": "Fri, 10 Mar 2017 15:35:32 GMT",
"version": "v1"
},
{
"created": "Thu, 25 May 2017 05:38:45 GMT",
"version": "v2"
}
] | 2017-11-15 | [
[
"Ross",
"Andrew Slavin",
""
],
[
"Hughes",
"Michael C.",
""
],
[
"Doshi-Velez",
"Finale",
""
]
] | Neural networks are among the most accurate supervised learning methods in use today, but their opacity makes them difficult to trust in critical applications, especially when conditions in training differ from those in test. Recent work on explanations for black-box models has produced tools (e.g. LIME) to show the implicit rules behind predictions, which can help us identify when models are right for the wrong reasons. However, these methods do not scale to explaining entire datasets and cannot correct the problems they reveal. We introduce a method for efficiently explaining and regularizing differentiable models by examining and selectively penalizing their input gradients, which provide a normal to the decision boundary. We apply these penalties both based on expert annotation and in an unsupervised fashion that encourages diverse models with qualitatively different decision boundaries for the same classification problem. On multiple datasets, we show our approach generates faithful explanations and models that generalize much better when conditions differ between training and test. |
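The input-gradient penalty described above can be sketched for logistic regression, where the input gradient of the log-odds is just the weight vector, so penalizing annotated-irrelevant input dimensions reduces to a selective weight penalty. The data, annotation mask, and penalty strength below are illustrative assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
y = rng.integers(0, 2, n).astype(float)
X = np.c_[y + rng.normal(0, 1.0, n),   # the "right" feature: weak but causal
          y + rng.normal(0, 0.1, n)]   # the "wrong" feature: a near-copy of the label
mask = np.array([0.0, 1.0])            # expert annotation: feature 1 should not matter

def train(lam, steps=3000, lr=0.05):
    w, b = np.zeros(2), 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        grad_w = X.T @ (p - y) / n
        # for a linear model, d(log-odds)/dx = w, so the "right reasons"
        # penalty lam * ||mask * input_gradient||^2 reduces to lam * (mask * w)^2
        grad_w = grad_w + 2.0 * lam * mask * w
        b -= lr * np.mean(p - y)
        w -= lr * grad_w
    return w

w_plain = train(lam=0.0)    # free to lean on the spurious feature
w_constr = train(lam=5.0)   # penalised for using it
```

The unconstrained model leans heavily on the spurious near-copy of the label, while the constrained model's weight on that feature is driven towards zero, forcing it to be right for the right reason.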
1912.11844 | Matthieu Paul | Matthieu Paul, Christoph Mayer, Luc Van Gool, Radu Timofte | Efficient Video Semantic Segmentation with Labels Propagation and
Refinement | Accepted at WACV2020 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper tackles the problem of real-time semantic segmentation of high
definition videos using a hybrid GPU / CPU approach. We propose an Efficient
Video Segmentation (EVS) pipeline that combines:
(i) On the CPU, a very fast optical flow method that is used to exploit the
temporal aspect of the video and propagate semantic information from one frame
to the next. It runs in parallel with the GPU.
(ii) On the GPU, two Convolutional Neural Networks: A main segmentation
network that is used to predict dense semantic labels from scratch, and a
Refiner that is designed to improve predictions from previous frames with the
help of a fast Inconsistencies Attention Module (IAM). The latter can identify
regions that cannot be propagated accurately.
We suggest several operating points depending on the desired frame rate and
accuracy. Our pipeline achieves accuracy levels competitive with existing
real-time methods for semantic image segmentation (mIoU above 60%), while
achieving much higher frame rates. On the popular Cityscapes dataset with high
resolution frames (2048 x 1024), the proposed operating points range from 80 to
1000 Hz on a single GPU and CPU.
| [
{
"created": "Thu, 26 Dec 2019 11:45:15 GMT",
"version": "v1"
}
] | 2019-12-30 | [
[
"Paul",
"Matthieu",
""
],
[
"Mayer",
"Christoph",
""
],
[
"Van Gool",
"Luc",
""
],
[
"Timofte",
"Radu",
""
]
] | This paper tackles the problem of real-time semantic segmentation of high definition videos using a hybrid GPU / CPU approach. We propose an Efficient Video Segmentation (EVS) pipeline that combines: (i) On the CPU, a very fast optical flow method that is used to exploit the temporal aspect of the video and propagate semantic information from one frame to the next. It runs in parallel with the GPU. (ii) On the GPU, two Convolutional Neural Networks: A main segmentation network that is used to predict dense semantic labels from scratch, and a Refiner that is designed to improve predictions from previous frames with the help of a fast Inconsistencies Attention Module (IAM). The latter can identify regions that cannot be propagated accurately. We suggest several operating points depending on the desired frame rate and accuracy. Our pipeline achieves accuracy levels competitive with existing real-time methods for semantic image segmentation (mIoU above 60%), while achieving much higher frame rates. On the popular Cityscapes dataset with high resolution frames (2048 x 1024), the proposed operating points range from 80 to 1000 Hz on a single GPU and CPU. |
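The CPU-side propagation step described above amounts to warping the previous frame's label map along the optical flow. A minimal nearest-neighbour sketch, where the backward-flow convention is an assumption (the paper's pipeline additionally refines the propagated labels on the GPU):

```python
import numpy as np

def propagate_labels(labels, flow):
    """Warp a per-pixel label map to the next frame along optical flow.

    labels: (H, W) integer semantic labels for frame t
    flow:   (H, W, 2) backward flow; flow[y, x] = (dy, dx) points from a pixel
            of frame t+1 back to its source location in frame t (assumed convention)
    Returns a nearest-neighbour propagated label map for frame t+1.
    """
    H, W = labels.shape
    ys, xs = np.mgrid[0:H, 0:W]
    src_y = np.clip(np.round(ys + flow[..., 0]).astype(int), 0, H - 1)
    src_x = np.clip(np.round(xs + flow[..., 1]).astype(int), 0, W - 1)
    return labels[src_y, src_x]
```

Nearest-neighbour lookup is the natural choice here because labels are categorical: interpolating between class indices would produce meaningless intermediate values.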
2008.02385 | Ruizhe Huang | Ruizhe Huang, Ke Li, Ashish Arora, Dan Povey and Sanjeev Khudanpur | Efficient MDI Adaptation for n-gram Language Models | To appear in INTERSPEECH 2020. Appendix A of this full version will
be filled soon | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents an efficient algorithm for n-gram language model
adaptation under the minimum discrimination information (MDI) principle, where
an out-of-domain language model is adapted to satisfy the constraints of
marginal probabilities of the in-domain data. The challenge for MDI language
model adaptation is its computational complexity. By taking advantage of the
backoff structure of n-gram models and the idea of the hierarchical training
method, originally proposed for maximum entropy (ME) language models, we show
that MDI adaptation can be computed in each iteration with time complexity
linear in the input size. The complexity remains the same as for ME models,
although MDI is more general than ME. This makes MDI adaptation practical for
large corpora and vocabularies. Experimental results confirm the scalability of
our algorithm on very large datasets; MDI adaptation yields slightly worse
perplexity but better word error rate results than simple linear interpolation.
| [
{
"created": "Wed, 5 Aug 2020 22:21:03 GMT",
"version": "v1"
}
] | 2020-08-07 | [
[
"Huang",
"Ruizhe",
""
],
[
"Li",
"Ke",
""
],
[
"Arora",
"Ashish",
""
],
[
"Povey",
"Dan",
""
],
[
"Khudanpur",
"Sanjeev",
""
]
] | This paper presents an efficient algorithm for n-gram language model adaptation under the minimum discrimination information (MDI) principle, where an out-of-domain language model is adapted to satisfy the constraints of marginal probabilities of the in-domain data. The challenge for MDI language model adaptation is its computational complexity. By taking advantage of the backoff structure of n-gram models and the idea of the hierarchical training method, originally proposed for maximum entropy (ME) language models, we show that MDI adaptation can be computed in each iteration with time complexity linear in the input size. The complexity remains the same as for ME models, although MDI is more general than ME. This makes MDI adaptation practical for large corpora and vocabularies. Experimental results confirm the scalability of our algorithm on very large datasets; MDI adaptation yields slightly worse perplexity but better word error rate results than simple linear interpolation. |
2405.09431 | Chenhan Jiang | Chenhan Jiang | A Survey On Text-to-3D Contents Generation In The Wild | 11 pages, 10 figures, 4 tables. arXiv admin note: text overlap with
arXiv:2401.17807 by other authors | null | null | null | cs.CV cs.GR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | 3D content creation plays a vital role in various applications, such as
gaming, robotics simulation, and virtual reality. However, the process is
labor-intensive and time-consuming, requiring skilled designers to invest
considerable effort in creating a single 3D asset. To address this challenge,
text-to-3D generation technologies have emerged as a promising solution for
automating 3D creation. Leveraging the success of large vision language models,
these techniques aim to generate 3D content based on textual descriptions.
Despite recent advancements in this area, existing solutions still face
significant limitations in terms of generation quality and efficiency. In this
survey, we conduct an in-depth investigation of the latest text-to-3D creation
methods. We provide a comprehensive background on text-to-3D creation,
including discussions on datasets employed in training and evaluation metrics
used to assess the quality of generated 3D models. Then, we delve into the
various 3D representations that serve as the foundation for the 3D generation
process. Furthermore, we present a thorough comparison of the rapidly growing
literature on generative pipelines, categorizing them into feedforward
generators, optimization-based generation, and view reconstruction approaches.
By examining the strengths and weaknesses of these methods, we aim to shed
light on their respective capabilities and limitations. Lastly, we point out
several promising avenues for future research. With this survey, we hope to
further inspire researchers to explore the potential of open-vocabulary
text-conditioned 3D content creation.
| [
{
"created": "Wed, 15 May 2024 15:23:22 GMT",
"version": "v1"
}
] | 2024-05-16 | [
[
"Jiang",
"Chenhan",
""
]
] | 3D content creation plays a vital role in various applications, such as gaming, robotics simulation, and virtual reality. However, the process is labor-intensive and time-consuming, requiring skilled designers to invest considerable effort in creating a single 3D asset. To address this challenge, text-to-3D generation technologies have emerged as a promising solution for automating 3D creation. Leveraging the success of large vision language models, these techniques aim to generate 3D content based on textual descriptions. Despite recent advancements in this area, existing solutions still face significant limitations in terms of generation quality and efficiency. In this survey, we conduct an in-depth investigation of the latest text-to-3D creation methods. We provide a comprehensive background on text-to-3D creation, including discussions on datasets employed in training and evaluation metrics used to assess the quality of generated 3D models. Then, we delve into the various 3D representations that serve as the foundation for the 3D generation process. Furthermore, we present a thorough comparison of the rapidly growing literature on generative pipelines, categorizing them into feedforward generators, optimization-based generation, and view reconstruction approaches. By examining the strengths and weaknesses of these methods, we aim to shed light on their respective capabilities and limitations. Lastly, we point out several promising avenues for future research. With this survey, we hope to further inspire researchers to explore the potential of open-vocabulary text-conditioned 3D content creation. |
1704.05119 | Sharan Narang | Sharan Narang, Erich Elsen, Gregory Diamos, Shubho Sengupta | Exploring Sparsity in Recurrent Neural Networks | Published as a conference paper at ICLR 2017 | null | null | null | cs.LG cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recurrent Neural Networks (RNN) are widely used to solve a variety of
problems and as the quantity of data and the amount of available compute have
increased, so have model sizes. The number of parameters in recent
state-of-the-art networks makes them hard to deploy, especially on mobile
phones and embedded devices. The challenge is due to both the size of the model
and the time it takes to evaluate it. In order to deploy these RNNs
efficiently, we propose a technique to reduce the parameters of a network by
pruning weights during the initial training of the network. At the end of
training, the parameters of the network are sparse while accuracy is still
close to the original dense neural network. The network size is reduced by 8x
and the time required to train the model remains constant. Additionally, we can
prune a larger dense network to achieve better than baseline performance while
still reducing the total number of parameters significantly. Pruning RNNs
reduces the size of the model and can also help achieve significant inference
time speed-up using sparse matrix multiply. Benchmarks show that using our
technique model size can be reduced by 90% and speed-up is around 2x to 7x.
| [
{
"created": "Mon, 17 Apr 2017 20:42:05 GMT",
"version": "v1"
},
{
"created": "Mon, 6 Nov 2017 22:10:47 GMT",
"version": "v2"
}
] | 2017-11-08 | [
[
"Narang",
"Sharan",
""
],
[
"Elsen",
"Erich",
""
],
[
"Diamos",
"Gregory",
""
],
[
"Sengupta",
"Shubho",
""
]
] | Recurrent Neural Networks (RNN) are widely used to solve a variety of problems and as the quantity of data and the amount of available compute have increased, so have model sizes. The number of parameters in recent state-of-the-art networks makes them hard to deploy, especially on mobile phones and embedded devices. The challenge is due to both the size of the model and the time it takes to evaluate it. In order to deploy these RNNs efficiently, we propose a technique to reduce the parameters of a network by pruning weights during the initial training of the network. At the end of training, the parameters of the network are sparse while accuracy is still close to the original dense neural network. The network size is reduced by 8x and the time required to train the model remains constant. Additionally, we can prune a larger dense network to achieve better than baseline performance while still reducing the total number of parameters significantly. Pruning RNNs reduces the size of the model and can also help achieve significant inference time speed-up using sparse matrix multiply. Benchmarks show that using our technique model size can be reduced by 90% and speed-up is around 2x to 7x. |
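Prune-while-training schemes of the kind described above typically ramp a target sparsity over training and zero out the smallest-magnitude weights at each pruning step. The cubic ramp and helper names below are assumptions in the spirit of the paper, not its exact threshold schedule:

```python
import numpy as np

def sparsity_schedule(step, start, end, final_sparsity):
    """Ramp the target fraction of zeroed weights from 0 to `final_sparsity`
    between training steps `start` and `end` (an assumed cubic ramp)."""
    if step < start:
        return 0.0
    t = min(1.0, (step - start) / (end - start))
    return final_sparsity * (1.0 - (1.0 - t) ** 3)

def prune(weights, sparsity):
    """Zero the smallest-magnitude entries so roughly `sparsity` of them are 0."""
    k = int(weights.size * sparsity)
    if k == 0:
        return weights
    thresh = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    return weights * (np.abs(weights) > thresh)
```

In a training loop, `prune(W, sparsity_schedule(step, ...))` would be applied after each weight update, so the network gradually learns to route around the pruned connections instead of losing them all at once.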
2110.13625 | Junsu Kim | Junsu Kim, Younggyo Seo, Jinwoo Shin | Landmark-Guided Subgoal Generation in Hierarchical Reinforcement
Learning | Accepted to NeurIPS 2021 | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Goal-conditioned hierarchical reinforcement learning (HRL) has shown
promising results for solving complex and long-horizon RL tasks. However, the
action space of the high-level policy in goal-conditioned HRL is often large,
which results in poor exploration and leads to inefficient training. In this
paper, we present HIerarchical reinforcement learning Guided by Landmarks
(HIGL), a novel framework for training a high-level policy with a reduced
action space guided by landmarks, i.e., promising states to explore. The key
component of HIGL is twofold: (a) sampling landmarks that are informative for
exploration and (b) encouraging the high-level policy to generate a subgoal
towards a selected landmark. For (a), we consider two criteria: coverage of the
entire visited state space (i.e., dispersion of states) and novelty of states
(i.e., prediction error of a state). For (b), we select a landmark as the very
first landmark in the shortest path in a graph whose nodes are landmarks. Our
experiments demonstrate that our framework outperforms prior state-of-the-art
methods across a variety of control tasks, thanks to efficient exploration
guided by landmarks.
| [
{
"created": "Tue, 26 Oct 2021 12:16:19 GMT",
"version": "v1"
},
{
"created": "Wed, 27 Oct 2021 13:52:38 GMT",
"version": "v2"
},
{
"created": "Sun, 5 Dec 2021 07:26:40 GMT",
"version": "v3"
}
] | 2021-12-07 | [
[
"Kim",
"Junsu",
""
],
[
"Seo",
"Younggyo",
""
],
[
"Shin",
"Jinwoo",
""
]
] | Goal-conditioned hierarchical reinforcement learning (HRL) has shown promising results for solving complex and long-horizon RL tasks. However, the action space of the high-level policy in goal-conditioned HRL is often large, which results in poor exploration and leads to inefficient training. In this paper, we present HIerarchical reinforcement learning Guided by Landmarks (HIGL), a novel framework for training a high-level policy with a reduced action space guided by landmarks, i.e., promising states to explore. The key component of HIGL is twofold: (a) sampling landmarks that are informative for exploration and (b) encouraging the high-level policy to generate a subgoal towards a selected landmark. For (a), we consider two criteria: coverage of the entire visited state space (i.e., dispersion of states) and novelty of states (i.e., prediction error of a state). For (b), we select a landmark as the very first landmark in the shortest path in a graph whose nodes are landmarks. Our experiments demonstrate that our framework outperforms prior state-of-the-art methods across a variety of control tasks, thanks to efficient exploration guided by landmarks. |
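Step (b) above, picking the very first landmark on the shortest path through the landmark graph, can be sketched with plain BFS for the unweighted case (the graph structure and function name are illustrative; HIGL builds its graph from visited states):

```python
from collections import deque

def first_landmark_on_path(graph, start, goal):
    """Return the landmark right after `start` on a BFS shortest path to
    `goal` in an unweighted landmark graph; None if `goal` is unreachable."""
    if start == goal:
        return start
    parent = {start: None}
    q = deque([start])
    while q:
        u = q.popleft()
        for v in graph.get(u, []):
            if v in parent:
                continue
            parent[v] = u
            if v == goal:
                node = v
                while parent[node] != start:  # walk back to the step after start
                    node = parent[node]
                return node
            q.append(v)
    return None
```

The returned landmark would then serve as the target the high-level policy is encouraged to generate subgoals towards.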
1708.04677 | Nikolaus Correll | Nikolaus Correll, Prabal Dutta, Richard Han and Kristofer Pister | New Directions: Wireless Robotic Materials | To appear at SenSys 2017 | null | null | null | cs.RO cs.CY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We describe opportunities and challenges with wireless robotic materials.
Robotic materials are multi-functional composites that tightly integrate
sensing, actuation, computation and communication to create smart composites
that can sense their environment and change their physical properties in an
arbitrary programmable manner. Computation and communication in such materials
are based on miniature, possibly wireless, devices that are scattered in the
material and interface with sensors and actuators inside the material. Whereas
routing and processing of information within the material build upon results
from the field of sensor networks, robotic materials are pushing the limits of
sensor networks in both size (down to the order of microns) and numbers of
devices (up to the order of millions). In order to solve the algorithmic and
systems challenges of such an approach, which will involve not only computer
scientists, but also roboticists, chemists and material scientists, the
community requires a common platform - much like the "Mote" that bootstrapped
the widespread adoption of the field of sensor networks - that is small,
provides ample computation, is equipped with basic networking
functionalities, and preferably can be powered wirelessly.
| [
{
"created": "Tue, 15 Aug 2017 20:29:06 GMT",
"version": "v1"
}
] | 2017-08-17 | [
[
"Correll",
"Nikolaus",
""
],
[
"Dutta",
"Prabal",
""
],
[
"Han",
"Richard",
""
],
[
"Pister",
"Kristofer",
""
]
] | We describe opportunities and challenges with wireless robotic materials. Robotic materials are multi-functional composites that tightly integrate sensing, actuation, computation and communication to create smart composites that can sense their environment and change their physical properties in an arbitrary programmable manner. Computation and communication in such materials are based on miniature, possibly wireless, devices that are scattered in the material and interface with sensors and actuators inside the material. Whereas routing and processing of information within the material build upon results from the field of sensor networks, robotic materials are pushing the limits of sensor networks in both size (down to the order of microns) and numbers of devices (up to the order of millions). In order to solve the algorithmic and systems challenges of such an approach, which will involve not only computer scientists, but also roboticists, chemists and material scientists, the community requires a common platform - much like the "Mote" that bootstrapped the widespread adoption of the field of sensor networks - that is small, provides ample computation, is equipped with basic networking functionalities, and preferably can be powered wirelessly. |
2103.05904 | Yunlei Shi | Yunlei Shi, Zhaopeng Chen, Yansong Wu, Dimitri Henkel, Sebastian
Riedel, Hongxu Liu, Qian Feng, Jianwei Zhang | Combining Learning from Demonstration with Learning by Exploration to
Facilitate Contact-Rich Tasks | Accepted by the 2021 IEEE/RSJ International Conference on Intelligent
Robots and Systems (IROS 2021) | null | null | null | cs.RO | http://creativecommons.org/licenses/by/4.0/ | Collaborative robots are expected to be able to work alongside humans and in
some cases directly replace existing human workers, thus effectively responding
to rapid assembly line changes. Current methods for programming contact-rich
tasks, especially in heavily constrained space, tend to be fairly inefficient.
Therefore, faster and more intuitive approaches to robot teaching are urgently
required. This work focuses on combining visual servoing based learning from
demonstration (LfD) and force-based learning by exploration (LbE), to enable
fast and intuitive programming of contact-rich tasks with minimal user effort
required. Two learning approaches were developed and integrated into a
framework: one relying on human-to-robot motion mapping (the visual servoing
approach) and one on force-based reinforcement learning. The developed
framework implements the non-contact demonstration teaching method based on
the visual servoing approach and optimizes the demonstrated robot target
positions
according to the detected contact state. The framework has been compared with
the two most commonly used baseline techniques, pendant-based teaching and
hand-guiding teaching. The efficiency and reliability of the framework have
been validated through comparison experiments involving the teaching and
execution of contact-rich tasks. The framework proposed in this paper has
performed the best in terms of teaching time, execution success rate, risk of
damage, and ease of use.
| [
{
"created": "Wed, 10 Mar 2021 07:11:05 GMT",
"version": "v1"
},
{
"created": "Tue, 26 Oct 2021 09:24:05 GMT",
"version": "v2"
}
] | 2021-10-27 | [
[
"Shi",
"Yunlei",
""
],
[
"Chen",
"Zhaopeng",
""
],
[
"Wu",
"Yansong",
""
],
[
"Henkel",
"Dimitri",
""
],
[
"Riedel",
"Sebastian",
""
],
[
"Liu",
"Hongxu",
""
],
[
"Feng",
"Qian",
""
],
[
"Zhang",
"Jianwei",
""
]
] | Collaborative robots are expected to be able to work alongside humans and in some cases directly replace existing human workers, thus effectively responding to rapid assembly line changes. Current methods for programming contact-rich tasks, especially in heavily constrained space, tend to be fairly inefficient. Therefore, faster and more intuitive approaches to robot teaching are urgently required. This work focuses on combining visual servoing based learning from demonstration (LfD) and force-based learning by exploration (LbE), to enable fast and intuitive programming of contact-rich tasks with minimal user effort required. Two learning approaches were developed and integrated into a framework: one relying on human-to-robot motion mapping (the visual servoing approach) and one on force-based reinforcement learning. The developed framework implements the non-contact demonstration teaching method based on the visual servoing approach and optimizes the demonstrated robot target positions according to the detected contact state. The framework has been compared with the two most commonly used baseline techniques, pendant-based teaching and hand-guiding teaching. The efficiency and reliability of the framework have been validated through comparison experiments involving the teaching and execution of contact-rich tasks. The framework proposed in this paper has performed the best in terms of teaching time, execution success rate, risk of damage, and ease of use. |
1906.00097 | Elliot Meyerson | Elliot Meyerson and Risto Miikkulainen | Modular Universal Reparameterization: Deep Multi-task Learning Across
Diverse Domains | 33rd Conference on Neural Information Processing Systems (NeurIPS
2019), 16 pages, including Supplemental Material | null | null | null | cs.LG cs.NE stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As deep learning applications continue to become more diverse, an interesting
question arises: Can general problem solving arise from jointly learning
several such diverse tasks? To approach this question, deep multi-task learning
is extended in this paper to the setting where there is no obvious overlap
between task architectures. The idea is that any set of (architecture,task)
pairs can be decomposed into a set of potentially related subproblems, whose
sharing is optimized by an efficient stochastic algorithm. The approach is
first validated in a classic synthetic multi-task learning benchmark, and then
applied to sharing across disparate architectures for vision, NLP, and genomics
tasks. It discovers regularities across these domains, encodes them into
sharable modules, and combines these modules systematically to improve
performance in the individual tasks. The results confirm that sharing learned
functionality across diverse domains and architectures is indeed beneficial,
thus establishing a key ingredient for general problem solving in the future.
| [
{
"created": "Fri, 31 May 2019 22:00:43 GMT",
"version": "v1"
},
{
"created": "Mon, 28 Oct 2019 17:51:14 GMT",
"version": "v2"
}
] | 2019-10-29 | [
[
"Meyerson",
"Elliot",
""
],
[
"Miikkulainen",
"Risto",
""
]
] | As deep learning applications continue to become more diverse, an interesting question arises: Can general problem solving arise from jointly learning several such diverse tasks? To approach this question, deep multi-task learning is extended in this paper to the setting where there is no obvious overlap between task architectures. The idea is that any set of (architecture,task) pairs can be decomposed into a set of potentially related subproblems, whose sharing is optimized by an efficient stochastic algorithm. The approach is first validated in a classic synthetic multi-task learning benchmark, and then applied to sharing across disparate architectures for vision, NLP, and genomics tasks. It discovers regularities across these domains, encodes them into sharable modules, and combines these modules systematically to improve performance in the individual tasks. The results confirm that sharing learned functionality across diverse domains and architectures is indeed beneficial, thus establishing a key ingredient for general problem solving in the future. |
0705.0423 | Farbod Kayhan | A. Braunstein, F. Kayhan, G. Montorsi and R. Zecchina | Encoding for the Blackwell Channel with Reinforced Belief Propagation | 5 pages, 8 figures, submitted to ISIT 2007 | IEEE International Symposium on Information Theory (ISIT07); 2007.
p. 1891-5 | 10.1109/ISIT.2007.4557497 | null | cs.IT math.IT | null | A key idea in coding for the broadcast channel (BC) is binning, in which the
transmitter encodes information by selecting a codeword from an appropriate bin
(the messages are thus the bin indexes). This selection is normally done by
solving an appropriate (possibly difficult) combinatorial problem. Recently it
has been shown that binning for the Blackwell channel --a particular BC-- can
be done by iterative schemes based on Survey Propagation (SP). This method uses
decimation for SP and suffers from a complexity of O(n^2). In this paper we
propose
a new variation of the Belief Propagation (BP) algorithm, named Reinforced BP
algorithm, that turns BP into a solver. Our simulations show that this new
algorithm has complexity O(n log n). Using this new algorithm together with a
non-linear coding scheme, we can efficiently achieve rates close to the border
of the capacity region of the Blackwell channel.
| [
{
"created": "Thu, 3 May 2007 09:49:15 GMT",
"version": "v1"
}
] | 2016-11-17 | [
[
"Braunstein",
"A.",
""
],
[
"Kayhan",
"F.",
""
],
[
"Montorsi",
"G.",
""
],
[
"Zecchina",
"R.",
""
]
] | A key idea in coding for the broadcast channel (BC) is binning, in which the transmitter encodes information by selecting a codeword from an appropriate bin (the messages are thus the bin indexes). This selection is normally done by solving an appropriate (possibly difficult) combinatorial problem. Recently it has been shown that binning for the Blackwell channel --a particular BC-- can be done by iterative schemes based on Survey Propagation (SP). This method uses decimation for SP and suffers from a complexity of O(n^2). In this paper we propose a new variation of the Belief Propagation (BP) algorithm, named Reinforced BP algorithm, that turns BP into a solver. Our simulations show that this new algorithm has complexity O(n log n). Using this new algorithm together with a non-linear coding scheme, we can efficiently achieve rates close to the border of the capacity region of the Blackwell channel. |
1410.4139 | Stefanie Haustein | Stefanie Haustein, Timothy D. Bowman, Kim Holmberg, Andrew Tsou,
Cassidy R. Sugimoto and Vincent Larivi\`ere | Tweets as impact indicators: Examining the implications of automated bot
accounts on Twitter | 9 pages, 4 figures, 1 table | null | null | null | cs.DL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This brief communication presents preliminary findings on automated Twitter
accounts distributing links to scientific papers deposited on the preprint
repository arXiv. It discusses the implication of the presence of such bots
from the perspective of social media metrics (altmetrics), where mentions of
scholarly documents on Twitter have been suggested as a means of measuring
impact that is both broader and timelier than citations. We present preliminary
findings that automated Twitter accounts create a considerable amount of tweets
to scientific papers and that they behave differently than common social bots,
which has critical implications for the use of raw tweet counts in research
evaluation and assessment. We discuss some definitions of Twitter cyborgs and
bots in scholarly communication and propose differentiating between different
levels of engagement from tweeting only bibliographic information to discussing
or commenting on the content of a paper.
| [
{
"created": "Wed, 15 Oct 2014 17:10:20 GMT",
"version": "v1"
}
] | 2014-10-16 | [
[
"Haustein",
"Stefanie",
""
],
[
"Bowman",
"Timothy D.",
""
],
[
"Holmberg",
"Kim",
""
],
[
"Tsou",
"Andrew",
""
],
[
"Sugimoto",
"Cassidy R.",
""
],
[
"Larivière",
"Vincent",
""
]
] | This brief communication presents preliminary findings on automated Twitter accounts distributing links to scientific papers deposited on the preprint repository arXiv. It discusses the implication of the presence of such bots from the perspective of social media metrics (altmetrics), where mentions of scholarly documents on Twitter have been suggested as a means of measuring impact that is both broader and timelier than citations. We present preliminary findings that automated Twitter accounts create a considerable amount of tweets to scientific papers and that they behave differently than common social bots, which has critical implications for the use of raw tweet counts in research evaluation and assessment. We discuss some definitions of Twitter cyborgs and bots in scholarly communication and propose differentiating between different levels of engagement from tweeting only bibliographic information to discussing or commenting on the content of a paper. |
1405.4471 | Tomer Koren | Ofer Dekel, Jian Ding, Tomer Koren, Yuval Peres | Online Learning with Composite Loss Functions | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study a new class of online learning problems where each of the online
algorithm's actions is assigned an adversarial value, and the loss of the
algorithm at each step is a known and deterministic function of the values
assigned to its recent actions. This class includes problems where the
algorithm's loss is the minimum over the recent adversarial values, the maximum
over the recent values, or a linear combination of the recent values. We
analyze the minimax regret of this class of problems when the algorithm
receives bandit feedback, and prove that when the minimum or maximum functions
are used, the minimax regret is $\tilde \Omega(T^{2/3})$ (so called hard online
learning problems), and when a linear function is used, the minimax regret is
$\tilde O(\sqrt{T})$ (so called easy learning problems). Previously, the only
online learning problem that was known to be provably hard was the multi-armed
bandit with switching costs.
| [
{
"created": "Sun, 18 May 2014 08:47:58 GMT",
"version": "v1"
}
] | 2014-05-20 | [
[
"Dekel",
"Ofer",
""
],
[
"Ding",
"Jian",
""
],
[
"Koren",
"Tomer",
""
],
[
"Peres",
"Yuval",
""
]
] | We study a new class of online learning problems where each of the online algorithm's actions is assigned an adversarial value, and the loss of the algorithm at each step is a known and deterministic function of the values assigned to its recent actions. This class includes problems where the algorithm's loss is the minimum over the recent adversarial values, the maximum over the recent values, or a linear combination of the recent values. We analyze the minimax regret of this class of problems when the algorithm receives bandit feedback, and prove that when the minimum or maximum functions are used, the minimax regret is $\tilde \Omega(T^{2/3})$ (so called hard online learning problems), and when a linear function is used, the minimax regret is $\tilde O(\sqrt{T})$ (so called easy learning problems). Previously, the only online learning problem that was known to be provably hard was the multi-armed bandit with switching costs. |
2309.15423 | Abdullah Alawad | Abdullah Alawad, Muhammad Aneeq uz Zaman, Khaled Alshehri and Tamer
Ba\c{s}ar | Prosumers Participation in Markets: A Scalar-Parameterized Function
Bidding Approach | Corrected typos in the figures | null | null | null | cs.GT cs.SY eess.SY | http://creativecommons.org/licenses/by/4.0/ | In uniform-price markets, suppliers compete to supply a resource to
consumers, resulting in a single market price determined by their competition.
For sufficient flexibility, producers and consumers prefer to commit to a
function as their strategies, indicating their preferred quantity at any given
market price. Producers and consumers may wish to act as both, i.e., prosumers.
In this paper, we examine the behavior of profit-maximizing prosumers in a
uniform-price market for resource allocation with the objective of maximizing
the social welfare. We propose a scalar-parameterized function bidding
mechanism for the prosumers, in which we establish the existence and uniqueness
of Nash equilibrium. Furthermore, we provide an efficient way to compute the
Nash equilibrium through the computation of the market allocation at the Nash
equilibrium. Finally, we present a case study to illustrate the welfare loss
under different variations of market parameters, such as the market's supply
capacity and inelastic demand.
| [
{
"created": "Wed, 27 Sep 2023 06:20:28 GMT",
"version": "v1"
},
{
"created": "Fri, 29 Sep 2023 17:27:26 GMT",
"version": "v2"
},
{
"created": "Thu, 14 Mar 2024 17:53:08 GMT",
"version": "v3"
}
] | 2024-03-15 | [
[
"Alawad",
"Abdullah",
""
],
[
"Zaman",
"Muhammad Aneeq uz",
""
],
[
"Alshehri",
"Khaled",
""
],
[
"Başar",
"Tamer",
""
]
] | In uniform-price markets, suppliers compete to supply a resource to consumers, resulting in a single market price determined by their competition. For sufficient flexibility, producers and consumers prefer to commit to a function as their strategies, indicating their preferred quantity at any given market price. Producers and consumers may wish to act as both, i.e., prosumers. In this paper, we examine the behavior of profit-maximizing prosumers in a uniform-price market for resource allocation with the objective of maximizing the social welfare. We propose a scalar-parameterized function bidding mechanism for the prosumers, in which we establish the existence and uniqueness of Nash equilibrium. Furthermore, we provide an efficient way to compute the Nash equilibrium through the computation of the market allocation at the Nash equilibrium. Finally, we present a case study to illustrate the welfare loss under different variations of market parameters, such as the market's supply capacity and inelastic demand. |
2103.15543 | Wenkai Yang | Wenkai Yang, Lei Li, Zhiyuan Zhang, Xuancheng Ren, Xu Sun, Bin He | Be Careful about Poisoned Word Embeddings: Exploring the Vulnerability
of the Embedding Layers in NLP Models | NAACL-HLT 2021, Long Paper | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Recent studies have revealed a security threat to natural language processing
(NLP) models, called the Backdoor Attack. Victim models can maintain
competitive performance on clean samples while behaving abnormally on samples
with a specific trigger word inserted. Previous backdoor attacking methods
usually assume that attackers have a certain degree of data knowledge, either
the dataset which users would use or proxy datasets for a similar task, for
implementing the data poisoning procedure. However, in this paper, we find that
it is possible to hack the model in a data-free way by modifying one single
word embedding vector, with almost no accuracy sacrificed on clean samples.
Experimental results on sentiment analysis and sentence-pair classification
tasks show that our method is more efficient and stealthier. We hope this work
can raise the awareness of such a critical security risk hidden in the
embedding layers of NLP models. Our code is available at
https://github.com/lancopku/Embedding-Poisoning.
| [
{
"created": "Mon, 29 Mar 2021 12:19:45 GMT",
"version": "v1"
}
] | 2021-03-30 | [
[
"Yang",
"Wenkai",
""
],
[
"Li",
"Lei",
""
],
[
"Zhang",
"Zhiyuan",
""
],
[
"Ren",
"Xuancheng",
""
],
[
"Sun",
"Xu",
""
],
[
"He",
"Bin",
""
]
] | Recent studies have revealed a security threat to natural language processing (NLP) models, called the Backdoor Attack. Victim models can maintain competitive performance on clean samples while behaving abnormally on samples with a specific trigger word inserted. Previous backdoor attacking methods usually assume that attackers have a certain degree of data knowledge, either the dataset which users would use or proxy datasets for a similar task, for implementing the data poisoning procedure. However, in this paper, we find that it is possible to hack the model in a data-free way by modifying one single word embedding vector, with almost no accuracy sacrificed on clean samples. Experimental results on sentiment analysis and sentence-pair classification tasks show that our method is more efficient and stealthier. We hope this work can raise the awareness of such a critical security risk hidden in the embedding layers of NLP models. Our code is available at https://github.com/lancopku/Embedding-Poisoning. |
1902.02311 | Alex Tong Lin | Alex Tong Lin, Mark J. Debord, Katia Estabridis, Gary Hewer, Guido
Montufar, Stanley Osher | Decentralized Multi-Agents by Imitation of a Centralized Controller | null | null | null | null | cs.MA cs.AI cs.LG cs.SY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider a multi-agent reinforcement learning problem where each agent
seeks to maximize a shared reward while interacting with other agents, and they
may or may not be able to communicate. Typically the agents do not have access
to other agent policies and thus each agent is situated in a non-stationary and
partially-observable environment. In order to obtain multi-agents that act in a
decentralized manner, we introduce a novel algorithm under the popular
framework of centralized training, but decentralized execution. This training
framework first obtains solutions to a multi-agent problem with a single
centralized joint-space learner, which is then used to guide imitation learning
for independent decentralized multi-agents. This framework has the flexibility
to use any reinforcement learning algorithm to obtain the expert as well as any
imitation learning algorithm to obtain the decentralized agents. This is in
contrast to other multi-agent learning algorithms that, for example, can
require more specific structures. We present some theoretical bounds for our
method, and we show that one can obtain decentralized solutions to a
multi-agent problem through imitation learning.
| [
{
"created": "Wed, 6 Feb 2019 18:14:31 GMT",
"version": "v1"
},
{
"created": "Thu, 7 Feb 2019 14:48:32 GMT",
"version": "v2"
},
{
"created": "Tue, 18 Jun 2019 01:39:00 GMT",
"version": "v3"
},
{
"created": "Thu, 22 Apr 2021 18:59:26 GMT",
"version": "v4"
}
] | 2021-04-26 | [
[
"Lin",
"Alex Tong",
""
],
[
"Debord",
"Mark J.",
""
],
[
"Estabridis",
"Katia",
""
],
[
"Hewer",
"Gary",
""
],
[
"Montufar",
"Guido",
""
],
[
"Osher",
"Stanley",
""
]
] | We consider a multi-agent reinforcement learning problem where each agent seeks to maximize a shared reward while interacting with other agents, and they may or may not be able to communicate. Typically the agents do not have access to other agent policies and thus each agent is situated in a non-stationary and partially-observable environment. In order to obtain multi-agents that act in a decentralized manner, we introduce a novel algorithm under the popular framework of centralized training, but decentralized execution. This training framework first obtains solutions to a multi-agent problem with a single centralized joint-space learner, which is then used to guide imitation learning for independent decentralized multi-agents. This framework has the flexibility to use any reinforcement learning algorithm to obtain the expert as well as any imitation learning algorithm to obtain the decentralized agents. This is in contrast to other multi-agent learning algorithms that, for example, can require more specific structures. We present some theoretical bounds for our method, and we show that one can obtain decentralized solutions to a multi-agent problem through imitation learning. |
1804.01735 | Chaoyue Niu | Chaoyue Niu, Minping Zhou, Zhenzhe Zheng, Fan Wu, and Guihai Chen | ERA: Towards Privacy Preservation and Verifiability for Online Ad
Exchanges | null | C. Niu, M. Zhou, Z. Zheng, F. Wu, G. Chen, ERA: towards privacy
preservation and verifiability for online ad exchanges, Journal of Network
and Computer Applications 98 (2017) 1-10. doi:10.1016/j.jnca.2017.08.012 | 10.1016/j.jnca.2017.08.012 | null | cs.GT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Ad exchanges are among the most popular online advertising marketplaces for
trading ad spaces over the Internet. Ad exchanges run auctions to sell diverse
ad spaces on the publishers' web-pages to advertisers, who want to display ads
on ad spaces. However, the parties in an ad auction cannot verify whether the
auction is carried out correctly or not. Furthermore, the advertisers are
usually unwilling to reveal their sensitive bids and identities. In this paper,
we jointly consider the auction verifiability and the advertisers' privacy
preservation, and thus propose ERA, which is an Efficient, pRivacy-preserving,
and verifiAble online auction mechanism for ad exchanges. ERA exploits an order
preserving encryption scheme to guarantee privacy preservation, and achieves
verifiability by constructing a novel protocol of privacy preserving integer
comparison, which is built on the Paillier homomorphic encryption scheme. We
extensively evaluate the performance of ERA, and our evaluation results show
that ERA satisfies several desirable properties with low computation,
communication, and storage overheads, so ERA can be easily deployed in today's
ad exchanges.
| [
{
"created": "Thu, 5 Apr 2018 08:37:25 GMT",
"version": "v1"
}
] | 2018-04-06 | [
[
"Niu",
"Chaoyue",
""
],
[
"Zhou",
"Minping",
""
],
[
"Zheng",
"Zhenzhe",
""
],
[
"Wu",
"Fan",
""
],
[
"Chen",
"Guihai",
""
]
] | Ad exchanges are among the most popular online advertising marketplaces for trading ad spaces over the Internet. Ad exchanges run auctions to sell diverse ad spaces on the publishers' web-pages to advertisers, who want to display ads on ad spaces. However, the parties in an ad auction cannot verify whether the auction is carried out correctly or not. Furthermore, the advertisers are usually unwilling to reveal their sensitive bids and identities. In this paper, we jointly consider the auction verifiability and the advertisers' privacy preservation, and thus propose ERA, which is an Efficient, pRivacy-preserving, and verifiAble online auction mechanism for ad exchanges. ERA exploits an order preserving encryption scheme to guarantee privacy preservation, and achieves verifiability by constructing a novel protocol of privacy preserving integer comparison, which is built on the Paillier homomorphic encryption scheme. We extensively evaluate the performance of ERA, and our evaluation results show that ERA satisfies several desirable properties with low computation, communication, and storage overheads, so ERA can be easily deployed in today's ad exchanges. |
1805.04264 | Lyan Verwimp | Lyan Verwimp, Hugo Van hamme, Vincent Renkens, Patrick Wambacq | State Gradients for RNN Memory Analysis | Accepted for Interspeech 2018 | null | null | null | cs.CL cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a framework for analyzing what the state in RNNs remembers from
its input embeddings. Our approach is inspired by backpropagation, in the sense
that we compute the gradients of the states with respect to the input
embeddings. The gradient matrix is decomposed with Singular Value Decomposition
to analyze which directions in the embedding space are best transferred to the
hidden state space, characterized by the largest singular values. We apply our
approach to LSTM language models and investigate to what extent and for how
long certain classes of words are remembered on average for a certain corpus.
Additionally, the extent to which a specific property or relationship is
remembered by the RNN can be tracked by comparing a vector characterizing that
property with the direction(s) in embedding space that are best preserved in
hidden state space.
| [
{
"created": "Fri, 11 May 2018 07:51:28 GMT",
"version": "v1"
},
{
"created": "Mon, 18 Jun 2018 09:05:30 GMT",
"version": "v2"
}
] | 2018-06-19 | [
[
"Verwimp",
"Lyan",
""
],
[
"Van hamme",
"Hugo",
""
],
[
"Renkens",
"Vincent",
""
],
[
"Wambacq",
"Patrick",
""
]
] | We present a framework for analyzing what the state in RNNs remembers from its input embeddings. Our approach is inspired by backpropagation, in the sense that we compute the gradients of the states with respect to the input embeddings. The gradient matrix is decomposed with Singular Value Decomposition to analyze which directions in the embedding space are best transferred to the hidden state space, characterized by the largest singular values. We apply our approach to LSTM language models and investigate to what extent and for how long certain classes of words are remembered on average for a certain corpus. Additionally, the extent to which a specific property or relationship is remembered by the RNN can be tracked by comparing a vector characterizing that property with the direction(s) in embedding space that are best preserved in hidden state space. |
2009.00328 | Yi Lou | Yi Lou, Ruofan Sun, Julian Cheng, Donghu Nie, Gang Qiao | Secrecy Outage Analysis of Two-Hop Decode-and-Forward Mixed RF/UWOC
Systems | null | null | null | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We analyze the secrecy performance of a two-hop mixed radio frequency
(RF)/underwater wireless optical communication (UWOC) system using a
decode-and-forward (DF) relay. All RF and UWOC links are modeled by the
$\alpha-\mu$ and exponential-generalized Gamma distributions, respectively. We
first derive the expressions of the secrecy outage probability (SOP) in exact
closed-form, which are subsequently used to derive asymptotic expressions at
high SNR that only include simple functions for further insight. Moreover,
based on the asymptotic expression, we can determine the optimal transmit power
for a wide variety of RF and UWOC channel conditions. All analyses are
validated using Monte Carlo simulation.
| [
{
"created": "Tue, 1 Sep 2020 10:13:28 GMT",
"version": "v1"
}
] | 2020-09-02 | [
[
"Lou",
"Yi",
""
],
[
"Sun",
"Ruofan",
""
],
[
"Cheng",
"Julian",
""
],
[
"Nie",
"Donghu",
""
],
[
"Qiao",
"Gang",
""
]
] | We analyze the secrecy performance of a two-hop mixed radio frequency (RF)/underwater wireless optical communication (UWOC) system using a decode-and-forward (DF) relay. All RF and UWOC links are modeled by the $\alpha-\mu$ and exponential-generalized Gamma distributions, respectively. We first derive the expressions of the secrecy outage probability (SOP) in exact closed-form, which are subsequently used to derive asymptotic expressions at high SNR that only include simple functions for further insight. Moreover, based on the asymptotic expression, we can determine the optimal transmit power for a wide variety of RF and UWOC channel conditions. All analyses are validated using Monte Carlo simulation. |
1510.06535 | Uri Zwick | Thomas Dueholm Hansen, Haim Kaplan, Robert E. Tarjan, Uri Zwick | Hollow Heaps | 27 pages, 7 figures, preliminary version appeared in ICALP 2015 | null | null | null | cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce the hollow heap, a very simple data structure with the same
amortized efficiency as the classical Fibonacci heap. All heap operations
except delete and delete-min take $O(1)$ time, worst case as well as amortized;
delete and delete-min take $O(\log n)$ amortized time on a heap of $n$ items.
Hollow heaps are by far the simplest structure to achieve this. Hollow heaps
combine two novel ideas: the use of lazy deletion and re-insertion to do
decrease-key operations, and the use of a dag (directed acyclic graph) instead
of a tree or set of trees to represent a heap. Lazy deletion produces hollow
nodes (nodes without items), giving the data structure its name.
| [
{
"created": "Thu, 22 Oct 2015 09:09:11 GMT",
"version": "v1"
}
] | 2015-10-23 | [
[
"Hansen",
"Thomas Dueholm",
""
],
[
"Kaplan",
"Haim",
""
],
[
"Tarjan",
"Robert E.",
""
],
[
"Zwick",
"Uri",
""
]
] | We introduce the hollow heap, a very simple data structure with the same amortized efficiency as the classical Fibonacci heap. All heap operations except delete and delete-min take $O(1)$ time, worst case as well as amortized; delete and delete-min take $O(\log n)$ amortized time on a heap of $n$ items. Hollow heaps are by far the simplest structure to achieve this. Hollow heaps combine two novel ideas: the use of lazy deletion and re-insertion to do decrease-key operations, and the use of a dag (directed acyclic graph) instead of a tree or set of trees to represent a heap. Lazy deletion produces hollow nodes (nodes without items), giving the data structure its name. |
1903.08970 | Alex Bird | Alex Bird, Christopher K. I. Williams, Christopher Hawthorne | Multi-Task Time Series Analysis applied to Drug Response Modelling | To appear in AISTATS 2019 | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Time series models such as dynamical systems are frequently fitted to a
cohort of data, ignoring variation between individual entities such as
patients. In this paper we show how these models can be personalised to an
individual level while retaining statistical power, via use of multi-task
learning (MTL). To our knowledge this is a novel development of MTL which
applies to time series both with and without control inputs. The modelling
framework is demonstrated on a physiological drug response problem which
results in improved predictive accuracy and uncertainty estimation over
existing state-of-the-art models.
| [
{
"created": "Thu, 21 Mar 2019 13:03:55 GMT",
"version": "v1"
}
] | 2019-03-22 | [
[
"Bird",
"Alex",
""
],
[
"Williams",
"Christopher K. I.",
""
],
[
"Hawthorne",
"Christopher",
""
]
] | Time series models such as dynamical systems are frequently fitted to a cohort of data, ignoring variation between individual entities such as patients. In this paper we show how these models can be personalised to an individual level while retaining statistical power, via use of multi-task learning (MTL). To our knowledge this is a novel development of MTL which applies to time series both with and without control inputs. The modelling framework is demonstrated on a physiological drug response problem which results in improved predictive accuracy and uncertainty estimation over existing state-of-the-art models. |
1911.03855 | Salvatore Giorgi | Salvatore Giorgi, Veronica Lynn, Keshav Gupta, Farhan Ahmed, Sandra
Matz, Lyle Ungar, and H. Andrew Schwartz | Correcting Sociodemographic Selection Biases for Population Prediction
from Social Media | Published at the 16th International AAAI Conference on Web and Social
Media (ICWSM) 2022 | null | null | null | cs.SI cs.CL cs.CY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Social media is increasingly used for large-scale population predictions,
such as estimating community health statistics. However, social media users are
not typically a representative sample of the intended population -- a
"selection bias". Within the social sciences, such a bias is typically
addressed with restratification techniques, where observations are reweighted
according to how under- or over-sampled their socio-demographic groups are.
Yet, restratification is rarely evaluated for improving prediction. In this
two-part study, we first evaluate standard, "out-of-the-box" restratification
techniques, finding they provide no improvement and often even degraded
prediction accuracies across four tasks of estimating U.S. county population
health statistics from Twitter. The core reasons for degraded performance seem
to be tied to their reliance on either sparse or shrunken estimates of each
population's socio-demographics. In the second part of our study, we develop
and evaluate Robust Poststratification, which consists of three methods to
address these problems: (1) estimator redistribution to account for shrinking,
as well as (2) adaptive binning and (3) informed smoothing to handle sparse
socio-demographic estimates. We show that each of these methods leads to
significant improvement in prediction accuracies over the standard
restratification approaches. Taken together, Robust Poststratification enables
state-of-the-art prediction accuracies, yielding a 53.0% increase in variance
explained (R^2) in the case of surveyed life satisfaction, and a 17.8% average
increase across all tasks.
| [
{
"created": "Sun, 10 Nov 2019 05:13:29 GMT",
"version": "v1"
},
{
"created": "Fri, 31 Jul 2020 20:05:06 GMT",
"version": "v2"
},
{
"created": "Fri, 23 Jul 2021 21:48:35 GMT",
"version": "v3"
},
{
"created": "Tue, 7 Jun 2022 15:52:47 GMT",
"version": "v4"
}
] | 2022-06-08 | [
[
"Giorgi",
"Salvatore",
""
],
[
"Lynn",
"Veronica",
""
],
[
"Gupta",
"Keshav",
""
],
[
"Ahmed",
"Farhan",
""
],
[
"Matz",
"Sandra",
""
],
[
"Ungar",
"Lyle",
""
],
[
"Schwartz",
"H. Andrew",
""
]
] | Social media is increasingly used for large-scale population predictions, such as estimating community health statistics. However, social media users are not typically a representative sample of the intended population -- a "selection bias". Within the social sciences, such a bias is typically addressed with restratification techniques, where observations are reweighted according to how under- or over-sampled their socio-demographic groups are. Yet, restratification is rarely evaluated for improving prediction. In this two-part study, we first evaluate standard, "out-of-the-box" restratification techniques, finding they provide no improvement and often even degraded prediction accuracies across four tasks of estimating U.S. county population health statistics from Twitter. The core reasons for degraded performance seem to be tied to their reliance on either sparse or shrunken estimates of each population's socio-demographics. In the second part of our study, we develop and evaluate Robust Poststratification, which consists of three methods to address these problems: (1) estimator redistribution to account for shrinking, as well as (2) adaptive binning and (3) informed smoothing to handle sparse socio-demographic estimates. We show that each of these methods leads to significant improvement in prediction accuracies over the standard restratification approaches. Taken together, Robust Poststratification enables state-of-the-art prediction accuracies, yielding a 53.0% increase in variance explained (R^2) in the case of surveyed life satisfaction, and a 17.8% average increase across all tasks. |
1408.5955 | EPTCS | Amir M. Ben-Amram | The Hardness of Finding Linear Ranking Functions for Lasso Programs | In Proceedings GandALF 2014, arXiv:1408.5560. I thank the organizers
of the Dagstuhl Seminar 14141, "Reachability Problems for Infinite-State
Systems", for the opportunity to present an early draft of this work | EPTCS 161, 2014, pp. 32-45 | 10.4204/EPTCS.161.6 | null | cs.LO cs.CC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Finding whether a linear-constraint loop has a linear ranking function is an
important key to understanding the loop behavior, proving its termination and
establishing iteration bounds. If no preconditions are provided, the decision
problem is known to be in coNP when variables range over the integers and in
PTIME for the rational numbers, or real numbers. Here we show that deciding
whether a linear-constraint loop with a precondition, specifically with
partially-specified input, has a linear ranking function is EXPSPACE-hard over
the integers, and PSPACE-hard over the rationals. The precise complexity of
these decision problems is yet unknown. The EXPSPACE lower bound is derived
from the reachability problem for Petri nets (equivalently, Vector Addition
Systems), and possibly indicates an even stronger lower bound (subject to open
problems in VAS theory). The lower bound for the rationals follows from a novel
simulation of Boolean programs. Lower bounds are also given for the problem of
deciding if a linear ranking-function supported by a particular form of
inductive invariant exists. For loops over integers, the problem is PSPACE-hard
for convex polyhedral invariants and EXPSPACE-hard for downward-closed sets of
natural numbers as invariants.
| [
{
"created": "Tue, 26 Aug 2014 01:14:28 GMT",
"version": "v1"
}
] | 2014-08-27 | [
[
"Ben-Amram",
"Amir M.",
""
]
] | Finding whether a linear-constraint loop has a linear ranking function is an important key to understanding the loop behavior, proving its termination and establishing iteration bounds. If no preconditions are provided, the decision problem is known to be in coNP when variables range over the integers and in PTIME for the rational numbers, or real numbers. Here we show that deciding whether a linear-constraint loop with a precondition, specifically with partially-specified input, has a linear ranking function is EXPSPACE-hard over the integers, and PSPACE-hard over the rationals. The precise complexity of these decision problems is yet unknown. The EXPSPACE lower bound is derived from the reachability problem for Petri nets (equivalently, Vector Addition Systems), and possibly indicates an even stronger lower bound (subject to open problems in VAS theory). The lower bound for the rationals follows from a novel simulation of Boolean programs. Lower bounds are also given for the problem of deciding if a linear ranking-function supported by a particular form of inductive invariant exists. For loops over integers, the problem is PSPACE-hard for convex polyhedral invariants and EXPSPACE-hard for downward-closed sets of natural numbers as invariants. |
2301.01915 | Bin Lyu | Jie Jiang, Bin Lyu, Pengcheng Chen, and Zhen Yang | Sum-Rate Maximization in Active RIS-Assisted Multi-Antenna WPCN | Accepted by China Communications | null | null | null | cs.IT eess.SP math.IT | http://creativecommons.org/licenses/by/4.0/ | In this paper, we propose an active reconfigurable intelligent surface (RIS)
enabled hybrid relaying scheme for a multi-antenna wireless powered
communication network (WPCN), where the active RIS is employed to assist both
wireless energy transfer (WET) from the power station (PS) to
energy-constrained users and wireless information transmission (WIT) from users
to the receiving station (RS). For further performance enhancement, we propose
to employ both transmit beamforming at the PS and receive beamforming at the
RS. We formulate a sum-rate maximization problem by jointly optimizing the RIS
phase shifts and amplitude reflection coefficients for both the WET and the
WIT, transmit and receive beamforming vectors, and network resource allocation.
To solve this non-convex problem, we propose an efficient alternating
optimization algorithm with linear minimum mean squared error criterion,
semi-definite relaxation (SDR) and successive convex approximation techniques.
Specifically, the tightness of applying the SDR is proved. Simulation results
demonstrate that our proposed scheme with 10 reflecting elements (REs) and 4
antennas can achieve 17.78% and 415.48% performance gains compared to the
single-antenna scheme with 10 REs and passive RIS scheme with 100 REs,
respectively.
| [
{
"created": "Thu, 5 Jan 2023 05:26:14 GMT",
"version": "v1"
}
] | 2023-01-06 | [
[
"Jiang",
"Jie",
""
],
[
"Lyu",
"Bin",
""
],
[
"Chen",
"Pengcheng",
""
],
[
"Yang",
"Zhen",
""
]
] | In this paper, we propose an active reconfigurable intelligent surface (RIS) enabled hybrid relaying scheme for a multi-antenna wireless powered communication network (WPCN), where the active RIS is employed to assist both wireless energy transfer (WET) from the power station (PS) to energy-constrained users and wireless information transmission (WIT) from users to the receiving station (RS). For further performance enhancement, we propose to employ both transmit beamforming at the PS and receive beamforming at the RS. We formulate a sum-rate maximization problem by jointly optimizing the RIS phase shifts and amplitude reflection coefficients for both the WET and the WIT, transmit and receive beamforming vectors, and network resource allocation. To solve this non-convex problem, we propose an efficient alternating optimization algorithm with linear minimum mean squared error criterion, semi-definite relaxation (SDR) and successive convex approximation techniques. Specifically, the tightness of applying the SDR is proved. Simulation results demonstrate that our proposed scheme with 10 reflecting elements (REs) and 4 antennas can achieve 17.78% and 415.48% performance gains compared to the single-antenna scheme with 10 REs and passive RIS scheme with 100 REs, respectively. |
1804.08001 | Uri Stemmer | Haim Kaplan, Uri Stemmer | Differentially Private k-Means with Constant Multiplicative Error | null | null | null | null | cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We design new differentially private algorithms for the Euclidean k-means
problem, both in the centralized model and in the local model of differential
privacy. In both models, our algorithms achieve significantly better error
guarantees than the previous state-of-the-art. In addition, in the local model,
our algorithm significantly reduces the number of interaction rounds.
Although the problem has been widely studied in the context of differential
privacy, all of the existing constructions achieve only super-constant
approximation factors. We present, for the first time, efficient private
algorithms for the problem with constant multiplicative error. Furthermore, we
show how to modify our algorithms so they compute private coresets for k-means
clustering in both models.
| [
{
"created": "Sat, 21 Apr 2018 17:41:04 GMT",
"version": "v1"
},
{
"created": "Mon, 16 Jul 2018 16:08:54 GMT",
"version": "v2"
}
] | 2018-07-17 | [
[
"Kaplan",
"Haim",
""
],
[
"Stemmer",
"Uri",
""
]
] | We design new differentially private algorithms for the Euclidean k-means problem, both in the centralized model and in the local model of differential privacy. In both models, our algorithms achieve significantly better error guarantees than the previous state-of-the-art. In addition, in the local model, our algorithm significantly reduces the number of interaction rounds. Although the problem has been widely studied in the context of differential privacy, all of the existing constructions achieve only super-constant approximation factors. We present, for the first time, efficient private algorithms for the problem with constant multiplicative error. Furthermore, we show how to modify our algorithms so they compute private coresets for k-means clustering in both models. |
1007.5336 | Yiyue Wu | Yiyue Wu, Andreas Achtzehn, Marina Petrova, Petri Mahonen and Robert
Calderbank | The Value of Staying Current when Beamforming | 8 pages | null | null | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Beamforming is a widely used method of provisioning high quality wireless
channels that leads to high data rates and simple decoding structures. It
requires feedback of Channel State Information (CSI) from receiver to
transmitter, and the accuracy of this information is limited by rate
constraints on the feedback channel and by delay. It is important to understand
how the performance gains associated with beamforming depend on the accuracy or
currency of the Channel State Information. This paper quantifies performance
degradation caused by aging of CSI. It uses outage probability to measure the
currency of CSI, and to discount the performance gains associated with ideal
beamforming. Outage probability is a function of the beamforming algorithm and
results are presented for Transmit Antenna Selection and other widely used
methods. These results are translated into effective diversity orders for
Multiple Input Single Output (MISO) and Multiuser Multiple Input Multiple
Output (MIMO) systems.
| [
{
"created": "Thu, 29 Jul 2010 21:18:39 GMT",
"version": "v1"
}
] | 2010-08-02 | [
[
"Wu",
"Yiyue",
""
],
[
"Achtzehn",
"Andreas",
""
],
[
"Petrova",
"Marina",
""
],
[
"Mahonen",
"Petri",
""
],
[
"Calderbank",
"Robert",
""
]
] | Beamforming is a widely used method of provisioning high quality wireless channels that leads to high data rates and simple decoding structures. It requires feedback of Channel State Information (CSI) from receiver to transmitter, and the accuracy of this information is limited by rate constraints on the feedback channel and by delay. It is important to understand how the performance gains associated with beamforming depend on the accuracy or currency of the Channel State Information. This paper quantifies performance degradation caused by aging of CSI. It uses outage probability to measure the currency of CSI, and to discount the performance gains associated with ideal beamforming. Outage probability is a function of the beamforming algorithm and results are presented for Transmit Antenna Selection and other widely used methods. These results are translated into effective diversity orders for Multiple Input Single Output (MISO) and Multiuser Multiple Input Multiple Output (MIMO) systems. |
2010.11223 | Tim Genewein | Vladimir Mikulik, Gr\'egoire Del\'etang, Tom McGrath, Tim Genewein,
Miljan Martic, Shane Legg, Pedro A. Ortega | Meta-trained agents implement Bayes-optimal agents | Published at 34th Conference on Neural Information Processing Systems
(NeurIPS 2020), Vancouver, Canada | null | null | null | cs.AI cs.LG cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Memory-based meta-learning is a powerful technique to build agents that adapt
fast to any task within a target distribution. A previous theoretical study has
argued that this remarkable performance is because the meta-training protocol
incentivises agents to behave Bayes-optimally. We empirically investigate this
claim on a number of prediction and bandit tasks. Inspired by ideas from
theoretical computer science, we show that meta-learned and Bayes-optimal
agents not only behave alike, but they even share a similar computational
structure, in the sense that one agent system can approximately simulate the
other. Furthermore, we show that Bayes-optimal agents are fixed points of the
meta-learning dynamics. Our results suggest that memory-based meta-learning
might serve as a general technique for numerically approximating Bayes-optimal
agents - that is, even for task distributions for which we currently don't
possess tractable models.
| [
{
"created": "Wed, 21 Oct 2020 18:05:21 GMT",
"version": "v1"
}
] | 2020-10-23 | [
[
"Mikulik",
"Vladimir",
""
],
[
"Delétang",
"Grégoire",
""
],
[
"McGrath",
"Tom",
""
],
[
"Genewein",
"Tim",
""
],
[
"Martic",
"Miljan",
""
],
[
"Legg",
"Shane",
""
],
[
"Ortega",
"Pedro A.",
""
]
] | Memory-based meta-learning is a powerful technique to build agents that adapt fast to any task within a target distribution. A previous theoretical study has argued that this remarkable performance is because the meta-training protocol incentivises agents to behave Bayes-optimally. We empirically investigate this claim on a number of prediction and bandit tasks. Inspired by ideas from theoretical computer science, we show that meta-learned and Bayes-optimal agents not only behave alike, but they even share a similar computational structure, in the sense that one agent system can approximately simulate the other. Furthermore, we show that Bayes-optimal agents are fixed points of the meta-learning dynamics. Our results suggest that memory-based meta-learning might serve as a general technique for numerically approximating Bayes-optimal agents - that is, even for task distributions for which we currently don't possess tractable models. |
2110.15157 | Gaoxiong Zeng | Zeng Gaoxiong and Chen Li and Yi Bairen and Chen Kai | Optimizing Tail Latency in Commodity Datacenters using Forward Error
Correction | 13 pages | null | null | null | cs.NI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Long tail latency of short flows (or messages) greatly affects user-facing
applications in datacenters. Prior solutions to the problem introduce
significant implementation complexities, such as global state monitoring,
complex network control, or non-trivial switch modifications. While promising
superior performance, they are hard to implement in practice. This paper
presents CloudBurst, a simple, effective yet readily deployable solution
achieving similar or even better results without introducing the above
complexities. At its core, CloudBurst explores forward error correction (FEC)
over multipath - it proactively spreads FEC-coded packets generated from
messages over multipath in parallel, and recovers them with the first few
arriving ones. As a result, CloudBurst is able to obliviously exploit
underutilized paths, thus achieving low tail latency. We have implemented
CloudBurst as a user-space library, and deployed it on a testbed with commodity
switches. Our testbed and simulation experiments show the superior performance
of CloudBurst. For example, CloudBurst achieves 63.69% and 60.06% reduction in
99th percentile message/flow completion time (FCT) compared to DCTCP and PIAS,
respectively.
| [
{
"created": "Thu, 28 Oct 2021 14:30:30 GMT",
"version": "v1"
}
] | 2021-10-29 | [
[
"Gaoxiong",
"Zeng",
""
],
[
"Li",
"Chen",
""
],
[
"Bairen",
"Yi",
""
],
[
"Kai",
"Chen",
""
]
] | Long tail latency of short flows (or messages) greatly affects user-facing applications in datacenters. Prior solutions to the problem introduce significant implementation complexities, such as global state monitoring, complex network control, or non-trivial switch modifications. While promising superior performance, they are hard to implement in practice. This paper presents CloudBurst, a simple, effective yet readily deployable solution achieving similar or even better results without introducing the above complexities. At its core, CloudBurst explores forward error correction (FEC) over multipath - it proactively spreads FEC-coded packets generated from messages over multipath in parallel, and recovers them with the first few arriving ones. As a result, CloudBurst is able to obliviously exploit underutilized paths, thus achieving low tail latency. We have implemented CloudBurst as a user-space library, and deployed it on a testbed with commodity switches. Our testbed and simulation experiments show the superior performance of CloudBurst. For example, CloudBurst achieves 63.69% and 60.06% reduction in 99th percentile message/flow completion time (FCT) compared to DCTCP and PIAS, respectively. |
2204.08754 | Amirhossein Mozafari | Binay Bhattacharya, Amirhossein Mozafari, Thomas C. Shermer | An Efficient Algorithm for the Proximity Connected Two Center Problem | null | null | null | null | cs.CG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Given a set $P$ of $n$ points in the plane, the $k$-center problem is to find
$k$ congruent disks of minimum possible radius such that their union covers all
the points in $P$. The $2$-center problem is a special case of the $k$-center
problem that has been extensively studied in the recent past \cite{CAHN,HT,SH}.
In this paper, we consider a generalized version of the $2$-center problem
called \textit{proximity connected} $2$-center (PCTC) problem. In this problem,
we are also given a parameter $\delta\geq 0$ and we have the additional
constraint that the distance between the centers of the disks should be at most
$\delta$. Note that when $\delta=0$, the PCTC problem is reduced to the
$1$-center(minimum enclosing disk) problem and when $\delta$ tends to infinity,
it is reduced to the $2$-center problem. The PCTC problem first appeared in the
context of wireless networks in 1992 \cite{ACN0}, but obtaining a nontrivial
deterministic algorithm for the problem remained open. In this paper, we
resolve this open problem by providing a deterministic $O(n^2\log n)$ time
algorithm for the problem.
| [
{
"created": "Tue, 19 Apr 2022 08:49:40 GMT",
"version": "v1"
}
] | 2022-04-20 | [
[
"Bhattacharya",
"Binay",
""
],
[
"Mozafari",
"Amirhossein",
""
],
[
"Shermer",
"Thomas C.",
""
]
] | Given a set $P$ of $n$ points in the plane, the $k$-center problem is to find $k$ congruent disks of minimum possible radius such that their union covers all the points in $P$. The $2$-center problem is a special case of the $k$-center problem that has been extensively studied in the recent past \cite{CAHN,HT,SH}. In this paper, we consider a generalized version of the $2$-center problem called \textit{proximity connected} $2$-center (PCTC) problem. In this problem, we are also given a parameter $\delta\geq 0$ and we have the additional constraint that the distance between the centers of the disks should be at most $\delta$. Note that when $\delta=0$, the PCTC problem is reduced to the $1$-center(minimum enclosing disk) problem and when $\delta$ tends to infinity, it is reduced to the $2$-center problem. The PCTC problem first appeared in the context of wireless networks in 1992 \cite{ACN0}, but obtaining a nontrivial deterministic algorithm for the problem remained open. In this paper, we resolve this open problem by providing a deterministic $O(n^2\log n)$ time algorithm for the problem. |
2203.03821 | Mengzhao Chen | Mengzhao Chen, Mingbao Lin, Ke Li, Yunhang Shen, Yongjian Wu, Fei
Chao, Rongrong Ji | CF-ViT: A General Coarse-to-Fine Method for Vision Transformer | Accepted by AAAI 2023 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Vision Transformers (ViT) have made many breakthroughs in computer vision
tasks. However, considerable redundancy arises in the spatial dimension of an
input image, leading to massive computational costs. Therefore, in this paper
we propose a coarse-to-fine vision transformer (CF-ViT) to relieve the
computational burden while retaining performance. Our proposed CF-ViT is
motivated by two important observations in modern ViT models: (1)
coarse-grained patch splitting can locate the informative regions of an input
image, and (2) most images can be well recognized by a ViT model using a
small-length token sequence. Therefore, our CF-ViT implements network inference
in a two-stage manner. At the coarse inference stage, an input image is split
into a small-length patch sequence for a computationally economical
classification. If the image is not well recognized, its informative patches
are identified and further re-split at a fine-grained granularity. Extensive
experiments demonstrate the efficacy of our CF-ViT. For example, without any
compromise on performance, CF-ViT reduces the FLOPs of LV-ViT by 53% and also
achieves a 2.01x throughput gain.
| [
{
"created": "Tue, 8 Mar 2022 02:57:49 GMT",
"version": "v1"
},
{
"created": "Fri, 17 Jun 2022 04:33:20 GMT",
"version": "v2"
},
{
"created": "Tue, 12 Jul 2022 06:54:26 GMT",
"version": "v3"
},
{
"created": "Tue, 19 Jul 2022 12:45:08 GMT",
"version": "v4"
},
{
"created": "Mon, 21 Nov 2022 09:47:20 GMT",
"version": "v5"
}
] | 2022-11-22 | [
[
"Chen",
"Mengzhao",
""
],
[
"Lin",
"Mingbao",
""
],
[
"Li",
"Ke",
""
],
[
"Shen",
"Yunhang",
""
],
[
"Wu",
"Yongjian",
""
],
[
"Chao",
"Fei",
""
],
[
"Ji",
"Rongrong",
""
]
] | Vision Transformers (ViT) have made many breakthroughs in computer vision tasks. However, considerable redundancy arises in the spatial dimension of an input image, leading to massive computational costs. Therefore, in this paper we propose a coarse-to-fine vision transformer (CF-ViT) to relieve the computational burden while retaining performance. Our proposed CF-ViT is motivated by two important observations in modern ViT models: (1) coarse-grained patch splitting can locate the informative regions of an input image, and (2) most images can be well recognized by a ViT model using a small-length token sequence. Therefore, our CF-ViT implements network inference in a two-stage manner. At the coarse inference stage, an input image is split into a small-length patch sequence for a computationally economical classification. If the image is not well recognized, its informative patches are identified and further re-split at a fine-grained granularity. Extensive experiments demonstrate the efficacy of our CF-ViT. For example, without any compromise on performance, CF-ViT reduces the FLOPs of LV-ViT by 53% and also achieves a 2.01x throughput gain. |
2012.14884 | Henry Corrigan-Gibbs | Dan Boneh, Elette Boyle, Henry Corrigan-Gibbs, Niv Gilboa, and Yuval
Ishai | Lightweight Techniques for Private Heavy Hitters | Appeared at IEEE Security & Privacy 2021 | null | null | null | cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents Poplar, a new system for solving the private
heavy-hitters problem. In this problem, there are many clients and a small set
of data-collection servers. Each client holds a private bitstring. The servers
want to recover the set of all popular strings, without learning anything else
about any client's string. A web-browser vendor, for instance, can use Poplar
to figure out which homepages are popular, without learning any user's
homepage. We also consider the simpler private subset-histogram problem, in
which the servers want to count how many clients hold strings in a particular
set without revealing this set to the clients.
Poplar uses two data-collection servers and, in a protocol run, each client
sends only a single message to the servers. Poplar protects client privacy
against arbitrary misbehavior by one of the servers, and our approach requires
no public-key cryptography (except for secure channels), nor general-purpose
multiparty computation. Instead, we rely on incremental distributed point
functions, a new cryptographic tool that allows a client to succinctly
secret-share the labels on the nodes of an exponentially large binary tree,
provided that the tree has a single non-zero path. Along the way, we develop
new general tools for providing malicious security in applications of
distributed point functions.
| [
{
"created": "Tue, 29 Dec 2020 18:20:16 GMT",
"version": "v1"
},
{
"created": "Fri, 2 Apr 2021 21:13:32 GMT",
"version": "v2"
},
{
"created": "Tue, 4 Jan 2022 11:17:58 GMT",
"version": "v3"
},
{
"created": "Fri, 6 May 2022 13:58:21 GMT",
"version": "v4"
},
{
"created": "Thu, 23 Mar 2023 20:03:11 GMT",
"version": "v5"
}
] | 2023-03-27 | [
[
"Boneh",
"Dan",
""
],
[
"Boyle",
"Elette",
""
],
[
"Corrigan-Gibbs",
"Henry",
""
],
[
"Gilboa",
"Niv",
""
],
[
"Ishai",
"Yuval",
""
]
] | This paper presents Poplar, a new system for solving the private heavy-hitters problem. In this problem, there are many clients and a small set of data-collection servers. Each client holds a private bitstring. The servers want to recover the set of all popular strings, without learning anything else about any client's string. A web-browser vendor, for instance, can use Poplar to figure out which homepages are popular, without learning any user's homepage. We also consider the simpler private subset-histogram problem, in which the servers want to count how many clients hold strings in a particular set without revealing this set to the clients. Poplar uses two data-collection servers and, in a protocol run, each client sends only a single message to the servers. Poplar protects client privacy against arbitrary misbehavior by one of the servers, and our approach requires no public-key cryptography (except for secure channels), nor general-purpose multiparty computation. Instead, we rely on incremental distributed point functions, a new cryptographic tool that allows a client to succinctly secret-share the labels on the nodes of an exponentially large binary tree, provided that the tree has a single non-zero path. Along the way, we develop new general tools for providing malicious security in applications of distributed point functions. |