| id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | orig_abstract | versions | update_date | authors_parsed | abstract |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1906.06678 | Zhi-Xiu Ye | Zhi-Xiu Ye and Zhen-Hua Ling | Multi-Level Matching and Aggregation Network for Few-Shot Relation Classification | ACL 2019 | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents a multi-level matching and aggregation network (MLMAN) for few-shot relation classification. Previous studies on this topic adopt prototypical networks, which calculate the embedding vector of a query instance and the prototype vector of each support set independently. In contrast, our proposed MLMAN model encodes the query instance and each support set in an interactive way by considering their matching information at both local and instance levels. The final class prototype for each support set is obtained by attentive aggregation over the representations of its support instances, where the weights are calculated using the query instance. Experimental results demonstrate the effectiveness of our proposed methods, which achieve a new state-of-the-art performance on the FewRel dataset. | [{"created": "Sun, 16 Jun 2019 13:10:33 GMT", "version": "v1"}] | 2019-06-18 | [["Ye", "Zhi-Xiu", ""], ["Ling", "Zhen-Hua", ""]] | This paper presents a multi-level matching and aggregation network (MLMAN) for few-shot relation classification. Previous studies on this topic adopt prototypical networks, which calculate the embedding vector of a query instance and the prototype vector of each support set independently. In contrast, our proposed MLMAN model encodes the query instance and each support set in an interactive way by considering their matching information at both local and instance levels. The final class prototype for each support set is obtained by attentive aggregation over the representations of its support instances, where the weights are calculated using the query instance. Experimental results demonstrate the effectiveness of our proposed methods, which achieve a new state-of-the-art performance on the FewRel dataset. |
| 2008.10238 | Sunjae Yoon | Minuk Ma, Sunjae Yoon, Junyeong Kim, Youngjoon Lee, Sunghun Kang, and Chang D. Yoo | VLANet: Video-Language Alignment Network for Weakly-Supervised Video Moment Retrieval | 16 pages, 6 figures, European Conference on Computer Vision, 2020 | null | 10.1007/978-3-030-58604-1_10 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Video Moment Retrieval (VMR) is a task to localize the temporal moment in untrimmed video specified by natural language query. For VMR, several methods that require full supervision for training have been proposed. Unfortunately, acquiring a large number of training videos with labeled temporal boundaries for each query is a labor-intensive process. This paper explores methods for performing VMR in a weakly-supervised manner (wVMR): training is performed without temporal moment labels but only with the text query that describes a segment of the video. Existing methods on wVMR generate multi-scale proposals and apply query-guided attention mechanisms to highlight the most relevant proposal. To leverage the weak supervision, contrastive learning is used which predicts higher scores for the correct video-query pairs than for the incorrect pairs. It has been observed that a large number of candidate proposals, coarse query representation, and one-way attention mechanism lead to blurry attention maps which limit the localization performance. To handle this issue, Video-Language Alignment Network (VLANet) is proposed that learns sharper attention by pruning out spurious candidate proposals and applying a multi-directional attention mechanism with fine-grained query representation. The Surrogate Proposal Selection module selects a proposal based on the proximity to the query in the joint embedding space, and thus substantially reduces candidate proposals which leads to lower computation load and sharper attention. Next, the Cascaded Cross-modal Attention module considers dense feature interactions and multi-directional attention flow to learn the multi-modal alignment. VLANet is trained end-to-end using contrastive loss which enforces semantically similar videos and queries to gather. The experiments show that the method achieves state-of-the-art performance on Charades-STA and DiDeMo datasets. | [{"created": "Mon, 24 Aug 2020 07:54:59 GMT", "version": "v1"}] | 2023-10-10 | [["Ma", "Minuk", ""], ["Yoon", "Sunjae", ""], ["Kim", "Junyeong", ""], ["Lee", "Youngjoon", ""], ["Kang", "Sunghun", ""], ["Yoo", "Chang D.", ""]] | Video Moment Retrieval (VMR) is a task to localize the temporal moment in untrimmed video specified by natural language query. For VMR, several methods that require full supervision for training have been proposed. Unfortunately, acquiring a large number of training videos with labeled temporal boundaries for each query is a labor-intensive process. This paper explores methods for performing VMR in a weakly-supervised manner (wVMR): training is performed without temporal moment labels but only with the text query that describes a segment of the video. Existing methods on wVMR generate multi-scale proposals and apply query-guided attention mechanisms to highlight the most relevant proposal. To leverage the weak supervision, contrastive learning is used which predicts higher scores for the correct video-query pairs than for the incorrect pairs. It has been observed that a large number of candidate proposals, coarse query representation, and one-way attention mechanism lead to blurry attention maps which limit the localization performance. To handle this issue, Video-Language Alignment Network (VLANet) is proposed that learns sharper attention by pruning out spurious candidate proposals and applying a multi-directional attention mechanism with fine-grained query representation. The Surrogate Proposal Selection module selects a proposal based on the proximity to the query in the joint embedding space, and thus substantially reduces candidate proposals which leads to lower computation load and sharper attention. Next, the Cascaded Cross-modal Attention module considers dense feature interactions and multi-directional attention flow to learn the multi-modal alignment. VLANet is trained end-to-end using contrastive loss which enforces semantically similar videos and queries to gather. The experiments show that the method achieves state-of-the-art performance on Charades-STA and DiDeMo datasets. |
| 2404.13874 | Haoyi Qiu | Haoyi Qiu, Wenbo Hu, Zi-Yi Dou, Nanyun Peng | VALOR-EVAL: Holistic Coverage and Faithfulness Evaluation of Large Vision-Language Models | ACL 2024 Findings | null | null | null | cs.CL cs.CV | http://creativecommons.org/licenses/by/4.0/ | Large Vision-Language Models (LVLMs) suffer from hallucination issues, wherein the models generate plausible-sounding but factually incorrect outputs, undermining their reliability. A comprehensive quantitative evaluation is necessary to identify and understand the extent of hallucinations in these models. However, existing benchmarks are often limited in scope, focusing mainly on object hallucinations. Furthermore, current evaluation methods struggle to effectively address the subtle semantic distinctions between model outputs and reference data, as well as the balance between hallucination and informativeness. To address these issues, we introduce a multi-dimensional benchmark covering objects, attributes, and relations, with challenging images selected based on associative biases. Moreover, we propose a large language model (LLM)-based two-stage evaluation framework that generalizes the popular CHAIR metric and incorporates both faithfulness and coverage into the evaluation. Experiments on 10 established LVLMs demonstrate that our evaluation metric is more comprehensive and better correlated with humans than existing work when evaluating on our challenging human-annotated benchmark dataset. Our work also highlights the critical balance between faithfulness and coverage of model outputs, and encourages future works to address hallucinations in LVLMs while keeping their outputs informative. | [{"created": "Mon, 22 Apr 2024 04:49:22 GMT", "version": "v1"}, {"created": "Thu, 6 Jun 2024 02:53:37 GMT", "version": "v2"}, {"created": "Sun, 14 Jul 2024 23:11:05 GMT", "version": "v3"}] | 2024-07-16 | [["Qiu", "Haoyi", ""], ["Hu", "Wenbo", ""], ["Dou", "Zi-Yi", ""], ["Peng", "Nanyun", ""]] | Large Vision-Language Models (LVLMs) suffer from hallucination issues, wherein the models generate plausible-sounding but factually incorrect outputs, undermining their reliability. A comprehensive quantitative evaluation is necessary to identify and understand the extent of hallucinations in these models. However, existing benchmarks are often limited in scope, focusing mainly on object hallucinations. Furthermore, current evaluation methods struggle to effectively address the subtle semantic distinctions between model outputs and reference data, as well as the balance between hallucination and informativeness. To address these issues, we introduce a multi-dimensional benchmark covering objects, attributes, and relations, with challenging images selected based on associative biases. Moreover, we propose a large language model (LLM)-based two-stage evaluation framework that generalizes the popular CHAIR metric and incorporates both faithfulness and coverage into the evaluation. Experiments on 10 established LVLMs demonstrate that our evaluation metric is more comprehensive and better correlated with humans than existing work when evaluating on our challenging human-annotated benchmark dataset. Our work also highlights the critical balance between faithfulness and coverage of model outputs, and encourages future works to address hallucinations in LVLMs while keeping their outputs informative. |
| 2103.15108 | Wanhua Li | Wanhua Li, Shiwei Wang, Jiwen Lu, Jianjiang Feng, Jie Zhou | Meta-Mining Discriminative Samples for Kinship Verification | Accepted by CVPR2021 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Kinship verification aims to find out whether there is a kin relation for a given pair of facial images. Kinship verification databases are born with unbalanced data. For a database with N positive kinship pairs, we naturally obtain N(N-1) negative pairs. How to fully utilize the limited positive pairs and mine discriminative information from sufficient negative samples for kinship verification remains an open issue. To address this problem, we propose a Discriminative Sample Meta-Mining (DSMM) approach in this paper. Unlike existing methods that usually construct a balanced dataset with fixed negative pairs, we propose to utilize all possible pairs and automatically learn discriminative information from data. Specifically, we sample an unbalanced train batch and a balanced meta-train batch for each iteration. Then we learn a meta-miner with the meta-gradient on the balanced meta-train batch. In the end, the samples in the unbalanced train batch are re-weighted by the learned meta-miner to optimize the kinship models. Experimental results on the widely used KinFaceW-I, KinFaceW-II, TSKinFace, and Cornell Kinship datasets demonstrate the effectiveness of the proposed approach. | [{"created": "Sun, 28 Mar 2021 11:47:07 GMT", "version": "v1"}] | 2021-03-30 | [["Li", "Wanhua", ""], ["Wang", "Shiwei", ""], ["Lu", "Jiwen", ""], ["Feng", "Jianjiang", ""], ["Zhou", "Jie", ""]] | Kinship verification aims to find out whether there is a kin relation for a given pair of facial images. Kinship verification databases are born with unbalanced data. For a database with N positive kinship pairs, we naturally obtain N(N-1) negative pairs. How to fully utilize the limited positive pairs and mine discriminative information from sufficient negative samples for kinship verification remains an open issue. To address this problem, we propose a Discriminative Sample Meta-Mining (DSMM) approach in this paper. Unlike existing methods that usually construct a balanced dataset with fixed negative pairs, we propose to utilize all possible pairs and automatically learn discriminative information from data. Specifically, we sample an unbalanced train batch and a balanced meta-train batch for each iteration. Then we learn a meta-miner with the meta-gradient on the balanced meta-train batch. In the end, the samples in the unbalanced train batch are re-weighted by the learned meta-miner to optimize the kinship models. Experimental results on the widely used KinFaceW-I, KinFaceW-II, TSKinFace, and Cornell Kinship datasets demonstrate the effectiveness of the proposed approach. |
| 1101.5938 | Vojt\v{e}ch P\v{r}ehnal Mgr. | Vojtech Prehnal | Dialog interface for dynamic data models | null | null | null | null | cs.SE cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, the new information system development methodology will be proposed. This methodology will enable the whole data model to be built and adjusted at the run time, without rebuilding the application. This will make the user much more powerful and independent on the manufacturer of the system. It will also cut the price and shorten the development time of the information systems dramatically, because common business logic will not have to be implemented for each individual table and the major part of the user interface will be generated automatically. | [{"created": "Mon, 31 Jan 2011 12:39:26 GMT", "version": "v1"}] | 2011-02-01 | [["Prehnal", "Vojtech", ""]] | In this paper, the new information system development methodology will be proposed. This methodology will enable the whole data model to be built and adjusted at the run time, without rebuilding the application. This will make the user much more powerful and independent on the manufacturer of the system. It will also cut the price and shorten the development time of the information systems dramatically, because common business logic will not have to be implemented for each individual table and the major part of the user interface will be generated automatically. |
| 2402.12177 | Mingtian Zhang | Mingtian Zhang, Shawn Lan, Peter Hayes, David Barber | Mafin: Enhancing Black-Box Embeddings with Model Augmented Fine-Tuning | null | null | null | null | cs.LG cs.AI cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Retrieval Augmented Generation (RAG) has emerged as an effective solution for mitigating hallucinations in Large Language Models (LLMs). The retrieval stage in RAG typically involves a pre-trained embedding model, which converts queries and passages into vectors to capture their semantics. However, a standard pre-trained embedding model may exhibit sub-optimal performance when applied to specific domain knowledge, necessitating fine-tuning. This paper addresses scenarios where the embeddings are only available from a black-box model. We introduce Model augmented fine-tuning (Mafin) -- a novel approach for fine-tuning a black-box embedding model by augmenting it with a trainable embedding model. Our results demonstrate that Mafin significantly enhances the performance of the black-box embeddings by only requiring the training of a small augmented model. We validate the effectiveness of our method on both labeled and unlabeled datasets, illustrating its broad applicability and efficiency. | [{"created": "Mon, 19 Feb 2024 14:33:24 GMT", "version": "v1"}, {"created": "Mon, 26 Feb 2024 11:54:12 GMT", "version": "v2"}, {"created": "Tue, 5 Mar 2024 07:08:16 GMT", "version": "v3"}, {"created": "Tue, 12 Mar 2024 16:04:23 GMT", "version": "v4"}] | 2024-03-13 | [["Zhang", "Mingtian", ""], ["Lan", "Shawn", ""], ["Hayes", "Peter", ""], ["Barber", "David", ""]] | Retrieval Augmented Generation (RAG) has emerged as an effective solution for mitigating hallucinations in Large Language Models (LLMs). The retrieval stage in RAG typically involves a pre-trained embedding model, which converts queries and passages into vectors to capture their semantics. However, a standard pre-trained embedding model may exhibit sub-optimal performance when applied to specific domain knowledge, necessitating fine-tuning. This paper addresses scenarios where the embeddings are only available from a black-box model. We introduce Model augmented fine-tuning (Mafin) -- a novel approach for fine-tuning a black-box embedding model by augmenting it with a trainable embedding model. Our results demonstrate that Mafin significantly enhances the performance of the black-box embeddings by only requiring the training of a small augmented model. We validate the effectiveness of our method on both labeled and unlabeled datasets, illustrating its broad applicability and efficiency. |
| 2405.01590 | Haithem Kchaou | Manel Aloui, Hasna Chouikhi, Ghaith Chaabane, Haithem Kchaou, Chehir Dhaouadi | 101 Billion Arabic Words Dataset | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | In recent years, Large Language Models have revolutionized the field of natural language processing, showcasing an impressive rise predominantly in English-centric domains. These advancements have set a global benchmark, inspiring significant efforts toward developing Arabic LLMs capable of understanding and generating the Arabic language with remarkable accuracy. Despite these advancements, a critical challenge persists: the potential bias in Arabic LLMs, primarily attributed to their reliance on datasets comprising English data that has been translated into Arabic. This reliance not only compromises the authenticity of the generated content but also reflects a broader issue -the scarcity of original quality Arabic linguistic data. This study aims to address the data scarcity in the Arab world and to encourage the development of Arabic Language Models that are true to both the linguistic and nuances of the region. We undertook a large-scale data mining project, extracting a substantial volume of text from the Common Crawl WET files, specifically targeting Arabic content. The extracted data underwent a rigorous cleaning and deduplication process, using innovative techniques to ensure the integrity and uniqueness of the dataset. The result is the 101 Billion Arabic Words Dataset, the largest Arabic dataset available to date, which can significantly contribute to the development of authentic Arabic LLMs. This study not only highlights the potential for creating linguistically and culturally accurate Arabic LLMs but also sets a precedent for future research in enhancing the authenticity of Arabic language models. | [{"created": "Mon, 29 Apr 2024 13:15:03 GMT", "version": "v1"}] | 2024-05-06 | [["Aloui", "Manel", ""], ["Chouikhi", "Hasna", ""], ["Chaabane", "Ghaith", ""], ["Kchaou", "Haithem", ""], ["Dhaouadi", "Chehir", ""]] | In recent years, Large Language Models have revolutionized the field of natural language processing, showcasing an impressive rise predominantly in English-centric domains. These advancements have set a global benchmark, inspiring significant efforts toward developing Arabic LLMs capable of understanding and generating the Arabic language with remarkable accuracy. Despite these advancements, a critical challenge persists: the potential bias in Arabic LLMs, primarily attributed to their reliance on datasets comprising English data that has been translated into Arabic. This reliance not only compromises the authenticity of the generated content but also reflects a broader issue -the scarcity of original quality Arabic linguistic data. This study aims to address the data scarcity in the Arab world and to encourage the development of Arabic Language Models that are true to both the linguistic and nuances of the region. We undertook a large-scale data mining project, extracting a substantial volume of text from the Common Crawl WET files, specifically targeting Arabic content. The extracted data underwent a rigorous cleaning and deduplication process, using innovative techniques to ensure the integrity and uniqueness of the dataset. The result is the 101 Billion Arabic Words Dataset, the largest Arabic dataset available to date, which can significantly contribute to the development of authentic Arabic LLMs. This study not only highlights the potential for creating linguistically and culturally accurate Arabic LLMs but also sets a precedent for future research in enhancing the authenticity of Arabic language models. |
| 1911.10392 | Mohsen Mesgar | Mohsen Mesgar, Paul Youssef, Lin Li, Dominik Bierwirth, Yihao Li, Christian M. Meyer, Iryna Gurevych | When is ACL's Deadline? A Scientific Conversational Agent | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Our conversational agent UKP-ATHENA assists NLP researchers in finding and exploring scientific literature, identifying relevant authors, planning or post-processing conference visits, and preparing paper submissions using a unified interface based on natural language inputs and responses. UKP-ATHENA enables new access paths to our swiftly evolving research area with its massive amounts of scientific information and high turnaround times. UKP-ATHENA's responses connect information from multiple heterogeneous sources which researchers currently have to explore manually one after another. Unlike a search engine, UKP-ATHENA maintains the context of a conversation to allow for efficient information access on papers, researchers, and conferences. Our architecture consists of multiple components with reference implementations that can be easily extended by new skills and domains. Our user-based evaluation shows that UKP-ATHENA already responds 45% of different formulations of defined intents with 37% information coverage rate. | [{"created": "Sat, 23 Nov 2019 17:41:02 GMT", "version": "v1"}] | 2019-11-26 | [["Mesgar", "Mohsen", ""], ["Youssef", "Paul", ""], ["Li", "Lin", ""], ["Bierwirth", "Dominik", ""], ["Li", "Yihao", ""], ["Meyer", "Christian M.", ""], ["Gurevych", "Iryna", ""]] | Our conversational agent UKP-ATHENA assists NLP researchers in finding and exploring scientific literature, identifying relevant authors, planning or post-processing conference visits, and preparing paper submissions using a unified interface based on natural language inputs and responses. UKP-ATHENA enables new access paths to our swiftly evolving research area with its massive amounts of scientific information and high turnaround times. UKP-ATHENA's responses connect information from multiple heterogeneous sources which researchers currently have to explore manually one after another. Unlike a search engine, UKP-ATHENA maintains the context of a conversation to allow for efficient information access on papers, researchers, and conferences. Our architecture consists of multiple components with reference implementations that can be easily extended by new skills and domains. Our user-based evaluation shows that UKP-ATHENA already responds 45% of different formulations of defined intents with 37% information coverage rate. |
| 2405.11431 | Rohitash Chandra | Jingyang Wu, Xinyi Zhang, Fangyixuan Huang, Haochen Zhou, Rohtiash Chandra | Review of deep learning models for crypto price prediction: implementation and evaluation | null | null | null | null | cs.LG q-fin.ST stat.ML | http://creativecommons.org/licenses/by-nc-nd/4.0/ | There has been much interest in accurate cryptocurrency price forecast models by investors and researchers. Deep Learning models are prominent machine learning techniques that have transformed various fields and have shown potential for finance and economics. Although various deep learning models have been explored for cryptocurrency price forecasting, it is not clear which models are suitable due to high market volatility. In this study, we review the literature about deep learning for cryptocurrency price forecasting and evaluate novel deep learning models for cryptocurrency stock price prediction. Our deep learning models include variants of long short-term memory (LSTM) recurrent neural networks, variants of convolutional neural networks (CNNs), and the Transformer model. We evaluate univariate and multivariate approaches for multi-step ahead predicting of cryptocurrencies close-price. We also carry out volatility analysis on the four cryptocurrencies which reveals significant fluctuations in their prices throughout the COVID-19 pandemic. Additionally, we investigate the prediction accuracy of two scenarios identified by different training sets for the models. First, we use the pre-COVID-19 datasets to model cryptocurrency close-price forecasting during the early period of COVID-19. Secondly, we utilise data from the COVID-19 period to predict prices for 2023 to 2024. Our results show that the convolutional LSTM with a multivariate approach provides the best prediction accuracy in two major experimental settings. Our results also indicate that the multivariate deep learning models exhibit better performance in forecasting four different cryptocurrencies when compared to the univariate models. | [{"created": "Sun, 19 May 2024 03:15:27 GMT", "version": "v1"}, {"created": "Sun, 2 Jun 2024 07:20:29 GMT", "version": "v2"}] | 2024-06-04 | [["Wu", "Jingyang", ""], ["Zhang", "Xinyi", ""], ["Huang", "Fangyixuan", ""], ["Zhou", "Haochen", ""], ["Chandra", "Rohtiash", ""]] | There has been much interest in accurate cryptocurrency price forecast models by investors and researchers. Deep Learning models are prominent machine learning techniques that have transformed various fields and have shown potential for finance and economics. Although various deep learning models have been explored for cryptocurrency price forecasting, it is not clear which models are suitable due to high market volatility. In this study, we review the literature about deep learning for cryptocurrency price forecasting and evaluate novel deep learning models for cryptocurrency stock price prediction. Our deep learning models include variants of long short-term memory (LSTM) recurrent neural networks, variants of convolutional neural networks (CNNs), and the Transformer model. We evaluate univariate and multivariate approaches for multi-step ahead predicting of cryptocurrencies close-price. We also carry out volatility analysis on the four cryptocurrencies which reveals significant fluctuations in their prices throughout the COVID-19 pandemic. Additionally, we investigate the prediction accuracy of two scenarios identified by different training sets for the models. First, we use the pre-COVID-19 datasets to model cryptocurrency close-price forecasting during the early period of COVID-19. Secondly, we utilise data from the COVID-19 period to predict prices for 2023 to 2024. Our results show that the convolutional LSTM with a multivariate approach provides the best prediction accuracy in two major experimental settings. Our results also indicate that the multivariate deep learning models exhibit better performance in forecasting four different cryptocurrencies when compared to the univariate models. |
| 1606.05688 | Aleksandar Zlateski | Aleksandar Zlateski, Kisuk Lee and H. Sebastian Seung | ZNNi - Maximizing the Inference Throughput of 3D Convolutional Networks on Multi-Core CPUs and GPUs | null | null | null | null | cs.DC cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Sliding window convolutional networks (ConvNets) have become a popular approach to computer vision problems such as image segmentation, and object detection and localization. Here we consider the problem of inference, the application of a previously trained ConvNet, with emphasis on 3D images. Our goal is to maximize throughput, defined as average number of output voxels computed per unit time. Other things being equal, processing a larger image tends to increase throughput, because fractionally less computation is wasted on the borders of the image. It follows that an apparently slower algorithm may end up having higher throughput if it can process a larger image within the constraint of the available RAM. We introduce novel CPU and GPU primitives for convolutional and pooling layers, which are designed to minimize memory overhead. The primitives include convolution based on highly efficient pruned FFTs. Our theoretical analyses and empirical tests reveal a number of interesting findings. For some ConvNet architectures, cuDNN is outperformed by our FFT-based GPU primitives, and these in turn can be outperformed by our CPU primitives. The CPU manages to achieve higher throughput because of its fast access to more RAM. A novel primitive in which the GPU accesses host RAM can significantly increase GPU throughput. Finally, a CPU-GPU algorithm achieves the greatest throughput of all, 10x or more than other publicly available implementations of sliding window 3D ConvNets. All of our code has been made available as open source project. | [{"created": "Fri, 17 Jun 2016 22:16:39 GMT", "version": "v1"}] | 2016-06-21 | [["Zlateski", "Aleksandar", ""], ["Lee", "Kisuk", ""], ["Seung", "H. Sebastian", ""]] | Sliding window convolutional networks (ConvNets) have become a popular approach to computer vision problems such as image segmentation, and object detection and localization. Here we consider the problem of inference, the application of a previously trained ConvNet, with emphasis on 3D images. Our goal is to maximize throughput, defined as average number of output voxels computed per unit time. Other things being equal, processing a larger image tends to increase throughput, because fractionally less computation is wasted on the borders of the image. It follows that an apparently slower algorithm may end up having higher throughput if it can process a larger image within the constraint of the available RAM. We introduce novel CPU and GPU primitives for convolutional and pooling layers, which are designed to minimize memory overhead. The primitives include convolution based on highly efficient pruned FFTs. Our theoretical analyses and empirical tests reveal a number of interesting findings. For some ConvNet architectures, cuDNN is outperformed by our FFT-based GPU primitives, and these in turn can be outperformed by our CPU primitives. The CPU manages to achieve higher throughput because of its fast access to more RAM. A novel primitive in which the GPU accesses host RAM can significantly increase GPU throughput. Finally, a CPU-GPU algorithm achieves the greatest throughput of all, 10x or more than other publicly available implementations of sliding window 3D ConvNets. All of our code has been made available as open source project. |
0810.3626
|
Muthiah Annamalai
|
Muthiah Annamalai, Darshan Shrestha, Saibun Tjuatja
|
Experimental Study of Application Specific Source Coding for Wireless
Sensor Networks
|
7 pages, 7 figures, 8 tables
| null | null | null |
cs.NI cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The energy bottleneck in Wireless Sensor Networks (WSN) can be reduced by
limiting communication overhead. Application specific source coding schemes for
the sensor networks provide fewer bits to represent the same amount of
information by exploiting the redundancy present in the source model, network
architecture and the physical process. This paper reports the performance of
representative codes from various families of source coding schemes (lossless,
lossy, constant bit-rate, variable bit-rate, distributed and joint
encoding/decoding) in terms of energy consumed, bit-rate achieved,
quantization-error/reconstruction-error, latency and complexity of the
encoder-decoder (codec). A reusable framework for testing source codes is
provided. Finally we propose a set of possible applications and suitable source
codes in terms of these parameters.
|
[
{
"created": "Mon, 20 Oct 2008 18:34:22 GMT",
"version": "v1"
},
{
"created": "Tue, 21 Oct 2008 01:16:07 GMT",
"version": "v2"
}
] |
2008-10-21
|
[
[
"Annamalai",
"Muthiah",
""
],
[
"Shrestha",
"Darshan",
""
],
[
"Tjuatja",
"Saibun",
""
]
] |
The energy bottleneck in Wireless Sensor Networks (WSN) can be reduced by limiting communication overhead. Application specific source coding schemes for the sensor networks provide fewer bits to represent the same amount of information by exploiting the redundancy present in the source model, network architecture and the physical process. This paper reports the performance of representative codes from various families of source coding schemes (lossless, lossy, constant bit-rate, variable bit-rate, distributed and joint encoding/decoding) in terms of energy consumed, bit-rate achieved, quantization-error/reconstruction-error, latency and complexity of the encoder-decoder (codec). A reusable framework for testing source codes is provided. Finally we propose a set of possible applications and suitable source codes in terms of these parameters.
|
2106.05963
|
Manel Baradad Jurjo
|
Manel Baradad, Jonas Wulff, Tongzhou Wang, Phillip Isola, Antonio
Torralba
|
Learning to See by Looking at Noise
| null | null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Current vision systems are trained on huge datasets, and these datasets come
with costs: curation is expensive, they inherit human biases, and there are
concerns over privacy and usage rights. To counter these costs, interest has
surged in learning from cheaper data sources, such as unlabeled images. In this
paper we go a step further and ask if we can do away with real image datasets
entirely, instead learning from noise processes. We investigate a suite of
image generation models that produce images from simple random processes. These
are then used as training data for a visual representation learner with a
contrastive loss. We study two types of noise processes, statistical image
models and deep generative models under different random initializations. Our
findings show that it is important for the noise to capture certain structural
properties of real data but that good performance can be achieved even with
processes that are far from realistic. We also find that diversity is a key
property to learn good representations. Datasets, models, and code are
available at https://mbaradad.github.io/learning_with_noise.
|
[
{
"created": "Thu, 10 Jun 2021 17:56:46 GMT",
"version": "v1"
},
{
"created": "Wed, 8 Dec 2021 16:42:14 GMT",
"version": "v2"
},
{
"created": "Thu, 28 Apr 2022 23:37:06 GMT",
"version": "v3"
}
] |
2022-05-02
|
[
[
"Baradad",
"Manel",
""
],
[
"Wulff",
"Jonas",
""
],
[
"Wang",
"Tongzhou",
""
],
[
"Isola",
"Phillip",
""
],
[
"Torralba",
"Antonio",
""
]
] |
Current vision systems are trained on huge datasets, and these datasets come with costs: curation is expensive, they inherit human biases, and there are concerns over privacy and usage rights. To counter these costs, interest has surged in learning from cheaper data sources, such as unlabeled images. In this paper we go a step further and ask if we can do away with real image datasets entirely, instead learning from noise processes. We investigate a suite of image generation models that produce images from simple random processes. These are then used as training data for a visual representation learner with a contrastive loss. We study two types of noise processes, statistical image models and deep generative models under different random initializations. Our findings show that it is important for the noise to capture certain structural properties of real data but that good performance can be achieved even with processes that are far from realistic. We also find that diversity is a key property to learn good representations. Datasets, models, and code are available at https://mbaradad.github.io/learning_with_noise.
|
2304.00686
|
Zihao Li
|
Zihao Li, Aixin Sun, Chenliang Li
|
DiffuRec: A Diffusion Model for Sequential Recommendation
| null | null | null | null |
cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Mainstream solutions to Sequential Recommendation (SR) represent items with
fixed vectors. These vectors have limited capability in capturing items' latent
aspects and users' diverse preferences. As a new generative paradigm, Diffusion
models have achieved excellent performance in areas like computer vision and
natural language processing. In our view, their unique merit in representation
generation fits well with the problem setting of sequential recommendation. In
this paper, we make the very first attempt to adapt the Diffusion model to SR
and propose DiffuRec, for item representation construction and uncertainty
injection. Rather than modeling item representations as fixed vectors, we
represent them as distributions in DiffuRec, which adaptively reflect users'
multiple interests and items' various aspects. In the diffusion phase, DiffuRec
corrupts the target item embedding into a Gaussian distribution via noise
adding, which is further applied for sequential item distribution
representation generation and uncertainty injection. Afterward, the item
representation is fed into an Approximator for target item representation
reconstruction. In the reverse phase, based on the user's historical
interaction behaviors, we reverse a Gaussian noise into the target item
representation, then apply a rounding operation for target item prediction.
Experiments over four datasets show that DiffuRec outperforms strong baselines
by a large margin.
|
[
{
"created": "Mon, 3 Apr 2023 02:22:01 GMT",
"version": "v1"
},
{
"created": "Tue, 4 Apr 2023 07:14:16 GMT",
"version": "v2"
},
{
"created": "Sun, 9 Apr 2023 10:12:38 GMT",
"version": "v3"
},
{
"created": "Mon, 30 Oct 2023 11:43:46 GMT",
"version": "v4"
}
] |
2023-10-31
|
[
[
"Li",
"Zihao",
""
],
[
"Sun",
"Aixin",
""
],
[
"Li",
"Chenliang",
""
]
] |
Mainstream solutions to Sequential Recommendation (SR) represent items with fixed vectors. These vectors have limited capability in capturing items' latent aspects and users' diverse preferences. As a new generative paradigm, Diffusion models have achieved excellent performance in areas like computer vision and natural language processing. In our view, their unique merit in representation generation fits well with the problem setting of sequential recommendation. In this paper, we make the very first attempt to adapt the Diffusion model to SR and propose DiffuRec, for item representation construction and uncertainty injection. Rather than modeling item representations as fixed vectors, we represent them as distributions in DiffuRec, which adaptively reflect users' multiple interests and items' various aspects. In the diffusion phase, DiffuRec corrupts the target item embedding into a Gaussian distribution via noise adding, which is further applied for sequential item distribution representation generation and uncertainty injection. Afterward, the item representation is fed into an Approximator for target item representation reconstruction. In the reverse phase, based on the user's historical interaction behaviors, we reverse a Gaussian noise into the target item representation, then apply a rounding operation for target item prediction. Experiments over four datasets show that DiffuRec outperforms strong baselines by a large margin.
|
2408.01614
|
Jinwen Tang
|
Jinwen Tang and Yi Shang
|
Advancing Mental Health Pre-Screening: A New Custom GPT for
Psychological Distress Assessment
| null | null | null | null |
cs.CY cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This study introduces 'Psycho Analyst', a custom GPT model based on OpenAI's
GPT-4, optimized for pre-screening mental health disorders. Enhanced with
DSM-5, PHQ-8, detailed data descriptions, and extensive training data, the
model adeptly decodes nuanced linguistic indicators of mental health disorders.
It utilizes a dual-task framework that includes binary classification and a
three-stage PHQ-8 score computation involving initial assessment, detailed
breakdown, and independent assessment, showcasing refined analytic
capabilities. Validation with the DAIC-WOZ dataset reveals F1 and Macro-F1
scores of 0.929 and 0.949, respectively, along with the lowest MAE and RMSE of
2.89 and 3.69 in PHQ-8 scoring. These results highlight the model's precision
and transformative potential in enhancing public mental health support,
improving accessibility, cost-effectiveness, and serving as a second opinion
for professionals.
|
[
{
"created": "Sat, 3 Aug 2024 00:38:30 GMT",
"version": "v1"
}
] |
2024-08-06
|
[
[
"Tang",
"Jinwen",
""
],
[
"Shang",
"Yi",
""
]
] |
This study introduces 'Psycho Analyst', a custom GPT model based on OpenAI's GPT-4, optimized for pre-screening mental health disorders. Enhanced with DSM-5, PHQ-8, detailed data descriptions, and extensive training data, the model adeptly decodes nuanced linguistic indicators of mental health disorders. It utilizes a dual-task framework that includes binary classification and a three-stage PHQ-8 score computation involving initial assessment, detailed breakdown, and independent assessment, showcasing refined analytic capabilities. Validation with the DAIC-WOZ dataset reveals F1 and Macro-F1 scores of 0.929 and 0.949, respectively, along with the lowest MAE and RMSE of 2.89 and 3.69 in PHQ-8 scoring. These results highlight the model's precision and transformative potential in enhancing public mental health support, improving accessibility, cost-effectiveness, and serving as a second opinion for professionals.
|
2404.09359
|
Tal Hakim
|
Tal Hakim
|
Exploring Feedback Generation in Automated Skeletal Movement Assessment:
A Comprehensive Overview
| null | null | null | null |
cs.CV cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
The application of machine-learning solutions to movement assessment from
skeleton videos has attracted significant research attention in recent years.
This advancement has made rehabilitation at home more accessible, utilizing
movement assessment algorithms that can operate on affordable equipment for
human pose detection and analysis from 2D or 3D videos. While the primary
objective of automatic assessment tasks is to score movements, the automatic
generation of feedback highlighting key movement issues has the potential to
significantly enhance and accelerate the rehabilitation process. While numerous
research works exist in the field of automatic movement assessment, only a
handful address feedback generation. In this study, we explain the types of
feedback that can be generated, review existing solutions for automatic
feedback generation, and discuss future research directions. To our knowledge,
this is the first comprehensive review of feedback generation in skeletal
movement assessment.
|
[
{
"created": "Sun, 14 Apr 2024 21:14:47 GMT",
"version": "v1"
},
{
"created": "Mon, 22 Apr 2024 10:52:32 GMT",
"version": "v2"
},
{
"created": "Wed, 24 Apr 2024 15:07:04 GMT",
"version": "v3"
}
] |
2024-04-25
|
[
[
"Hakim",
"Tal",
""
]
] |
The application of machine-learning solutions to movement assessment from skeleton videos has attracted significant research attention in recent years. This advancement has made rehabilitation at home more accessible, utilizing movement assessment algorithms that can operate on affordable equipment for human pose detection and analysis from 2D or 3D videos. While the primary objective of automatic assessment tasks is to score movements, the automatic generation of feedback highlighting key movement issues has the potential to significantly enhance and accelerate the rehabilitation process. While numerous research works exist in the field of automatic movement assessment, only a handful address feedback generation. In this study, we explain the types of feedback that can be generated, review existing solutions for automatic feedback generation, and discuss future research directions. To our knowledge, this is the first comprehensive review of feedback generation in skeletal movement assessment.
|
2306.14470
|
Chanjun Park
|
Dahyun Jung, Jaehyung Seo, Jaewook Lee, Chanjun Park, Heuiseok Lim
|
Knowledge Graph-Augmented Korean Generative Commonsense Reasoning
|
Accepted for Data-centric Machine Learning Research (DMLR) Workshop
at ICML 2023
| null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Generative commonsense reasoning refers to the task of generating acceptable
and logical assumptions about everyday situations based on commonsense
understanding. By utilizing an existing dataset such as Korean CommonGen,
language generation models can learn commonsense reasoning specific to the
Korean language. However, language models often fail to consider the
relationships between concepts and the deep knowledge inherent to concepts. To
address these limitations, we propose a method to utilize the Korean knowledge
graph data for text generation. Our experimental results show that the proposed
method can enhance the efficiency of Korean commonsense inference, thereby
underlining the significance of employing supplementary data.
|
[
{
"created": "Mon, 26 Jun 2023 07:23:47 GMT",
"version": "v1"
}
] |
2023-06-27
|
[
[
"Jung",
"Dahyun",
""
],
[
"Seo",
"Jaehyung",
""
],
[
"Lee",
"Jaewook",
""
],
[
"Park",
"Chanjun",
""
],
[
"Lim",
"Heuiseok",
""
]
] |
Generative commonsense reasoning refers to the task of generating acceptable and logical assumptions about everyday situations based on commonsense understanding. By utilizing an existing dataset such as Korean CommonGen, language generation models can learn commonsense reasoning specific to the Korean language. However, language models often fail to consider the relationships between concepts and the deep knowledge inherent to concepts. To address these limitations, we propose a method to utilize the Korean knowledge graph data for text generation. Our experimental results show that the proposed method can enhance the efficiency of Korean commonsense inference, thereby underlining the significance of employing supplementary data.
|
2407.14643
|
Yuehua Ding
|
Yuehua Ding, Jean-Francois Dollinger, Vincent Vauchey, Mourad Zghal
|
Double-Layer Soft Data Fusion for Indoor Robot WiFi-Visual Localization
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents a novel WiFi-Visual data fusion method for indoor robot
(TIAGO++) localization. This method can use 10 WiFi samples and 4
low-resolution images ($58 \times 58$ in pixels) to localize an indoor robot
with an average error distance of about 1.32 meters. The experimental test was
conducted 3 months after the data collection in a general teaching building,
whose WiFi and visual environments had partially changed. This indirectly shows
the robustness of the proposed method.
  Instead of neural network design, this paper focuses on soft data fusion to
prevent unbounded errors in visual localization. A double-layer soft data
fusion is proposed, consisting of first-layer WiFi-Visual feature fusion and
second-layer decision vector fusion. Firstly, motivated by the excellent
capability of neural networks in image processing and recognition,
temporal-spatial features are extracted from the WiFi data and represented in
image form. Secondly, the WiFi temporal-spatial features in image form and the
visual features taken by the robot camera are combined and jointly exploited by
a classification neural network to produce a likelihood vector for WiFi-Visual
localization. This is called first-layer WiFi-Visual fusion. Similarly, these
two types of features can be exploited separately by neural networks to produce
another two independent likelihood vectors. Thirdly, the three likelihood
vectors are fused by Hadamard product and median filtering to produce the final
likelihood vector for localization. This is called second-layer decision vector
fusion. The proposed soft data fusion does not apply any threshold or
prioritize any data source over the other in the fusion process. It never
excludes positions of low probability, which avoids the information loss due to
a hard decision. A demo video is provided, and the code will be open-sourced.
|
[
{
"created": "Fri, 19 Jul 2024 19:40:15 GMT",
"version": "v1"
}
] |
2024-07-23
|
[
[
"Ding",
"Yuehua",
""
],
[
"Dollinger",
"Jean-Francois",
""
],
[
"Vauchey",
"Vincent",
""
],
[
"Zghal",
"Mourad",
""
]
] |
This paper presents a novel WiFi-Visual data fusion method for indoor robot (TIAGO++) localization. This method can use 10 WiFi samples and 4 low-resolution images ($58 \times 58$ in pixels) to localize an indoor robot with an average error distance of about 1.32 meters. The experimental test was conducted 3 months after the data collection in a general teaching building, whose WiFi and visual environments had partially changed. This indirectly shows the robustness of the proposed method. Instead of neural network design, this paper focuses on soft data fusion to prevent unbounded errors in visual localization. A double-layer soft data fusion is proposed, consisting of first-layer WiFi-Visual feature fusion and second-layer decision vector fusion. Firstly, motivated by the excellent capability of neural networks in image processing and recognition, temporal-spatial features are extracted from the WiFi data and represented in image form. Secondly, the WiFi temporal-spatial features in image form and the visual features taken by the robot camera are combined and jointly exploited by a classification neural network to produce a likelihood vector for WiFi-Visual localization. This is called first-layer WiFi-Visual fusion. Similarly, these two types of features can be exploited separately by neural networks to produce another two independent likelihood vectors. Thirdly, the three likelihood vectors are fused by Hadamard product and median filtering to produce the final likelihood vector for localization. This is called second-layer decision vector fusion. The proposed soft data fusion does not apply any threshold or prioritize any data source over the other in the fusion process. It never excludes positions of low probability, which avoids the information loss due to a hard decision. A demo video is provided, and the code will be open-sourced.
|
1705.07674
|
Ahmed Alaa
|
Ahmed M. Alaa, Jinsung Yoon, Scott Hu, and Mihaela van der Schaar
|
Individualized Risk Prognosis for Critical Care Patients: A Multi-task
Gaussian Process Model
| null | null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We report the development and validation of a data-driven real-time risk
score that provides timely assessments for the clinical acuity of ward patients
based on their temporal lab tests and vital signs, which allows for timely
intensive care unit (ICU) admissions. Unlike the existing risk scoring
technologies, the proposed score is individualized; it uses the electronic
health record (EHR) data to cluster the patients based on their static
covariates into subcohorts of similar patients, and then learns a separate
temporal, non-stationary multi-task Gaussian Process (GP) model that captures
the physiology of every subcohort. Experiments conducted on data from a
heterogeneous cohort of 6,094 patients admitted to the Ronald Reagan UCLA
medical center show that our risk score significantly outperforms the
state-of-the-art risk scoring technologies, such as the Rothman index and MEWS,
in terms of timeliness, true positive rate (TPR), and positive predictive value
(PPV). In particular, the proposed score increases the AUC by 20% and 38%
compared to the Rothman index and MEWS, respectively, and can predict ICU
admissions
8 hours before clinicians at a PPV of 35% and a TPR of 50%. Moreover, we show
that the proposed risk score allows for better decisions on when to discharge
clinically stable patients from the ward, thereby improving the efficiency of
hospital resource utilization.
|
[
{
"created": "Mon, 22 May 2017 11:27:58 GMT",
"version": "v1"
}
] |
2017-05-23
|
[
[
"Alaa",
"Ahmed M.",
""
],
[
"Yoon",
"Jinsung",
""
],
[
"Hu",
"Scott",
""
],
[
"van der Schaar",
"Mihaela",
""
]
] |
We report the development and validation of a data-driven real-time risk score that provides timely assessments for the clinical acuity of ward patients based on their temporal lab tests and vital signs, which allows for timely intensive care unit (ICU) admissions. Unlike the existing risk scoring technologies, the proposed score is individualized; it uses the electronic health record (EHR) data to cluster the patients based on their static covariates into subcohorts of similar patients, and then learns a separate temporal, non-stationary multi-task Gaussian Process (GP) model that captures the physiology of every subcohort. Experiments conducted on data from a heterogeneous cohort of 6,094 patients admitted to the Ronald Reagan UCLA medical center show that our risk score significantly outperforms the state-of-the-art risk scoring technologies, such as the Rothman index and MEWS, in terms of timeliness, true positive rate (TPR), and positive predictive value (PPV). In particular, the proposed score increases the AUC by 20% and 38% compared to the Rothman index and MEWS, respectively, and can predict ICU admissions 8 hours before clinicians at a PPV of 35% and a TPR of 50%. Moreover, we show that the proposed risk score allows for better decisions on when to discharge clinically stable patients from the ward, thereby improving the efficiency of hospital resource utilization.
|
1406.7735
|
Walter Lasecki
|
Haoqi Zhang, Andes Monroy-Hernandez, Aaron Shaw, Sean Munson, Liz
Gerber, Benjamin Mako Hill, Peter Kinnaird, Shelly Farnham, and Patrick
Minder
|
WeDo: Exploring Participatory, End-To-End Collective Action
| null | null | null |
ci-2014/95
|
cs.CY cs.HC cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Many celebrate the Internet's ability to connect individuals and facilitate
collective action toward a common goal. While numerous systems have been
designed to support particular aspects of collective action, few systems
support participatory, end-to-end collective action in which a crowd or
community identifies opportunities, formulates goals, brainstorms ideas,
develops plans, mobilizes, and takes action. To explore the possibilities and
barriers in supporting such interactions, we have developed WeDo, a system
aimed at promoting simple forms of participatory, end-to-end collective action.
Pilot deployments of WeDo illustrate that sociotechnical systems can support
automated transitions through different phases of end-to-end collective action,
but that challenges, such as the elicitation of leadership and the
accommodation of existing group norms, remain.
|
[
{
"created": "Mon, 30 Jun 2014 13:48:42 GMT",
"version": "v1"
}
] |
2021-08-02
|
[
[
"Zhang",
"Haoqi",
""
],
[
"Monroy-Hernandez",
"Andes",
""
],
[
"Shaw",
"Aaron",
""
],
[
"Munson",
"Sean",
""
],
[
"Gerber",
"Liz",
""
],
[
"Hill",
"Benjamin Mako",
""
],
[
"Kinnaird",
"Peter",
""
],
[
"Farnham",
"Shelly",
""
],
[
"Minder",
"Patrick",
""
]
] |
Many celebrate the Internet's ability to connect individuals and facilitate collective action toward a common goal. While numerous systems have been designed to support particular aspects of collective action, few systems support participatory, end-to-end collective action in which a crowd or community identifies opportunities, formulates goals, brainstorms ideas, develops plans, mobilizes, and takes action. To explore the possibilities and barriers in supporting such interactions, we have developed WeDo, a system aimed at promoting simple forms of participatory, end-to-end collective action. Pilot deployments of WeDo illustrate that sociotechnical systems can support automated transitions through different phases of end-to-end collective action, but that challenges, such as the elicitation of leadership and the accommodation of existing group norms, remain.
|
cs/0508075
|
Russell K. Standish
|
Russell K. Standish
|
Complexity of Networks
|
Accepted for Australian Conference on Artificial Life (ACAL05). To
appear in Advances in Natural Computation (World Scientific)
|
in Recent Advances in Artificial Life, Abbass et al. (eds) (World
Scientific: Singapore) p253 (2005).
| null | null |
cs.IT math.IT
| null |
Network or graph structures are ubiquitous in the study of complex systems.
Often, we are interested in complexity trends of these systems as they evolve
under some dynamic. An example might be looking at the complexity of a food web
as species enter an ecosystem via migration or speciation, and leave via
extinction.
In this paper, a complexity measure of networks is proposed based on the {\em
complexity is information content} paradigm. To apply this paradigm to any
object, one must fix two things: a representation language, in which strings of
symbols from some alphabet describe, or stand for the objects being considered;
and a means of determining when two such descriptions refer to the same object.
With these two things set, the information content of an object can be computed
in principle from the number of equivalent descriptions describing a particular
object.
I propose a simple representation language for undirected graphs that can be
encoded as a bitstring, and equivalence is a topological equivalence. I also
present an algorithm for computing the complexity of an arbitrary undirected
network.
|
[
{
"created": "Wed, 17 Aug 2005 00:51:41 GMT",
"version": "v1"
}
] |
2007-07-16
|
[
[
"Standish",
"Russell K.",
""
]
] |
Network or graph structures are ubiquitous in the study of complex systems. Often, we are interested in complexity trends of these systems as they evolve under some dynamic. An example might be looking at the complexity of a food web as species enter an ecosystem via migration or speciation, and leave via extinction. In this paper, a complexity measure of networks is proposed based on the {\em complexity is information content} paradigm. To apply this paradigm to any object, one must fix two things: a representation language, in which strings of symbols from some alphabet describe, or stand for the objects being considered; and a means of determining when two such descriptions refer to the same object. With these two things set, the information content of an object can be computed in principle from the number of equivalent descriptions describing a particular object. I propose a simple representation language for undirected graphs that can be encoded as a bitstring, and equivalence is a topological equivalence. I also present an algorithm for computing the complexity of an arbitrary undirected network.
|
2305.12696
|
Ajay Patel
|
Ajay Patel, Delip Rao, Ansh Kothary, Kathleen McKeown, Chris
Callison-Burch
|
Learning Interpretable Style Embeddings via Prompting LLMs
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Style representation learning builds content-independent representations of
author style in text. Stylometry, the analysis of style in text, is often
performed by expert forensic linguists and no large dataset of stylometric
annotations exists for training. Current style representation learning uses
neural methods to disentangle style from content to create style vectors;
however, these approaches result in uninterpretable representations,
complicating their usage in downstream applications like authorship
attribution, where auditing and explainability are critical. In this work, we
use prompting
to perform stylometry on a large number of texts to create a synthetic dataset
and train human-interpretable style representations we call LISA embeddings. We
release our synthetic stylometry dataset and our interpretable style models as
resources.
|
[
{
"created": "Mon, 22 May 2023 04:07:54 GMT",
"version": "v1"
},
{
"created": "Mon, 9 Oct 2023 19:20:32 GMT",
"version": "v2"
}
] |
2023-10-11
|
[
[
"Patel",
"Ajay",
""
],
[
"Rao",
"Delip",
""
],
[
"Kothary",
"Ansh",
""
],
[
"McKeown",
"Kathleen",
""
],
[
"Callison-Burch",
"Chris",
""
]
] |
Style representation learning builds content-independent representations of author style in text. Stylometry, the analysis of style in text, is often performed by expert forensic linguists and no large dataset of stylometric annotations exists for training. Current style representation learning uses neural methods to disentangle style from content to create style vectors; however, these approaches result in uninterpretable representations, complicating their usage in downstream applications like authorship attribution, where auditing and explainability are critical. In this work, we use prompting to perform stylometry on a large number of texts to create a synthetic dataset and train human-interpretable style representations we call LISA embeddings. We release our synthetic stylometry dataset and our interpretable style models as resources.
|
1102.0486
|
Sahana Bisalapur Sahana Bisalapur
|
Sahana S.Bisalapur
|
Design of an Efficient Neural Key Distribution Centre
|
11 pages,9 figures
| null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The goal of any cryptographic system is the exchange of information among the
intended users without any leakage of information to others who may have
unauthorized access to it. A common secret key could be created over a public
channel accessible to any opponent. Neural networks can be used to generate a
common secret key. In the case of neural cryptography, both the communicating
networks receive an identical input vector, generate an output bit and are
trained based on the output bit. The two networks and their weight vectors
exhibit a novel phenomenon, where the networks synchronize to a state with
identical time-dependent weights. The generated secret key over a public
channel is used for encrypting and decrypting the information being sent on the
channel. This secret key is distributed to the other vendor efficiently by
using an agent-based approach.
|
[
{
"created": "Wed, 2 Feb 2011 16:49:01 GMT",
"version": "v1"
}
] |
2011-02-03
|
[
[
"Bisalapur",
"Sahana S.",
""
]
] |
The goal of any cryptographic system is the exchange of information among the intended users without any leakage of information to others who may have unauthorized access to it. A common secret key could be created over a public channel accessible to any opponent. Neural networks can be used to generate a common secret key. In the case of neural cryptography, both the communicating networks receive an identical input vector, generate an output bit and are trained based on the output bit. The two networks and their weight vectors exhibit a novel phenomenon, where the networks synchronize to a state with identical time-dependent weights. The generated secret key over a public channel is used for encrypting and decrypting the information being sent on the channel. This secret key is distributed to the other vendor efficiently by using an agent-based approach.
|
2310.07668
|
S. AmirAli Gh. Ghahramani
|
Makan Kananian, Fatima Badiei, S. AmirAli Gh. Ghahramani
|
GRaMuFeN: Graph-based Multi-modal Fake News Detection in Social Media
| null | null | null | null |
cs.LG cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
The proliferation of social media platforms such as Twitter, Instagram, and
Weibo has significantly enhanced the dissemination of false information. This
phenomenon grants both individuals and governmental entities the ability to
shape public opinions, highlighting the need for deploying effective detection
methods. In this paper, we propose GraMuFeN, a model designed to detect fake
content by analyzing both the textual and image content of news. GraMuFeN
comprises two primary components: a text encoder and an image encoder. For
textual analysis, GraMuFeN treats each text as a graph and employs a Graph
Convolutional Neural Network (GCN) as the text encoder. Additionally, the
pre-trained ResNet-152, as a Convolutional Neural Network (CNN), has been
utilized as the image encoder. By integrating the outputs from these two
encoders and implementing a contrastive similarity loss function, GraMuFeN
achieves remarkable results. Extensive evaluations conducted on two publicly
available benchmark datasets for social media news indicate a 10% increase in
micro F1-Score, signifying improvement over existing state-of-the-art models.
These findings underscore the effectiveness of combining GCN and CNN models for
detecting fake news in multi-modal data, all while minimizing the additional
computational burden imposed by model parameters.
|
[
{
"created": "Wed, 11 Oct 2023 17:17:40 GMT",
"version": "v1"
}
] |
2023-10-12
|
[
[
"Kananian",
"Makan",
""
],
[
"Badiei",
"Fatima",
""
],
[
"Ghahramani",
"S. AmirAli Gh.",
""
]
] |
The proliferation of social media platforms such as Twitter, Instagram, and Weibo has significantly enhanced the dissemination of false information. This phenomenon grants both individuals and governmental entities the ability to shape public opinions, highlighting the need for deploying effective detection methods. In this paper, we propose GraMuFeN, a model designed to detect fake content by analyzing both the textual and image content of news. GraMuFeN comprises two primary components: a text encoder and an image encoder. For textual analysis, GraMuFeN treats each text as a graph and employs a Graph Convolutional Neural Network (GCN) as the text encoder. Additionally, the pre-trained ResNet-152, as a Convolutional Neural Network (CNN), has been utilized as the image encoder. By integrating the outputs from these two encoders and implementing a contrastive similarity loss function, GraMuFeN achieves remarkable results. Extensive evaluations conducted on two publicly available benchmark datasets for social media news indicate a 10% increase in micro F1-Score, signifying improvement over existing state-of-the-art models. These findings underscore the effectiveness of combining GCN and CNN models for detecting fake news in multi-modal data, all while minimizing the additional computational burden imposed by model parameters.
|
1710.03702
|
Eike Neumann
|
Michal Kone\v{c}n\'y and Eike Neumann
|
Representations and evaluation strategies for feasibly approximable
functions
|
33 pages, 4 figures
| null | null | null |
cs.CC cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A famous result due to Ko and Friedman (1982) asserts that the problems of
integration and maximisation of a univariate real function are computationally
hard in a well-defined sense. Yet, both functionals are routinely computed at
great speed in practice. We aim to resolve this apparent paradox by studying
classes of functions which can be feasibly integrated and maximised, together
with representations for these classes of functions which encode the
information which is necessary to uniformly compute integral and maximum in
polynomial time. The theoretical framework for this is the second-order
complexity theory for operators in analysis which was introduced by Kawamura
and Cook (2012). The representations we study are based on rigorous
approximation by polynomials, piecewise polynomials, and rational functions. We
compare these representations with respect to polytime reducibility as well as
with respect to their ability to quickly evaluate symbolic expressions in a
given language. We show that the representation based on rigorous approximation
by piecewise polynomials is polytime equivalent to the representation based on
rigorous approximation by rational functions. With this representation, all
terms in a certain language, which is expressive enough to contain the maximum
and integral of most functions of practical interest, can be evaluated in
polynomial time. By contrast, both the representation based on polynomial
approximation and the standard representation based on function evaluation,
which implicitly underlies the Ko-Friedman result, require exponential time to
evaluate certain terms in this language. We confirm our theoretical results by
an implementation in Haskell, which provides some evidence that second-order
polynomial time computability is similarly closely tied with practical
feasibility as its first-order counterpart.
|
[
{
"created": "Tue, 10 Oct 2017 16:17:52 GMT",
"version": "v1"
},
{
"created": "Fri, 9 Nov 2018 16:17:12 GMT",
"version": "v2"
},
{
"created": "Mon, 21 Oct 2019 21:01:19 GMT",
"version": "v3"
}
] |
2019-10-23
|
[
[
"Konečný",
"Michal",
""
],
[
"Neumann",
"Eike",
""
]
] |
A famous result due to Ko and Friedman (1982) asserts that the problems of integration and maximisation of a univariate real function are computationally hard in a well-defined sense. Yet, both functionals are routinely computed at great speed in practice. We aim to resolve this apparent paradox by studying classes of functions which can be feasibly integrated and maximised, together with representations for these classes of functions which encode the information which is necessary to uniformly compute integral and maximum in polynomial time. The theoretical framework for this is the second-order complexity theory for operators in analysis which was introduced by Kawamura and Cook (2012). The representations we study are based on rigorous approximation by polynomials, piecewise polynomials, and rational functions. We compare these representations with respect to polytime reducibility as well as with respect to their ability to quickly evaluate symbolic expressions in a given language. We show that the representation based on rigorous approximation by piecewise polynomials is polytime equivalent to the representation based on rigorous approximation by rational functions. With this representation, all terms in a certain language, which is expressive enough to contain the maximum and integral of most functions of practical interest, can be evaluated in polynomial time. By contrast, both the representation based on polynomial approximation and the standard representation based on function evaluation, which implicitly underlies the Ko-Friedman result, require exponential time to evaluate certain terms in this language. We confirm our theoretical results by an implementation in Haskell, which provides some evidence that second-order polynomial time computability is similarly closely tied with practical feasibility as its first-order counterpart.
|
cs/0506039
|
Heechoon Lee
|
Weijun Zhu, Heechoon Lee, Daniel Liu and Michael P. Fitz
|
Antenna array geometry and coding performance
|
5 pages, 7 figures, ISIT 2005
| null | null | null |
cs.IT math.IT
| null |
This paper provides details about experiments in realistic, urban,
frequency-flat channels with space-time coding, specifically examining the
impact of the number of receive antennas and of the code-selection design
criteria on performance. The performance characteristics of the coded
modulations are also examined in the presence of finite-size array geometries.
This paper gives some insight into which of the theories are most useful in
realistic deployments.
|
[
{
"created": "Sat, 11 Jun 2005 02:38:26 GMT",
"version": "v1"
}
] |
2007-07-13
|
[
[
"Zhu",
"Weijun",
""
],
[
"Lee",
"Heechoon",
""
],
[
"Liu",
"Daniel",
""
],
[
"Fitz",
"Michael P.",
""
]
] |
This paper provides details about experiments in realistic, urban, frequency-flat channels with space-time coding, specifically examining the impact of the number of receive antennas and of the code-selection design criteria on performance. The performance characteristics of the coded modulations are also examined in the presence of finite-size array geometries. This paper gives some insight into which of the theories are most useful in realistic deployments.
|
2011.14922
|
Hanwen Miao
|
Hanwen Miao, Shengan Zhang, Carol Flannagan
|
Driver Behavior Extraction from Videos in Naturalistic Driving Datasets
with 3D ConvNets
| null | null |
10.1007/s42421-022-00053-8
| null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Naturalistic driving data (NDD) is an important source of information to
understand crash causation and human factors and to further develop crash
avoidance countermeasures. Videos recorded while driving are often included in
such datasets. While NDD often contains a large amount of video data, only a
small portion of it can be annotated by human coders and used for research,
leaving most of the video data underused. In this paper, we explore a computer vision
method to automatically extract the information we need from videos. More
specifically, we developed a 3D ConvNet algorithm to automatically extract
cell-phone-related behaviors from videos. The experiments show that our method
can extract chunks from videos, most of which (~79%) contain the automatically
labeled cell phone behaviors. In conjunction with human review of the extracted
chunks, this approach can find cell-phone-related driver behaviors much more
efficiently than simply viewing video.
|
[
{
"created": "Mon, 30 Nov 2020 15:53:15 GMT",
"version": "v1"
}
] |
2022-06-30
|
[
[
"Miao",
"Hanwen",
""
],
[
"Zhang",
"Shengan",
""
],
[
"Flannagan",
"Carol",
""
]
] |
Naturalistic driving data (NDD) is an important source of information to understand crash causation and human factors and to further develop crash avoidance countermeasures. Videos recorded while driving are often included in such datasets. While NDD often contains a large amount of video data, only a small portion of it can be annotated by human coders and used for research, leaving most of the video data underused. In this paper, we explore a computer vision method to automatically extract the information we need from videos. More specifically, we developed a 3D ConvNet algorithm to automatically extract cell-phone-related behaviors from videos. The experiments show that our method can extract chunks from videos, most of which (~79%) contain the automatically labeled cell phone behaviors. In conjunction with human review of the extracted chunks, this approach can find cell-phone-related driver behaviors much more efficiently than simply viewing video.
|
1910.14467
|
Mahdi Barzegar Khalilsarai
|
Mahdi Barzegar Khalilsarai, Tianyu Yang, Saeid Haghighatshoar, and
Giuseppe Caire
|
Structured Channel Covariance Estimation from Limited Samples in Massive
MIMO
|
27 pages, 9 figures
| null | null | null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Obtaining channel covariance knowledge is of great importance in various
Multiple-Input Multiple-Output (MIMO) communication applications, including
channel estimation and covariance-based user grouping. In a massive MIMO
system, covariance estimation proves to be challenging due to the large number
of antennas ($M\gg 1$) employed in the base station and hence, a high signal
dimension. In this case, the number of pilot transmissions $N$ becomes
comparable to the number of antennas and standard estimators, such as the
sample covariance, yield a poor estimate of the true covariance and are
undesirable. In this paper, we propose a Maximum-Likelihood (ML) massive MIMO
covariance estimator, based on a parametric representation of the channel
angular spread function (ASF). The parametric representation emerges from
super-resolving discrete ASF components via the well-known MUltiple SIgnal
Classification (MUSIC) method plus approximating its continuous component using
a suitable limited-support density function. We maximize the likelihood function
using a concave-convex procedure, which is initialized via a non-negative
least-squares optimization problem. Our simulation results show that the
proposed method outperforms the state of the art in various estimation quality
metrics and for different sample size to signal dimension ($N/M$) ratios.
|
[
{
"created": "Thu, 31 Oct 2019 13:49:30 GMT",
"version": "v1"
}
] |
2019-11-01
|
[
[
"Khalilsarai",
"Mahdi Barzegar",
""
],
[
"Yang",
"Tianyu",
""
],
[
"Haghighatshoar",
"Saeid",
""
],
[
"Caire",
"Giuseppe",
""
]
] |
Obtaining channel covariance knowledge is of great importance in various Multiple-Input Multiple-Output (MIMO) communication applications, including channel estimation and covariance-based user grouping. In a massive MIMO system, covariance estimation proves to be challenging due to the large number of antennas ($M\gg 1$) employed in the base station and hence, a high signal dimension. In this case, the number of pilot transmissions $N$ becomes comparable to the number of antennas and standard estimators, such as the sample covariance, yield a poor estimate of the true covariance and are undesirable. In this paper, we propose a Maximum-Likelihood (ML) massive MIMO covariance estimator, based on a parametric representation of the channel angular spread function (ASF). The parametric representation emerges from super-resolving discrete ASF components via the well-known MUltiple SIgnal Classification (MUSIC) method plus approximating its continuous component using a suitable limited-support density function. We maximize the likelihood function using a concave-convex procedure, which is initialized via a non-negative least-squares optimization problem. Our simulation results show that the proposed method outperforms the state of the art in various estimation quality metrics and for different sample size to signal dimension ($N/M$) ratios.
|
1807.05761
|
Meredydd Williams
|
Meredydd Williams, Jason R. C. Nurse, Sadie Creese
|
"Privacy is the Boring Bit": User Perceptions and Behaviour in the
Internet-of-Things
|
10 pages, 2 figures, Proceedings of the 15th International Conference
on Privacy, Security and Trust (PST2017) (2017)
| null | null | null |
cs.CY cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In opinion polls, the public frequently claim to value their privacy.
However, individuals often seem to overlook the principle, contributing to a
disparity labelled the `Privacy Paradox'. The growth of the Internet-of-Things
(IoT) is frequently claimed to place privacy at risk. However, the Paradox
remains underexplored in the IoT. In addressing this, we first conduct an
online survey (N = 170) to compare public opinions of IoT and less-novel
devices. Although we find users perceive privacy risks, many still decide to
purchase smart devices. With the IoT rated less usable/familiar, we assert that
it constrains protective behaviour. To explore this hypothesis, we perform
contextualised interviews (N = 40) with the public. In these dialogues, owners
discuss their opinions and actions with a personal device. We find the Paradox
is significantly more prevalent in the IoT, frequently justified by a lack of
awareness. We finish by highlighting the qualitative comments of users, and
suggesting practical solutions to their issues. This is the first work, to our
knowledge, to evaluate the Privacy Paradox over a broad range of technologies.
|
[
{
"created": "Mon, 16 Jul 2018 09:54:15 GMT",
"version": "v1"
}
] |
2018-07-17
|
[
[
"Williams",
"Meredydd",
""
],
[
"Nurse",
"Jason R. C.",
""
],
[
"Creese",
"Sadie",
""
]
] |
In opinion polls, the public frequently claim to value their privacy. However, individuals often seem to overlook the principle, contributing to a disparity labelled the `Privacy Paradox'. The growth of the Internet-of-Things (IoT) is frequently claimed to place privacy at risk. However, the Paradox remains underexplored in the IoT. In addressing this, we first conduct an online survey (N = 170) to compare public opinions of IoT and less-novel devices. Although we find users perceive privacy risks, many still decide to purchase smart devices. With the IoT rated less usable/familiar, we assert that it constrains protective behaviour. To explore this hypothesis, we perform contextualised interviews (N = 40) with the public. In these dialogues, owners discuss their opinions and actions with a personal device. We find the Paradox is significantly more prevalent in the IoT, frequently justified by a lack of awareness. We finish by highlighting the qualitative comments of users, and suggesting practical solutions to their issues. This is the first work, to our knowledge, to evaluate the Privacy Paradox over a broad range of technologies.
|
1601.07473
|
Stephen Chestnut
|
Vladimir Braverman, Stephen R. Chestnut, David P. Woodruff, Lin F.
Yang
|
Streaming Space Complexity of Nearly All Functions of One Variable on
Frequency Vectors
| null | null | null | null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A central problem in the theory of algorithms for data streams is to
determine which functions on a stream can be approximated in sublinear, and
especially sub-polynomial or poly-logarithmic, space. Given a function $g$, we
study the space complexity of approximating $\sum_{i=1}^n g(|f_i|)$, where
$f\in\mathbb{Z}^n$ is the frequency vector of a turnstile stream. This is a
generalization of the well-known frequency moments problem, and previous
results apply only when $g$ is monotonic or has a special functional form. Our
contribution is to give a condition such that, except for a narrow class of
functions $g$, there is a space-efficient approximation algorithm for the sum
if and only if $g$ satisfies the condition. The functions $g$ that we are able
to characterize include all convex, concave, monotonic, polynomial, and
trigonometric functions, among many others; this is the first such
characterization for non-monotonic functions. Thus, for nearly all functions of
one variable, we answer the open question from the celebrated paper of Alon,
Matias and Szegedy (1996).
|
[
{
"created": "Wed, 27 Jan 2016 18:04:05 GMT",
"version": "v1"
}
] |
2016-01-28
|
[
[
"Braverman",
"Vladimir",
""
],
[
"Chestnut",
"Stephen R.",
""
],
[
"Woodruff",
"David P.",
""
],
[
"Yang",
"Lin F.",
""
]
] |
A central problem in the theory of algorithms for data streams is to determine which functions on a stream can be approximated in sublinear, and especially sub-polynomial or poly-logarithmic, space. Given a function $g$, we study the space complexity of approximating $\sum_{i=1}^n g(|f_i|)$, where $f\in\mathbb{Z}^n$ is the frequency vector of a turnstile stream. This is a generalization of the well-known frequency moments problem, and previous results apply only when $g$ is monotonic or has a special functional form. Our contribution is to give a condition such that, except for a narrow class of functions $g$, there is a space-efficient approximation algorithm for the sum if and only if $g$ satisfies the condition. The functions $g$ that we are able to characterize include all convex, concave, monotonic, polynomial, and trigonometric functions, among many others; this is the first such characterization for non-monotonic functions. Thus, for nearly all functions of one variable, we answer the open question from the celebrated paper of Alon, Matias and Szegedy (1996).
|
2005.10986
|
Jia-Wei Chen
|
Jia-Wei Chen, Rongfang Wang, Fan Ding, Bo Liu, Licheng Jiao, Jie Zhang
|
A Convolutional Neural Network with Parallel Multi-Scale Spatial Pooling
to Detect Temporal Changes in SAR Images
| null |
Remote Sens. 2020, 12, 1619
| null | null |
cs.CV eess.IV
|
http://creativecommons.org/licenses/by/4.0/
|
In synthetic aperture radar (SAR) image change detection, it is quite
challenging to exploit the changing information from the noisy difference image
subject to the speckle. In this paper, we propose a multi-scale spatial pooling
(MSSP) network to exploit the changed information from the noisy difference
image. Being different from the traditional convolutional network with only
mono-scale pooling kernels, in the proposed method, multi-scale pooling kernels
are equipped in a convolutional network to exploit the spatial context
information on changed regions from the difference image. Furthermore, to
verify the generalization of the proposed method, we apply our proposed method
to the cross-dataset bitemporal SAR image change detection, where the MSSP
network (MSSP-Net) is trained on a dataset and then applied to an unknown
testing dataset. We compare the proposed method with other state-of-the-art
methods on four challenging datasets of bitemporal SAR images. Experimental
results demonstrate that our proposed method obtains results comparable with
S-PCA-Net on the YR-A and YR-B datasets and outperforms other state-of-the-art
methods, especially on the Sendai-A and Sendai-B datasets with more complex
scenes. More importantly, MSSP-Net is more efficient than S-PCA-Net and
convolutional neural networks (CNNs), with less execution time in both the
training and testing phases.
|
[
{
"created": "Fri, 22 May 2020 03:37:30 GMT",
"version": "v1"
}
] |
2020-05-25
|
[
[
"Chen",
"Jia-Wei",
""
],
[
"Wang",
"Rongfang",
""
],
[
"Ding",
"Fan",
""
],
[
"Liu",
"Bo",
""
],
[
"Jiao",
"Licheng",
""
],
[
"Zhang",
"Jie",
""
]
] |
In synthetic aperture radar (SAR) image change detection, it is quite challenging to exploit the changing information from the noisy difference image subject to the speckle. In this paper, we propose a multi-scale spatial pooling (MSSP) network to exploit the changed information from the noisy difference image. Being different from the traditional convolutional network with only mono-scale pooling kernels, in the proposed method, multi-scale pooling kernels are equipped in a convolutional network to exploit the spatial context information on changed regions from the difference image. Furthermore, to verify the generalization of the proposed method, we apply our proposed method to the cross-dataset bitemporal SAR image change detection, where the MSSP network (MSSP-Net) is trained on a dataset and then applied to an unknown testing dataset. We compare the proposed method with other state-of-the-art methods on four challenging datasets of bitemporal SAR images. Experimental results demonstrate that our proposed method obtains results comparable with S-PCA-Net on the YR-A and YR-B datasets and outperforms other state-of-the-art methods, especially on the Sendai-A and Sendai-B datasets with more complex scenes. More importantly, MSSP-Net is more efficient than S-PCA-Net and convolutional neural networks (CNNs), with less execution time in both the training and testing phases.
|
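The core idea of the abstract — pooling the same feature map with several kernel sizes in parallel and combining the results — can be sketched independently of the paper's full network, whose exact layer configuration is not given here. The NumPy toy below is an assumed illustration of parallel multi-scale max pooling with concatenated outputs, not MSSP-Net itself:

```python
import numpy as np

def max_pool2d(x, k):
    # non-overlapping k x k max pooling on a 2D feature map
    h, w = x.shape
    x = x[: h - h % k, : w - w % k]
    return x.reshape(h // k, k, w // k, k).max(axis=(1, 3))

def multi_scale_pool(feat, scales=(1, 2, 4)):
    # pool the same map at several kernel sizes in parallel and concatenate,
    # so the descriptor mixes fine and coarse spatial context
    return np.concatenate([max_pool2d(feat, s).ravel() for s in scales])

feat = np.arange(64, dtype=float).reshape(8, 8)
desc = multi_scale_pool(feat)   # 64 + 16 + 4 = 84 values
```

Each scale contributes a view of the same changed region at a different spatial granularity; a real change-detection network would feed these pooled maps into further convolutional layers rather than flattening them.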
2407.11004
|
Tzu-Heng Huang
|
Tzu-Heng Huang, Catherine Cao, Vaishnavi Bhargava, Frederic Sala
|
The ALCHEmist: Automated Labeling 500x CHEaper Than LLM Data Annotators
| null | null | null | null |
cs.CL cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Large pretrained models can be used as annotators, helping replace or augment
crowdworkers and enabling distilling generalist models into smaller specialist
models. Unfortunately, this comes at a cost: employing top-of-the-line models
often requires paying thousands of dollars for API calls, while the resulting
datasets are static and challenging to audit. To address these challenges, we
propose a simple alternative: rather than directly querying labels from
pretrained models, we task models to generate programs that can produce labels.
These programs can be stored and applied locally, re-used and extended, and
cost orders of magnitude less. Our system, Alchemist, obtains comparable to or
better performance than large language model-based annotation in a range of
tasks for a fraction of the cost: on average, improvements amount to a 12.9%
enhancement while the total labeling costs across all datasets are reduced by a
factor of approximately 500x.
|
[
{
"created": "Tue, 25 Jun 2024 17:58:26 GMT",
"version": "v1"
}
] |
2024-07-17
|
[
[
"Huang",
"Tzu-Heng",
""
],
[
"Cao",
"Catherine",
""
],
[
"Bhargava",
"Vaishnavi",
""
],
[
"Sala",
"Frederic",
""
]
] |
Large pretrained models can be used as annotators, helping replace or augment crowdworkers and enabling distilling generalist models into smaller specialist models. Unfortunately, this comes at a cost: employing top-of-the-line models often requires paying thousands of dollars for API calls, while the resulting datasets are static and challenging to audit. To address these challenges, we propose a simple alternative: rather than directly querying labels from pretrained models, we task models to generate programs that can produce labels. These programs can be stored and applied locally, re-used and extended, and cost orders of magnitude less. Our system, Alchemist, obtains comparable to or better performance than large language model-based annotation in a range of tasks for a fraction of the cost: on average, improvements amount to a 12.9% enhancement while the total labeling costs across all datasets are reduced by a factor of approximately 500x.
|
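The abstract's central move — asking the model for a labeling *program* rather than for labels — can be illustrated with a deliberately toy sketch. The keyword lists and the `label` function below are hypothetical stand-ins for whatever a real LLM would return; only the generate-once, apply-locally pattern is the point:

```python
# A hypothetical "program" an LLM might return for a sentiment-labeling task;
# in the real system the prompt, the model, and the program are not fixed here.
generated_program = '''
def label(text):
    positives = {"good", "great", "excellent", "love"}
    negatives = {"bad", "awful", "terrible", "hate"}
    words = set(text.lower().split())
    score = len(words & positives) - len(words & negatives)
    return "pos" if score > 0 else "neg" if score < 0 else "abstain"
'''

namespace = {}
exec(generated_program, namespace)      # one API call yields a reusable labeler
label = namespace["label"]

# The stored program is then applied locally to the whole dataset at no
# additional API cost, and can be audited or extended later.
dataset = ["great movie, love it", "awful plot", "it exists"]
labels = [label(t) for t in dataset]    # ['pos', 'neg', 'abstain']
```

Because the program, not the label set, is the artifact, relabeling a grown or corrected dataset costs nothing beyond local compute.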
2009.02678
|
Raja Appuswamy
|
Raja Appuswamy and Vincent Joguin
|
Universal Layout Emulation for Long-Term Database Archival
| null | null | null | null |
cs.DB cs.DL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Research on alternate media technologies, like film, synthetic DNA, and
glass, for long-term data archival has received a lot of attention recently due
to the media obsolescence issues faced by contemporary storage media like tape,
Hard Disk Drives (HDD), and Solid State Disks (SSD). While researchers have
developed novel layout and encoding techniques for archiving databases on these
new media types, one key question remains unaddressed: How do we ensure that
the decoders developed today will be available and executable by a user who is
restoring an archived database several decades later in the future, on a
computing platform that potentially does not even exist today?
In this paper, we make the case for Universal Layout Emulation (ULE), a new
approach for future-proof, long-term database archival that advocates archiving
decoders together with the data to ensure successful recovery. In order to do
so, ULE brings together concepts from Data Management and Digital Preservation
communities by using emulation for archiving decoders. In order to show that
ULE can be implemented in practice, we present the design and evaluation of
Micr'Olonys, an end-to-end long-term database archival system that can be used
to archive databases using visual analog media like film, microform, and
archival paper.
|
[
{
"created": "Sun, 6 Sep 2020 09:06:13 GMT",
"version": "v1"
},
{
"created": "Tue, 8 Sep 2020 10:09:25 GMT",
"version": "v2"
}
] |
2020-09-09
|
[
[
"Appuswamy",
"Raja",
""
],
[
"Joguin",
"Vincent",
""
]
] |
Research on alternate media technologies, like film, synthetic DNA, and glass, for long-term data archival has received a lot of attention recently due to the media obsolescence issues faced by contemporary storage media like tape, Hard Disk Drives (HDD), and Solid State Disks (SSD). While researchers have developed novel layout and encoding techniques for archiving databases on these new media types, one key question remains unaddressed: How do we ensure that the decoders developed today will be available and executable by a user who is restoring an archived database several decades later in the future, on a computing platform that potentially does not even exist today? In this paper, we make the case for Universal Layout Emulation (ULE), a new approach for future-proof, long-term database archival that advocates archiving decoders together with the data to ensure successful recovery. In order to do so, ULE brings together concepts from Data Management and Digital Preservation communities by using emulation for archiving decoders. In order to show that ULE can be implemented in practice, we present the design and evaluation of Micr'Olonys, an end-to-end long-term database archival system that can be used to archive databases using visual analog media like film, microform, and archival paper.
|
2306.17670
|
Ilyass Hammouamri
|
Ilyass Hammouamri, Ismail Khalfaoui-Hassani, Timoth\'ee Masquelier
|
Learning Delays in Spiking Neural Networks using Dilated Convolutions
with Learnable Spacings
| null |
ICLR 2024
| null | null |
cs.NE cs.AI cs.LG eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
Spiking Neural Networks (SNNs) are a promising research direction for
building power-efficient information processing systems, especially for
temporal tasks such as speech recognition. In SNNs, delays refer to the time
needed for one spike to travel from one neuron to another. These delays matter
because they influence the spike arrival times, and it is well-known that
spiking neurons respond more strongly to coincident input spikes. More
formally, it has been shown theoretically that plastic delays greatly increase
the expressivity in SNNs. Yet, efficient algorithms to learn these delays have
been lacking. Here, we propose a new discrete-time algorithm that addresses
this issue in deep feedforward SNNs using backpropagation, in an offline
manner. To simulate delays between consecutive layers, we use 1D convolutions
across time. The kernels contain only a few non-zero weights - one per synapse
- whose positions correspond to the delays. These positions are learned
together with the weights using the recently proposed Dilated Convolution with
Learnable Spacings (DCLS). We evaluated our method on three datasets: the
Spiking Heidelberg Dataset (SHD), the Spiking Speech Commands (SSC) and its
non-spiking version Google Speech Commands v0.02 (GSC) benchmarks, which
require detecting temporal patterns. We used feedforward SNNs with two or three
hidden fully connected layers, and vanilla leaky integrate-and-fire neurons. We
showed that fixed random delays help and that learning them helps even more.
Furthermore, our method outperformed the state-of-the-art in the three datasets
without using recurrent connections and with substantially fewer parameters.
Our work demonstrates the potential of delay learning in developing accurate
and precise models for temporal data processing. Our code is based on PyTorch /
SpikingJelly and available at: https://github.com/Thvnvtos/SNN-delays
|
[
{
"created": "Fri, 30 Jun 2023 14:01:53 GMT",
"version": "v1"
},
{
"created": "Thu, 31 Aug 2023 14:53:15 GMT",
"version": "v2"
},
{
"created": "Fri, 1 Dec 2023 14:23:16 GMT",
"version": "v3"
}
] |
2024-08-13
|
[
[
"Hammouamri",
"Ilyass",
""
],
[
"Khalfaoui-Hassani",
"Ismail",
""
],
[
"Masquelier",
"Timothée",
""
]
] |
Spiking Neural Networks (SNNs) are a promising research direction for building power-efficient information processing systems, especially for temporal tasks such as speech recognition. In SNNs, delays refer to the time needed for one spike to travel from one neuron to another. These delays matter because they influence the spike arrival times, and it is well-known that spiking neurons respond more strongly to coincident input spikes. More formally, it has been shown theoretically that plastic delays greatly increase the expressivity in SNNs. Yet, efficient algorithms to learn these delays have been lacking. Here, we propose a new discrete-time algorithm that addresses this issue in deep feedforward SNNs using backpropagation, in an offline manner. To simulate delays between consecutive layers, we use 1D convolutions across time. The kernels contain only a few non-zero weights - one per synapse - whose positions correspond to the delays. These positions are learned together with the weights using the recently proposed Dilated Convolution with Learnable Spacings (DCLS). We evaluated our method on three datasets: the Spiking Heidelberg Dataset (SHD), the Spiking Speech Commands (SSC) and its non-spiking version Google Speech Commands v0.02 (GSC) benchmarks, which require detecting temporal patterns. We used feedforward SNNs with two or three hidden fully connected layers, and vanilla leaky integrate-and-fire neurons. We showed that fixed random delays help and that learning them helps even more. Furthermore, our method outperformed the state-of-the-art in the three datasets without using recurrent connections and with substantially fewer parameters. Our work demonstrates the potential of delay learning in developing accurate and precise models for temporal data processing. Our code is based on PyTorch / SpikingJelly and available at: https://github.com/Thvnvtos/SNN-delays
|
1606.03268
|
Andr\'e Nichterlein
|
Christian Komusiewicz, Andr\'e Nichterlein, Rolf Niedermeier
|
Parameterized Algorithmics for Graph Modification Problems: On
Interactions with Heuristics
|
Invited Paper at the 41st International Workshop on Graph-Theoretic
Concepts in Computer Science (WG 15)
| null | null | null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In graph modification problems, one is given a graph G and the goal is to
apply a minimum number of modification operations (such as edge deletions) to G
such that the resulting graph fulfills a certain property. For example, the
Cluster Deletion problem asks to delete as few edges as possible such that the
resulting graph is a disjoint union of cliques. Graph modification problems
appear in numerous applications, including the analysis of biological and
social networks. Typically, graph modification problems are NP-hard, making
them natural candidates for parameterized complexity studies. We discuss
several fruitful interactions between the development of fixed-parameter
algorithms and the design of heuristics for graph modification problems,
featuring quite different aspects of mutual benefits.
|
[
{
"created": "Fri, 10 Jun 2016 10:50:28 GMT",
"version": "v1"
}
] |
2016-06-13
|
[
[
"Komusiewicz",
"Christian",
""
],
[
"Nichterlein",
"André",
""
],
[
"Niedermeier",
"Rolf",
""
]
] |
In graph modification problems, one is given a graph G and the goal is to apply a minimum number of modification operations (such as edge deletions) to G such that the resulting graph fulfills a certain property. For example, the Cluster Deletion problem asks to delete as few edges as possible such that the resulting graph is a disjoint union of cliques. Graph modification problems appear in numerous applications, including the analysis of biological and social networks. Typically, graph modification problems are NP-hard, making them natural candidates for parameterized complexity studies. We discuss several fruitful interactions between the development of fixed-parameter algorithms and the design of heuristics for graph modification problems, featuring quite different aspects of mutual benefits.
|
2003.10026
|
Yan Fang
|
Ashwin Sanjay Lele, Yan Fang, Justin Ting, Arijit Raychowdhury
|
Learning to Walk: Spike Based Reinforcement Learning for Hexapod Robot
Central Pattern Generation
|
5 pages, 7 figures, to be published in proceeding of IEEE AICAS
| null | null | null |
cs.NE cs.RO cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Learning to walk -- i.e., learning locomotion under performance and energy
constraints -- continues to be a challenge in legged robotics. Methods such as
stochastic gradient descent and deep reinforcement learning (RL) have been
explored for bipeds, quadrupeds and hexapods. These techniques are
computationally intensive and often prohibitive for edge applications. These
methods rely on complex sensors and pre-processing of data, which further
increases energy and latency. Recent advances in spiking neural networks (SNNs)
promise a significant reduction in computing owing to the sparse firing of
neurons and have been shown to integrate reinforcement learning mechanisms with
biologically observed spike time dependent plasticity (STDP). However, training a legged robot to walk by
learning the synchronization patterns of central pattern generators (CPG) in an
SNN framework has not been shown. This can marry the efficiency of SNNs with
synchronized locomotion of CPG based systems providing breakthrough end-to-end
learning in mobile robotics. In this paper, we propose a reinforcement based
stochastic weight update technique for training a spiking CPG. The whole system
is implemented on a lightweight Raspberry Pi platform with integrated sensors,
thus opening up exciting new possibilities.
|
[
{
"created": "Sun, 22 Mar 2020 23:45:32 GMT",
"version": "v1"
}
] |
2020-03-24
|
[
[
"Lele",
"Ashwin Sanjay",
""
],
[
"Fang",
"Yan",
""
],
[
"Ting",
"Justin",
""
],
[
"Raychowdhury",
"Arijit",
""
]
] |
Learning to walk -- i.e., learning locomotion under performance and energy constraints -- continues to be a challenge in legged robotics. Methods such as stochastic gradient descent and deep reinforcement learning (RL) have been explored for bipeds, quadrupeds and hexapods. These techniques are computationally intensive and often prohibitive for edge applications. These methods rely on complex sensors and pre-processing of data, which further increases energy and latency. Recent advances in spiking neural networks (SNNs) promise a significant reduction in computing owing to the sparse firing of neurons and have been shown to integrate reinforcement learning mechanisms with biologically observed spike time dependent plasticity (STDP). However, training a legged robot to walk by learning the synchronization patterns of central pattern generators (CPG) in an SNN framework has not been shown. This can marry the efficiency of SNNs with synchronized locomotion of CPG based systems providing breakthrough end-to-end learning in mobile robotics. In this paper, we propose a reinforcement based stochastic weight update technique for training a spiking CPG. The whole system is implemented on a lightweight Raspberry Pi platform with integrated sensors, thus opening up exciting new possibilities.
|
cs/0611099
|
Travis Gagie
|
Travis Gagie
|
On the space complexity of one-pass compression
| null | null | null | null |
cs.IT math.IT
| null |
We study how much memory one-pass compression algorithms need to compete with
the best multi-pass algorithms. We call a one-pass algorithm an $f(n,
\ell)$-footprint compressor if, given $n$, $\ell$ and an $n$-ary string $S$, it
stores $S$ in $(O(H_\ell(S)) + o(\log n))|S| + O(n^{\ell + 1} \log n)$ bits --
where $H_\ell(S)$ is the $\ell$th-order empirical entropy of $S$ -- while using
at most $f(n, \ell)$ bits of memory. We prove that, for any $\epsilon > 0$ and
some $f(n, \ell) \in O(n^{\ell + \epsilon} \log n)$, there is an $f(n,
\ell)$-footprint compressor; on the other hand, there is no $f(n,
\ell)$-footprint compressor for $f(n, \ell) \in o(n^\ell \log n)$.
|
[
{
"created": "Tue, 21 Nov 2006 02:06:31 GMT",
"version": "v1"
}
] |
2007-07-16
|
[
[
"Gagie",
"Travis",
""
]
] |
We study how much memory one-pass compression algorithms need to compete with the best multi-pass algorithms. We call a one-pass algorithm an $f(n, \ell)$-footprint compressor if, given $n$, $\ell$ and an $n$-ary string $S$, it stores $S$ in $(O(H_\ell(S)) + o(\log n))|S| + O(n^{\ell + 1} \log n)$ bits -- where $H_\ell(S)$ is the $\ell$th-order empirical entropy of $S$ -- while using at most $f(n, \ell)$ bits of memory. We prove that, for any $\epsilon > 0$ and some $f(n, \ell) \in O(n^{\ell + \epsilon} \log n)$, there is an $f(n, \ell)$-footprint compressor; on the other hand, there is no $f(n, \ell)$-footprint compressor for $f(n, \ell) \in o(n^\ell \log n)$.
|
2208.14602
|
Yinhe Zheng Dr.
|
Yinhe Zheng
|
Continuous QA Learning with Structured Prompts
|
Duplicate of arXiv:2305.06555 (Please cite arXiv:2305.06555 since it
is the camera ready version)
| null | null | null |
cs.CL cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
QA models with lifelong learning (LL) abilities are important for practical
QA applications, and architecture-based LL methods are reported to be an
effective implementation for these models. However, it is non-trivial to extend
previous approaches to QA tasks since they either require access to task
identities in the testing phase or do not explicitly model samples from unseen
tasks. In this paper, we propose Diana: a dynamic architecture-based lifelong
QA model that tries to learn a sequence of QA tasks with a prompt enhanced
language model. Four types of hierarchically organized prompts are used in
Diana to capture QA knowledge from different granularities. Specifically, we
dedicate task-level prompts to capture task-specific knowledge to retain high
LL performances and maintain instance-level prompts to learn knowledge shared
across different input samples to improve the model's generalization
performance. Moreover, we dedicate separate prompts to explicitly model unseen
tasks and introduce a set of prompt key vectors to facilitate knowledge sharing
between tasks. Extensive experiments demonstrate that Diana outperforms
state-of-the-art lifelong QA models, especially in handling unseen tasks.
|
[
{
"created": "Wed, 31 Aug 2022 02:38:16 GMT",
"version": "v1"
},
{
"created": "Mon, 24 Oct 2022 08:39:02 GMT",
"version": "v2"
},
{
"created": "Fri, 15 Mar 2024 01:53:58 GMT",
"version": "v3"
}
] |
2024-03-18
|
[
[
"Zheng",
"Yinhe",
""
]
] |
QA models with lifelong learning (LL) abilities are important for practical QA applications, and architecture-based LL methods are reported to be an effective implementation for these models. However, it is non-trivial to extend previous approaches to QA tasks since they either require access to task identities in the testing phase or do not explicitly model samples from unseen tasks. In this paper, we propose Diana: a dynamic architecture-based lifelong QA model that tries to learn a sequence of QA tasks with a prompt enhanced language model. Four types of hierarchically organized prompts are used in Diana to capture QA knowledge from different granularities. Specifically, we dedicate task-level prompts to capture task-specific knowledge to retain high LL performances and maintain instance-level prompts to learn knowledge shared across different input samples to improve the model's generalization performance. Moreover, we dedicate separate prompts to explicitly model unseen tasks and introduce a set of prompt key vectors to facilitate knowledge sharing between tasks. Extensive experiments demonstrate that Diana outperforms state-of-the-art lifelong QA models, especially in handling unseen tasks.
|
1906.11518
|
Longbin Lai
|
Longbin Lai, Zhu Qing, Zhengyi Yang, Xin Jin, Zhengmin Lai, Ran Wang,
Kongzhang Hao, Xuemin Lin, Lu Qin, Wenjie Zhang, Ying Zhang, Zhengping Qian
and Jingren Zhou
|
A Survey and Experimental Analysis of Distributed Subgraph Matching
| null | null | null | null |
cs.DB
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recently, many distributed algorithms have emerged that aim at solving
subgraph matching at scale. Existing algorithm-level comparisons have failed to
provide a systematic view of the pros and cons of each algorithm, mainly due to
the intertwining of strategy and optimization. In this paper, we identify four
strategies and three general-purpose optimizations from representative
state-of-the-art works. We implement the four strategies with the optimizations
based on the common Timely dataflow system for systematic strategy-level
comparison. Our implementation covers all representative algorithms. We conduct
extensive experiments for both unlabelled matching and labelled matching to
analyze the performance of distributed subgraph matching under various
settings, which is finally summarized as a practical guide.
|
[
{
"created": "Thu, 27 Jun 2019 09:38:46 GMT",
"version": "v1"
}
] |
2019-06-28
|
[
[
"Lai",
"Longbin",
""
],
[
"Qing",
"Zhu",
""
],
[
"Yang",
"Zhengyi",
""
],
[
"Jin",
"Xin",
""
],
[
"Lai",
"Zhengmin",
""
],
[
"Wang",
"Ran",
""
],
[
"Hao",
"Kongzhang",
""
],
[
"Lin",
"Xuemin",
""
],
[
"Qin",
"Lu",
""
],
[
"Zhang",
"Wenjie",
""
],
[
"Zhang",
"Ying",
""
],
[
"Qian",
"Zhengping",
""
],
[
"Zhou",
"Jingren",
""
]
] |
Recently, many distributed algorithms have emerged that aim at solving subgraph matching at scale. Existing algorithm-level comparisons have failed to provide a systematic view of the pros and cons of each algorithm, mainly due to the intertwining of strategy and optimization. In this paper, we identify four strategies and three general-purpose optimizations from representative state-of-the-art works. We implement the four strategies with the optimizations based on the common Timely dataflow system for systematic strategy-level comparison. Our implementation covers all representative algorithms. We conduct extensive experiments for both unlabelled matching and labelled matching to analyze the performance of distributed subgraph matching under various settings, which is finally summarized as a practical guide.
|
2101.03263
|
Matthew Sotoudeh
|
Matthew Sotoudeh and Aditya V. Thakur
|
SyReNN: A Tool for Analyzing Deep Neural Networks
|
Accepted paper at TACAS 2021. Tool is available at
https://github.com/95616ARG/SyReNN
| null | null | null |
cs.LG cs.PL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Deep Neural Networks (DNNs) are rapidly gaining popularity in a variety of
important domains. Formally, DNNs are complicated vector-valued functions which
come in a variety of sizes and applications. Unfortunately, modern DNNs have
been shown to be vulnerable to a variety of attacks and buggy behavior. This
has motivated recent work in formally analyzing the properties of such DNNs.
This paper introduces SyReNN, a tool for understanding and analyzing a DNN by
computing its symbolic representation. The key insight is to decompose the DNN
into linear functions. Our tool is designed for analyses using low-dimensional
subsets of the input space, a unique design point in the space of DNN analysis
tools. We describe the tool and the underlying theory, then evaluate its use
and performance on three case studies: computing Integrated Gradients,
visualizing a DNN's decision boundaries, and patching a DNN.
|
[
{
"created": "Sat, 9 Jan 2021 00:27:23 GMT",
"version": "v1"
}
] |
2021-01-12
|
[
[
"Sotoudeh",
"Matthew",
""
],
[
"Thakur",
"Aditya V.",
""
]
] |
Deep Neural Networks (DNNs) are rapidly gaining popularity in a variety of important domains. Formally, DNNs are complicated vector-valued functions which come in a variety of sizes and applications. Unfortunately, modern DNNs have been shown to be vulnerable to a variety of attacks and buggy behavior. This has motivated recent work in formally analyzing the properties of such DNNs. This paper introduces SyReNN, a tool for understanding and analyzing a DNN by computing its symbolic representation. The key insight is to decompose the DNN into linear functions. Our tool is designed for analyses using low-dimensional subsets of the input space, a unique design point in the space of DNN analysis tools. We describe the tool and the underlying theory, then evaluate its use and performance on three case studies: computing Integrated Gradients, visualizing a DNN's decision boundaries, and patching a DNN.
|
2205.09249
|
Hyounghun Kim
|
Hyounghun Kim, Aishwarya Padmakumar, Di Jin, Mohit Bansal, Dilek
Hakkani-Tur
|
On the Limits of Evaluating Embodied Agent Model Generalization Using
Validation Sets
|
ACL 2022 Insights Workshop (6 pages)
| null | null | null |
cs.CL cs.AI cs.CV cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Natural language guided embodied task completion is a challenging problem
since it requires understanding natural language instructions, aligning them
with egocentric visual observations, and choosing appropriate actions to
execute in the environment to produce desired changes. We experiment with
augmenting a transformer model for this task with modules that effectively
utilize a wider field of view and learn to choose whether the next step
requires a navigation or manipulation action. We observed that the proposed
modules resulted in improved, and in fact state-of-the-art performance on an
unseen validation set of a popular benchmark dataset, ALFRED. However, our best
model selected using the unseen validation set underperforms on the unseen test
split of ALFRED, indicating that performance on the unseen validation set may
not in itself be a sufficient indicator of whether model improvements
generalize to unseen test sets. We highlight this result as we believe it may
be a wider phenomenon in machine learning tasks, but one primarily noticeable
only in benchmarks that limit evaluations on test splits, and it highlights the
need to modify benchmark design to better account for variance in model
performance.
|
[
{
"created": "Wed, 18 May 2022 23:52:21 GMT",
"version": "v1"
}
] |
2022-05-20
|
[
[
"Kim",
"Hyounghun",
""
],
[
"Padmakumar",
"Aishwarya",
""
],
[
"Jin",
"Di",
""
],
[
"Bansal",
"Mohit",
""
],
[
"Hakkani-Tur",
"Dilek",
""
]
] |
Natural language guided embodied task completion is a challenging problem since it requires understanding natural language instructions, aligning them with egocentric visual observations, and choosing appropriate actions to execute in the environment to produce desired changes. We experiment with augmenting a transformer model for this task with modules that effectively utilize a wider field of view and learn to choose whether the next step requires a navigation or manipulation action. We observed that the proposed modules resulted in improved, and in fact state-of-the-art performance on an unseen validation set of a popular benchmark dataset, ALFRED. However, our best model selected using the unseen validation set underperforms on the unseen test split of ALFRED, indicating that performance on the unseen validation set may not in itself be a sufficient indicator of whether model improvements generalize to unseen test sets. We highlight this result as we believe it may be a wider phenomenon in machine learning tasks, but one primarily noticeable only in benchmarks that limit evaluations on test splits, and it highlights the need to modify benchmark design to better account for variance in model performance.
|
2004.04077
|
Andrea Cossu
|
Andrea Cossu, Antonio Carta, Davide Bacciu
|
Continual Learning with Gated Incremental Memories for sequential data
processing
|
Accepted as a conference paper at 2020 International Joint Conference
on Neural Networks (IJCNN 2020). Part of 2020 IEEE World Congress on
Computational Intelligence (IEEE WCCI 2020)
| null |
10.1109/IJCNN48605.2020.9207550
| null |
cs.LG cs.NE stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The ability to learn in dynamic, nonstationary environments without
forgetting previous knowledge, also known as Continual Learning (CL), is a key
enabler for scalable and trustworthy deployments of adaptive solutions. While
the importance of continual learning is largely acknowledged in machine vision
and reinforcement learning problems, this is mostly under-documented for
sequence processing tasks. This work proposes a Recurrent Neural Network (RNN)
model for CL that is able to deal with concept drift in input distribution
without forgetting previously acquired knowledge. We also implement and test a
popular CL approach, Elastic Weight Consolidation (EWC), on top of two
different types of RNNs. Finally, we compare the performances of our enhanced
architecture against EWC and RNNs on a set of standard CL benchmarks, adapted
to the sequential data processing scenario. Results show the superior
performance of our architecture and highlight the need for special solutions
designed to address CL in RNNs.
|
[
{
"created": "Wed, 8 Apr 2020 16:00:20 GMT",
"version": "v1"
}
] |
2021-03-25
|
[
[
"Cossu",
"Andrea",
""
],
[
"Carta",
"Antonio",
""
],
[
"Bacciu",
"Davide",
""
]
] |
The ability to learn in dynamic, nonstationary environments without forgetting previous knowledge, also known as Continual Learning (CL), is a key enabler for scalable and trustworthy deployments of adaptive solutions. While the importance of continual learning is largely acknowledged in machine vision and reinforcement learning problems, this is mostly under-documented for sequence processing tasks. This work proposes a Recurrent Neural Network (RNN) model for CL that is able to deal with concept drift in input distribution without forgetting previously acquired knowledge. We also implement and test a popular CL approach, Elastic Weight Consolidation (EWC), on top of two different types of RNNs. Finally, we compare the performances of our enhanced architecture against EWC and RNNs on a set of standard CL benchmarks, adapted to the sequential data processing scenario. Results show the superior performance of our architecture and highlight the need for special solutions designed to address CL in RNNs.
|
2109.04029
|
Triet Le
|
Xuanyu Duan, Mengmeng Ge, Triet H. M. Le, Faheem Ullah, Shang Gao,
Xuequan Lu, M. Ali Babar
|
Automated Security Assessment for the Internet of Things
|
Accepted for publication at the 26th IEEE Pacific Rim International
Symposium on Dependable Computing (PRDC 2021)
| null | null | null |
cs.CR cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Internet of Things (IoT) based applications face an increasing number of
potential security risks, which need to be systematically assessed and
addressed. Expert-based manual assessment of IoT security is a predominant
approach, which is usually inefficient. To address this problem, we propose an
automated security assessment framework for IoT networks. Our framework first
leverages machine learning and natural language processing to analyze
vulnerability descriptions for predicting vulnerability metrics. The predicted
metrics are then input into a two-layered graphical security model, which
consists of an attack graph at the upper layer to present the network
connectivity and an attack tree for each node in the network at the bottom
layer to depict the vulnerability information. This security model
automatically assesses the security of the IoT network by capturing potential
attack paths. We evaluate the viability of our approach using a
proof-of-concept smart building system model which contains a variety of
real-world IoT devices and potential vulnerabilities. Our evaluation of the
proposed framework demonstrates its effectiveness in terms of automatically
predicting the vulnerability metrics of new vulnerabilities with more than 90%
accuracy, on average, and identifying the most vulnerable attack paths within
an IoT network. The produced assessment results can serve as a guideline for
cybersecurity professionals to take further actions and mitigate risks in a
timely manner.
|
[
{
"created": "Thu, 9 Sep 2021 04:42:24 GMT",
"version": "v1"
}
] |
2021-09-10
|
[
[
"Duan",
"Xuanyu",
""
],
[
"Ge",
"Mengmeng",
""
],
[
"Le",
"Triet H. M.",
""
],
[
"Ullah",
"Faheem",
""
],
[
"Gao",
"Shang",
""
],
[
"Lu",
"Xuequan",
""
],
[
"Babar",
"M. Ali",
""
]
] |
Internet of Things (IoT) based applications face an increasing number of potential security risks, which need to be systematically assessed and addressed. Expert-based manual assessment of IoT security is a predominant approach, which is usually inefficient. To address this problem, we propose an automated security assessment framework for IoT networks. Our framework first leverages machine learning and natural language processing to analyze vulnerability descriptions for predicting vulnerability metrics. The predicted metrics are then input into a two-layered graphical security model, which consists of an attack graph at the upper layer to present the network connectivity and an attack tree for each node in the network at the bottom layer to depict the vulnerability information. This security model automatically assesses the security of the IoT network by capturing potential attack paths. We evaluate the viability of our approach using a proof-of-concept smart building system model which contains a variety of real-world IoT devices and potential vulnerabilities. Our evaluation of the proposed framework demonstrates its effectiveness in terms of automatically predicting the vulnerability metrics of new vulnerabilities with more than 90% accuracy, on average, and identifying the most vulnerable attack paths within an IoT network. The produced assessment results can serve as a guideline for cybersecurity professionals to take further actions and mitigate risks in a timely manner.
|
1807.01659
|
Guang-Yuan Hao
|
Guang-Yuan Hao, Hong-Xing Yu, Wei-Shi Zheng
|
MIXGAN: Learning Concepts from Different Domains for Mixture Generation
|
Accepted by IJCAI-ECAI 2018, the 27th International Joint Conference
on Artificial Intelligence and the 23rd European Conference on Artificial
Intelligence
| null | null | null |
cs.LG cs.AI stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this work, we present an interesting attempt on mixture generation:
absorbing different image concepts (e.g., content and style) from different
domains and thus generating a new domain with learned concepts. In particular,
we propose a mixture generative adversarial network (MIXGAN). MIXGAN learns
concepts of content and style from two domains respectively, and thus can join
them for mixture generation in a new domain, i.e., generating images with
content from one domain and style from another. MIXGAN overcomes the limitation
of current GAN-based models which either generate new images in the same domain
as they observed in training stage, or require off-the-shelf content templates
for transferring or translation. Extensive experimental results demonstrate the
effectiveness of MIXGAN as compared to related state-of-the-art GAN-based
models.
|
[
{
"created": "Wed, 4 Jul 2018 16:20:47 GMT",
"version": "v1"
}
] |
2018-07-05
|
[
[
"Hao",
"Guang-Yuan",
""
],
[
"Yu",
"Hong-Xing",
""
],
[
"Zheng",
"Wei-Shi",
""
]
] |
In this work, we present an interesting attempt on mixture generation: absorbing different image concepts (e.g., content and style) from different domains and thus generating a new domain with learned concepts. In particular, we propose a mixture generative adversarial network (MIXGAN). MIXGAN learns concepts of content and style from two domains respectively, and thus can join them for mixture generation in a new domain, i.e., generating images with content from one domain and style from another. MIXGAN overcomes the limitation of current GAN-based models which either generate new images in the same domain as observed in the training stage, or require off-the-shelf content templates for transfer or translation. Extensive experimental results demonstrate the effectiveness of MIXGAN as compared to related state-of-the-art GAN-based models.
|
2407.05216
|
Cheng-Han Chiang
|
Cheng-Han Chiang, Wei-Chih Chen, Chun-Yi Kuan, Chienchou Yang, Hung-yi
Lee
|
Large Language Model as an Assignment Evaluator: Insights, Feedback, and
Challenges in a 1000+ Student Course
|
An empirical report of our course: Introduction to Generative AI 2024
Spring (https://speech.ee.ntu.edu.tw/~hylee/genai/2024-spring.php)
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Using large language models (LLMs) for automatic evaluation has become an
important evaluation method in NLP research. However, it is unclear whether
these LLM-based evaluators can be applied in real-world classrooms to assess
student assignments. This empirical report shares how we use GPT-4 as an
automatic assignment evaluator in a university course with 1,028 students.
Based on student responses, we find that LLM-based assignment evaluators are
generally acceptable to students when students have free access to these
LLM-based evaluators. However, students also noted that the LLM sometimes fails
to adhere to the evaluation instructions. Additionally, we observe that
students can easily manipulate the LLM-based evaluator to output specific
strings, allowing them to achieve high scores without meeting the assignment
rubric. Based on student feedback and our experience, we provide several
recommendations for integrating LLM-based evaluators into future classrooms.
|
[
{
"created": "Sun, 7 Jul 2024 00:17:24 GMT",
"version": "v1"
}
] |
2024-07-09
|
[
[
"Chiang",
"Cheng-Han",
""
],
[
"Chen",
"Wei-Chih",
""
],
[
"Kuan",
"Chun-Yi",
""
],
[
"Yang",
"Chienchou",
""
],
[
"Lee",
"Hung-yi",
""
]
] |
Using large language models (LLMs) for automatic evaluation has become an important evaluation method in NLP research. However, it is unclear whether these LLM-based evaluators can be applied in real-world classrooms to assess student assignments. This empirical report shares how we use GPT-4 as an automatic assignment evaluator in a university course with 1,028 students. Based on student responses, we find that LLM-based assignment evaluators are generally acceptable to students when students have free access to these LLM-based evaluators. However, students also noted that the LLM sometimes fails to adhere to the evaluation instructions. Additionally, we observe that students can easily manipulate the LLM-based evaluator to output specific strings, allowing them to achieve high scores without meeting the assignment rubric. Based on student feedback and our experience, we provide several recommendations for integrating LLM-based evaluators into future classrooms.
|
2210.11050
|
Zeyu Cao
|
Zeyu Cao, Zhipeng Liang, Shu Zhang, Hangyu Li, Ouyang Wen, Yu Rong,
Peilin Zhao, Bingzhe Wu
|
Vertical Federated Linear Contextual Bandits
| null | null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we investigate a novel problem of building contextual bandits
in the vertical federated setting, i.e., contextual information is vertically
distributed over different departments. This problem remains largely unexplored
in the research community. To this end, we carefully design a customized
encryption scheme named orthogonal matrix-based mask mechanism (O3M) for
encrypting local contextual information while avoiding expensive conventional
cryptographic techniques. We further apply the mechanism to two commonly-used
bandit algorithms, LinUCB and LinTS, and instantiate two practical protocols
for online recommendation under the vertical federated setting. The proposed
protocols can perfectly recover the service quality of centralized bandit
algorithms while achieving a satisfactory runtime efficiency, which is
theoretically proved and analyzed in this paper. By conducting extensive
experiments on both synthetic and real-world datasets, we show the superiority
of the proposed method in terms of privacy protection and recommendation
performance.
|
[
{
"created": "Thu, 20 Oct 2022 06:59:42 GMT",
"version": "v1"
}
] |
2022-10-21
|
[
[
"Cao",
"Zeyu",
""
],
[
"Liang",
"Zhipeng",
""
],
[
"Zhang",
"Shu",
""
],
[
"Li",
"Hangyu",
""
],
[
"Wen",
"Ouyang",
""
],
[
"Rong",
"Yu",
""
],
[
"Zhao",
"Peilin",
""
],
[
"Wu",
"Bingzhe",
""
]
] |
In this paper, we investigate a novel problem of building contextual bandits in the vertical federated setting, i.e., contextual information is vertically distributed over different departments. This problem remains largely unexplored in the research community. To this end, we carefully design a customized encryption scheme named orthogonal matrix-based mask mechanism (O3M) for encrypting local contextual information while avoiding expensive conventional cryptographic techniques. We further apply the mechanism to two commonly-used bandit algorithms, LinUCB and LinTS, and instantiate two practical protocols for online recommendation under the vertical federated setting. The proposed protocols can perfectly recover the service quality of centralized bandit algorithms while achieving a satisfactory runtime efficiency, which is theoretically proved and analyzed in this paper. By conducting extensive experiments on both synthetic and real-world datasets, we show the superiority of the proposed method in terms of privacy protection and recommendation performance.
|
1805.00329
|
Michele Alberti
|
Michele Alberti, Vinaychandran Pondenkandath, Marcel W\"ursch, Rolf
Ingold, Marcus Liwicki
|
DeepDIVA: A Highly-Functional Python Framework for Reproducible
Experiments
|
Submitted at the 16th International Conference on Frontiers in
Handwriting Recognition (ICFHR), 6 pages, 6 Figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce DeepDIVA: an infrastructure designed to enable quick and
intuitive setup of reproducible experiments with a large range of useful
analysis functionality. Reproducing scientific results can be a frustrating
experience, not only in document image analysis but in machine learning in
general. Using DeepDIVA a researcher can either reproduce a given experiment
with a very limited amount of information or share their own experiments with
others. Moreover, the framework offers a large range of functions, such as
boilerplate code, keeping track of experiments, hyper-parameter optimization,
and visualization of data and results. To demonstrate the effectiveness of this
framework, this paper presents case studies in the area of handwritten document
analysis where researchers benefit from the integrated functionality. DeepDIVA
is implemented in Python and uses the deep learning framework PyTorch. It is
completely open source, and accessible as Web Service through DIVAServices.
|
[
{
"created": "Mon, 23 Apr 2018 20:00:42 GMT",
"version": "v1"
}
] |
2018-05-02
|
[
[
"Alberti",
"Michele",
""
],
[
"Pondenkandath",
"Vinaychandran",
""
],
[
"Würsch",
"Marcel",
""
],
[
"Ingold",
"Rolf",
""
],
[
"Liwicki",
"Marcus",
""
]
] |
We introduce DeepDIVA: an infrastructure designed to enable quick and intuitive setup of reproducible experiments with a large range of useful analysis functionality. Reproducing scientific results can be a frustrating experience, not only in document image analysis but in machine learning in general. Using DeepDIVA a researcher can either reproduce a given experiment with a very limited amount of information or share their own experiments with others. Moreover, the framework offers a large range of functions, such as boilerplate code, keeping track of experiments, hyper-parameter optimization, and visualization of data and results. To demonstrate the effectiveness of this framework, this paper presents case studies in the area of handwritten document analysis where researchers benefit from the integrated functionality. DeepDIVA is implemented in Python and uses the deep learning framework PyTorch. It is completely open source, and accessible as Web Service through DIVAServices.
|
1503.06680
|
Kieran Larkin
|
Kieran Gerard Larkin
|
Structural Similarity Index SSIMplified: Is there really a simpler
concept at the heart of image quality measurement?
|
Updated abstract and references. 4 pages total, main analysis 2
pages, notes and minimal references 1 page
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Structural Similarity Index (SSIM) is generally considered to be a
milestone in the recent history of Image Quality Assessment (IQA). Alas, SSIM's
accepted development from the product of three heuristic factors continues to
obscure its real underlying simplicity. Starting instead from a
symmetric-antisymmetric reformulation we first show SSIM to be a contrast or
visibility function in the classic sense. Furthermore, the previously enigmatic
structural covariance is revealed to be the difference of variances. The second
step, eliminating the intrinsic quadratic nature of SSIM, allows a near linear
correlation with human observer scores, and without invoking the usual, but
arbitrary, sigmoid model fitting. We conclude that SSIM can be re-interpreted
in terms of perceptual masking: it is essentially equivalent to a normalised
error or noise visibility function (NVF), and, furthermore, the NVF alone
explains its success in modelling perceptual image quality. We use the term
Dissimilarity Quotient (DQ) for the specifically anti/symmetric SSIM derived
NVF. It seems that IQA researchers may now have two choices: 1) Continue to use
the complex SSIM formula, but noting that SSIM only works coincidentally since
the covariance term is actually the mean square error (MSE) in disguise. 2) Use
the simplest of all perceptually-masked image quality metrics, namely NVF or
DQ. On this choice Occam is clear: in the absence of differences in predictive
ability, the fewer assumptions that are made, the better.
|
[
{
"created": "Thu, 29 Jan 2015 21:27:49 GMT",
"version": "v1"
},
{
"created": "Mon, 25 May 2015 01:53:07 GMT",
"version": "v2"
}
] |
2015-05-26
|
[
[
"Larkin",
"Kieran Gerard",
""
]
] |
The Structural Similarity Index (SSIM) is generally considered to be a milestone in the recent history of Image Quality Assessment (IQA). Alas, SSIM's accepted development from the product of three heuristic factors continues to obscure its real underlying simplicity. Starting instead from a symmetric-antisymmetric reformulation we first show SSIM to be a contrast or visibility function in the classic sense. Furthermore, the previously enigmatic structural covariance is revealed to be the difference of variances. The second step, eliminating the intrinsic quadratic nature of SSIM, allows a near linear correlation with human observer scores, and without invoking the usual, but arbitrary, sigmoid model fitting. We conclude that SSIM can be re-interpreted in terms of perceptual masking: it is essentially equivalent to a normalised error or noise visibility function (NVF), and, furthermore, the NVF alone explains its success in modelling perceptual image quality. We use the term Dissimilarity Quotient (DQ) for the specifically anti/symmetric SSIM derived NVF. It seems that IQA researchers may now have two choices: 1) Continue to use the complex SSIM formula, but noting that SSIM only works coincidentally since the covariance term is actually the mean square error (MSE) in disguise. 2) Use the simplest of all perceptually-masked image quality metrics, namely NVF or DQ. On this choice Occam is clear: in the absence of differences in predictive ability, the fewer assumptions that are made, the better.
|
1907.01602
|
Gustavo Pinto
|
Wagner Felidr\'e and Leonardo Furtado and Daniel da Costa and Bruno
Cartaxo and Gustavo Pinto
|
Continuous Integration Theater
|
to appear at ESEM 2019
| null | null | null |
cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
Background: Continuous Integration (CI) systems are now the bedrock of
several software development practices. Several tools such as TravisCI,
CircleCI, and Hudson, that implement CI practices, are commonly adopted by
software engineers. However, the way that software engineers use these tools
could lead to what we call "Continuous Integration Theater", a situation in
which software engineers do not employ these tools effectively, leading to
unhealthy CI practices. Aims: The goal of this paper is to make sense of how
commonplace these unhealthy continuous integration practices are in practice.
Method: By inspecting 1,270 open-source projects that use
TravisCI, the most used CI service, we quantitatively studied how common it is to
use CI (1) with infrequent commits, (2) in a software project with poor test
coverage, (3) with builds that stay broken for long periods, and (4) with
builds that take too long to run. Results: We observed that 748 ($\sim$60%)
projects face infrequent commits, which essentially makes the merging process
harder. Moreover, we were able to find code coverage information for 51
projects. The average code coverage was 78%, although Ruby projects have a
higher code coverage than Java projects (86% and 63%, respectively). However,
some projects with very small coverage ($\sim$4%) were found. Still, we observed
that 85% of the studied projects have at least one broken build that takes more
than four days to be fixed. Interestingly, very small projects (up to 1,000
lines of code) are the ones that take the longest to fix broken builds.
Finally, we noted that, for the majority of the studied projects, the build is
executed under the 10 minutes rule of thumb. Conclusions: Our results are
important to an increasing community of software engineers that employ CI
practices on a daily basis but may not be aware of bad practices that are
eventually employed.
|
[
{
"created": "Tue, 2 Jul 2019 19:47:55 GMT",
"version": "v1"
}
] |
2019-07-04
|
[
[
"Felidré",
"Wagner",
""
],
[
"Furtado",
"Leonardo",
""
],
[
"da Costa",
"Daniel",
""
],
[
"Cartaxo",
"Bruno",
""
],
[
"Pinto",
"Gustavo",
""
]
] |
Background: Continuous Integration (CI) systems are now the bedrock of several software development practices. Several tools such as TravisCI, CircleCI, and Hudson, that implement CI practices, are commonly adopted by software engineers. However, the way that software engineers use these tools could lead to what we call "Continuous Integration Theater", a situation in which software engineers do not employ these tools effectively, leading to unhealthy CI practices. Aims: The goal of this paper is to make sense of how commonplace these unhealthy continuous integration practices are in practice. Method: By inspecting 1,270 open-source projects that use TravisCI, the most used CI service, we quantitatively studied how common it is to use CI (1) with infrequent commits, (2) in a software project with poor test coverage, (3) with builds that stay broken for long periods, and (4) with builds that take too long to run. Results: We observed that 748 ($\sim$60%) projects face infrequent commits, which essentially makes the merging process harder. Moreover, we were able to find code coverage information for 51 projects. The average code coverage was 78%, although Ruby projects have a higher code coverage than Java projects (86% and 63%, respectively). However, some projects with very small coverage ($\sim$4%) were found. Still, we observed that 85% of the studied projects have at least one broken build that takes more than four days to be fixed. Interestingly, very small projects (up to 1,000 lines of code) are the ones that take the longest to fix broken builds. Finally, we noted that, for the majority of the studied projects, the build is executed under the 10 minutes rule of thumb. Conclusions: Our results are important to an increasing community of software engineers that employ CI practices on a daily basis but may not be aware of bad practices that are eventually employed.
|
2202.07836
|
Eugene Wu
|
Eugene Wu
|
View Composition Algebra for Ad Hoc Comparison
| null | null | null | null |
cs.HC cs.DB
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Comparison is a core task in visual analysis. Although there are numerous
guidelines to help users design effective visualizations to aid known
comparison tasks, there are few techniques available when users want to make ad
hoc comparisons between marks, trends, or charts during data exploration and
visual analysis. For instance, to compare voting count maps from different
years, two stock trends in a line chart, or a scatterplot of country GDPs with
a textual summary of the average GDP. Ideally, users can directly select the
comparison targets and compare them; however, what elements of a visualization
should be candidate targets, which combinations of targets are safe to compare,
and what comparison operations make sense? This paper proposes a conceptual
model that lets users compose combinations of values, marks, legend elements,
and charts using a set of composition operators that summarize, compute
differences, merge, and model their operands. We further define a View
Composition Algebra (VCA) that is compatible with datacube-based
visualizations, derive an interaction design based on this algebra that
supports ad hoc visual comparisons, and illustrate its utility through several
use cases.
|
[
{
"created": "Wed, 16 Feb 2022 03:01:26 GMT",
"version": "v1"
}
] |
2022-02-17
|
[
[
"Wu",
"Eugene",
""
]
] |
Comparison is a core task in visual analysis. Although there are numerous guidelines to help users design effective visualizations to aid known comparison tasks, there are few techniques available when users want to make ad hoc comparisons between marks, trends, or charts during data exploration and visual analysis. For instance, to compare voting count maps from different years, two stock trends in a line chart, or a scatterplot of country GDPs with a textual summary of the average GDP. Ideally, users can directly select the comparison targets and compare them; however, what elements of a visualization should be candidate targets, which combinations of targets are safe to compare, and what comparison operations make sense? This paper proposes a conceptual model that lets users compose combinations of values, marks, legend elements, and charts using a set of composition operators that summarize, compute differences, merge, and model their operands. We further define a View Composition Algebra (VCA) that is compatible with datacube-based visualizations, derive an interaction design based on this algebra that supports ad hoc visual comparisons, and illustrate its utility through several use cases.
|
1710.06831
|
Pooyan Fazli
|
Utkarsh Patel, Emre Hatay, Mike D'Arcy, Ghazal Zand, and Pooyan Fazli
|
Setting Up the Beam for Human-Centered Service Tasks
|
10 pages
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce the Beam, a collaborative autonomous mobile service robot, based
on SuitableTech's Beam telepresence system. We present a set of enhancements to
the telepresence system, including autonomy, human awareness, increased
computation and sensing capabilities, and integration with the popular Robot
Operating System (ROS) framework. Together, our improvements transform the Beam
into a low-cost platform for research on service robots. We examine the Beam on
target search and object delivery tasks and demonstrate that the robot achieves
a 100% success rate.
|
[
{
"created": "Wed, 18 Oct 2017 17:17:04 GMT",
"version": "v1"
}
] |
2017-10-19
|
[
[
"Patel",
"Utkarsh",
""
],
[
"Hatay",
"Emre",
""
],
[
"D'Arcy",
"Mike",
""
],
[
"Zand",
"Ghazal",
""
],
[
"Fazli",
"Pooyan",
""
]
] |
We introduce the Beam, a collaborative autonomous mobile service robot, based on SuitableTech's Beam telepresence system. We present a set of enhancements to the telepresence system, including autonomy, human awareness, increased computation and sensing capabilities, and integration with the popular Robot Operating System (ROS) framework. Together, our improvements transform the Beam into a low-cost platform for research on service robots. We examine the Beam on target search and object delivery tasks and demonstrate that the robot achieves a 100% success rate.
|
1910.04006
|
Eben Holderness
|
Elena Alvarez-Mellado, Eben Holderness, Nicholas Miller, Fyonn Dhang,
Philip Cawkwell, Kirsten Bolton, James Pustejovsky, Mei-Hua Hall
|
Assessing the Efficacy of Clinical Sentiment Analysis and Topic
Extraction in Psychiatric Readmission Risk Prediction
|
LOUHI @ EMNLP 2019
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Predicting which patients are more likely to be readmitted to a hospital
within 30 days after discharge is a valuable piece of information in clinical
decision-making. Building a successful readmission risk classifier based on the
content of Electronic Health Records (EHRs) has proved, however, to be a
challenging task. Previously explored features include mainly structured
information, such as sociodemographic data, comorbidity codes and physiological
variables. In this paper we assess incorporating additional clinically
interpretable NLP-based features such as topic extraction and clinical
sentiment analysis to predict early readmission risk in psychiatry patients.
|
[
{
"created": "Wed, 9 Oct 2019 14:10:47 GMT",
"version": "v1"
}
] |
2019-10-10
|
[
[
"Alvarez-Mellado",
"Elena",
""
],
[
"Holderness",
"Eben",
""
],
[
"Miller",
"Nicholas",
""
],
[
"Dhang",
"Fyonn",
""
],
[
"Cawkwell",
"Philip",
""
],
[
"Bolton",
"Kirsten",
""
],
[
"Pustejovsky",
"James",
""
],
[
"Hall",
"Mei-Hua",
""
]
] |
Predicting which patients are more likely to be readmitted to a hospital within 30 days after discharge is a valuable piece of information in clinical decision-making. Building a successful readmission risk classifier based on the content of Electronic Health Records (EHRs) has proved, however, to be a challenging task. Previously explored features include mainly structured information, such as sociodemographic data, comorbidity codes and physiological variables. In this paper we assess incorporating additional clinically interpretable NLP-based features such as topic extraction and clinical sentiment analysis to predict early readmission risk in psychiatry patients.
|
1604.08625
|
Victor Hugo Ba\~nos Gonzalez
|
Victor Ba\~nos-Gonzalez, M. Shahwaiz Afaqui, Elena Lopez-Aguilera,
Eduard Garcia-Villegas
|
Throughput and range characterization of IEEE 802.11ah
|
7 pages, 6 figures, 5 tables
| null | null | null |
cs.NI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
The most essential part of Internet of Things (IoT) infrastructure is the
wireless communication system that acts as a bridge for the delivery of data
and control messages. However, the existing wireless technologies lack the
ability to support a huge amount of data exchange from many battery driven
devices spread over a wide area. In order to support the IoT paradigm, the IEEE
802.11 standard committee is in the process of introducing a new standard, called
IEEE 802.11ah. This is one of the most promising and appealing standards, which
aims to bridge the gap between traditional mobile networks and the demands of
the IoT. In this paper, we first discuss the main PHY and MAC layer amendments
proposed for IEEE 802.11ah. Furthermore, we investigate the operability of IEEE
802.11ah as a backhaul link to connect devices over a long range. Additionally,
we compare the aforementioned standard with previous notable IEEE 802.11
amendments (i.e. IEEE 802.11n and IEEE 802.11ac) in terms of throughput (with
and without frame aggregation) by utilizing the most robust modulation schemes.
The results show an improved performance of IEEE 802.11ah (in terms of power
received at long range while experiencing different packet error rates) as
compared to previous IEEE 802.11 standards.
|
[
{
"created": "Thu, 28 Apr 2016 21:42:06 GMT",
"version": "v1"
}
] |
2016-05-02
|
[
[
"Baños-Gonzalez",
"Victor",
""
],
[
"Afaqui",
"M. Shahwaiz",
""
],
[
"Lopez-Aguilera",
"Elena",
""
],
[
"Garcia-Villegas",
"Eduard",
""
]
] |
The most essential part of Internet of Things (IoT) infrastructure is the wireless communication system that acts as a bridge for the delivery of data and control messages. However, the existing wireless technologies lack the ability to support a huge amount of data exchange from many battery driven devices spread over a wide area. In order to support the IoT paradigm, the IEEE 802.11 standard committee is in the process of introducing a new standard, called IEEE 802.11ah. This is one of the most promising and appealing standards, which aims to bridge the gap between traditional mobile networks and the demands of the IoT. In this paper, we first discuss the main PHY and MAC layer amendments proposed for IEEE 802.11ah. Furthermore, we investigate the operability of IEEE 802.11ah as a backhaul link to connect devices over a long range. Additionally, we compare the aforementioned standard with previous notable IEEE 802.11 amendments (i.e. IEEE 802.11n and IEEE 802.11ac) in terms of throughput (with and without frame aggregation) by utilizing the most robust modulation schemes. The results show an improved performance of IEEE 802.11ah (in terms of power received at long range while experiencing different packet error rates) as compared to previous IEEE 802.11 standards.
|
2208.09822
|
Jianyu Yao
|
Jianyu Yao, Boqian Shi, Chunyang Xiang, Haipeng Jia, Chendi Li, Hang
Cao, Yunquan Zhang
|
IAAT: A Input-Aware Adaptive Tuning framework for Small GEMM
| null | null | null | null |
cs.DC cs.PF
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
GEMM with the small size of input matrices is becoming widely used in many
fields like HPC and machine learning. Although many famous BLAS libraries
already support small GEMM, they cannot achieve near-optimal performance.
This is because the costs of pack operations are high and frequent boundary
processing cannot be neglected. This paper proposes an input-aware adaptive
tuning framework (IAAT) for small GEMM to overcome the performance bottlenecks
in state-of-the-art implementations. IAAT consists of two stages, the
install-time stage and the run-time stage. In the run-time stage, IAAT tiles
matrices into blocks to alleviate boundary processing. This stage utilizes an
input-aware adaptive tile algorithm and plays the role of runtime tuning. In
the install-time stage, IAAT auto-generates hundreds of kernels of different
sizes to remove pack operations. Finally, IAAT finishes the computation of
small GEMM by invoking different kernels, which corresponds to the size of
blocks. The experimental results show that IAAT gains better performance than
other BLAS libraries on the ARMv8 platform.
|
[
{
"created": "Sun, 21 Aug 2022 06:54:59 GMT",
"version": "v1"
}
] |
2022-08-23
|
[
[
"Yao",
"Jianyu",
""
],
[
"Shi",
"Boqian",
""
],
[
"Xiang",
"Chunyang",
""
],
[
"Jia",
"Haipeng",
""
],
[
"Li",
"Chendi",
""
],
[
"Cao",
"Hang",
""
],
[
"Zhang",
"Yunquan",
""
]
] |
GEMM with the small size of input matrices is becoming widely used in many fields like HPC and machine learning. Although many famous BLAS libraries already support small GEMM, they cannot achieve near-optimal performance. This is because the costs of pack operations are high and frequent boundary processing cannot be neglected. This paper proposes an input-aware adaptive tuning framework (IAAT) for small GEMM to overcome the performance bottlenecks in state-of-the-art implementations. IAAT consists of two stages, the install-time stage and the run-time stage. In the run-time stage, IAAT tiles matrices into blocks to alleviate boundary processing. This stage utilizes an input-aware adaptive tile algorithm and plays the role of runtime tuning. In the install-time stage, IAAT auto-generates hundreds of kernels of different sizes to remove pack operations. Finally, IAAT finishes the computation of small GEMM by invoking different kernels, which corresponds to the size of blocks. The experimental results show that IAAT gains better performance than other BLAS libraries on the ARMv8 platform.
|
1512.08314
|
Lan Wang
|
Olivier Brun, Lan Wang and Erol Gelenbe
|
Data Driven SMART Intercontinental Overlay Networks
|
9 pages
| null | null | null |
cs.NI cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper addresses the use of Big Data and machine learning based analytics
to the real-time management of Internet scale Quality-of-Service Route
Optimisation with the help of an overlay network. Based on the collection of
large amounts of data sampled every $2$ minutes over a large number of
source-destination pairs, we show that intercontinental Internet Protocol (IP)
paths are far from optimal with respect to Quality of Service (QoS) metrics
such as end-to-end round-trip delay. We therefore develop a machine learning
based scheme that exploits large scale data collected from communicating node
pairs in a multi-hop overlay network that uses IP between the overlay nodes
themselves, to select paths that provide substantially better QoS than IP. The
approach, inspired by the Cognitive Packet Network protocol, uses Random Neural
Networks with Reinforcement Learning based on the massive data that is
collected, to select intermediate overlay hops resulting in significantly
better QoS than IP itself. The routing scheme is illustrated on a $20$-node
intercontinental overlay network that collects close to $2\times 10^6$
measurements per week, and makes scalable distributed routing decisions.
Experimental results show that this approach improves QoS significantly and
efficiently in a scalable manner.
|
[
{
"created": "Mon, 28 Dec 2015 03:43:04 GMT",
"version": "v1"
}
] |
2015-12-29
|
[
[
"Brun",
"Olivier",
""
],
[
"Wang",
"Lan",
""
],
[
"Gelenbe",
"Erol",
""
]
] |
This paper addresses the use of Big Data and machine learning based analytics to the real-time management of Internet scale Quality-of-Service Route Optimisation with the help of an overlay network. Based on the collection of large amounts of data sampled every $2$ minutes over a large number of source-destination pairs, we show that intercontinental Internet Protocol (IP) paths are far from optimal with respect to Quality of Service (QoS) metrics such as end-to-end round-trip delay. We therefore develop a machine learning based scheme that exploits large scale data collected from communicating node pairs in a multi-hop overlay network that uses IP between the overlay nodes themselves, to select paths that provide substantially better QoS than IP. The approach, inspired by the Cognitive Packet Network protocol, uses Random Neural Networks with Reinforcement Learning based on the massive data that is collected, to select intermediate overlay hops resulting in significantly better QoS than IP itself. The routing scheme is illustrated on a $20$-node intercontinental overlay network that collects close to $2\times 10^6$ measurements per week, and makes scalable distributed routing decisions. Experimental results show that this approach improves QoS significantly and efficiently in a scalable manner.
|
1707.01068
|
Alexander Peysakhovich
|
Adam Lerer and Alexander Peysakhovich
|
Maintaining cooperation in complex social dilemmas using deep
reinforcement learning
| null | null | null | null |
cs.AI cs.GT cs.MA
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Social dilemmas are situations where individuals face a temptation to
increase their payoffs at a cost to total welfare. Building artificially
intelligent agents that achieve good outcomes in these situations is important
because many real world interactions include a tension between selfish
interests and the welfare of others. We show how to modify modern reinforcement
learning methods to construct agents that act in ways that are simple to
understand, nice (begin by cooperating), provokable (try to avoid being
exploited), and forgiving (try to return to mutual cooperation). We show both
theoretically and experimentally that such agents can maintain cooperation in
Markov social dilemmas. Our construction does not require training methods
beyond a modification of self-play, thus if an environment is such that good
strategies can be constructed in the zero-sum case (e.g., Atari) then we can
construct agents that solve social dilemmas in this environment.
|
[
{
"created": "Tue, 4 Jul 2017 17:02:05 GMT",
"version": "v1"
},
{
"created": "Mon, 31 Jul 2017 22:40:15 GMT",
"version": "v2"
},
{
"created": "Sat, 28 Oct 2017 15:23:38 GMT",
"version": "v3"
},
{
"created": "Fri, 2 Mar 2018 14:39:55 GMT",
"version": "v4"
}
] |
2018-03-05
|
[
[
"Lerer",
"Adam",
""
],
[
"Peysakhovich",
"Alexander",
""
]
] |
Social dilemmas are situations where individuals face a temptation to increase their payoffs at a cost to total welfare. Building artificially intelligent agents that achieve good outcomes in these situations is important because many real world interactions include a tension between selfish interests and the welfare of others. We show how to modify modern reinforcement learning methods to construct agents that act in ways that are simple to understand, nice (begin by cooperating), provokable (try to avoid being exploited), and forgiving (try to return to mutual cooperation). We show both theoretically and experimentally that such agents can maintain cooperation in Markov social dilemmas. Our construction does not require training methods beyond a modification of self-play, thus if an environment is such that good strategies can be constructed in the zero-sum case (e.g., Atari) then we can construct agents that solve social dilemmas in this environment.
|
2003.07240
|
Wei Wang Dr.
|
Shengkai Zhang, Wei Wang, Tao Jiang
|
WiFi-Inertial Indoor Pose Estimation for Micro Aerial Vehicles
|
To appear in IEEE Transactions on Industrial Electronics
| null | null | null |
cs.RO eess.SP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents an indoor pose estimation system for micro aerial
vehicles (MAVs) with a single WiFi access point. Conventional approaches based
on computer vision are limited by illumination conditions and environmental
texture. Our system is free of visual limitations and instantly deployable,
working upon existing WiFi infrastructure without any deployment cost. Our
system consists of two coupled modules. First, we propose an angle-of-arrival
(AoA) estimation algorithm to estimate MAV attitudes and disentangle the AoA
for positioning. Second, we formulate a WiFi-inertial sensor fusion model that
fuses the AoA and the odometry measured by inertial sensors to optimize MAV
poses. Considering the practicality of MAVs, our system is designed to be
real-time and initialization-free for the need of agile flight in unknown
environments. The indoor experiments show that our system achieves the accuracy
of pose estimation with the position error of $61.7$ cm and the attitude error
of $0.92^\circ$.
|
[
{
"created": "Mon, 16 Mar 2020 14:05:18 GMT",
"version": "v1"
}
] |
2020-03-17
|
[
[
"Zhang",
"Shengkai",
""
],
[
"Wang",
"Wei",
""
],
[
"Jiang",
"Tao",
""
]
] |
This paper presents an indoor pose estimation system for micro aerial vehicles (MAVs) with a single WiFi access point. Conventional approaches based on computer vision are limited by illumination conditions and environmental texture. Our system is free of visual limitations and instantly deployable, working upon existing WiFi infrastructure without any deployment cost. Our system consists of two coupled modules. First, we propose an angle-of-arrival (AoA) estimation algorithm to estimate MAV attitudes and disentangle the AoA for positioning. Second, we formulate a WiFi-inertial sensor fusion model that fuses the AoA and the odometry measured by inertial sensors to optimize MAV poses. Considering the practicality of MAVs, our system is designed to be real-time and initialization-free for the need of agile flight in unknown environments. The indoor experiments show that our system achieves the accuracy of pose estimation with the position error of $61.7$ cm and the attitude error of $0.92^\circ$.
|
2407.08134
|
Amir Noorizadegan Ph.D.
|
A. Noorizadegan, Y.C. Hon, D.L. Young, C.S. Chen
|
Highway Networks for Improved Surface Reconstruction: The Role of
Residuals and Weight Updates
| null | null | null | null |
cs.CV cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Surface reconstruction from point clouds is a fundamental challenge in
computer graphics and medical imaging. In this paper, we explore the
application of advanced neural network architectures for the accurate and
efficient reconstruction of surfaces from data points. We introduce a novel
variant of the Highway network (Hw) called Square-Highway (SqrHw) within the
context of multilayer perceptrons and investigate its performance alongside
plain neural networks and a simplified Hw in various numerical examples. These
examples include the reconstruction of simple and complex surfaces, such as
spheres, human hands, and intricate models like the Stanford Bunny. We analyze
the impact of factors such as the number of hidden layers, interior and
exterior points, and data distribution on surface reconstruction quality. Our
results show that the proposed SqrHw architecture outperforms other neural
network configurations, achieving faster convergence and higher-quality surface
reconstructions. Additionally, we demonstrate the SqrHw's ability to predict
surfaces over missing data, a valuable feature for challenging applications
like medical imaging. Furthermore, our study delves into further details,
demonstrating that the proposed method based on highway networks yields more
stable weight norms and backpropagation gradients compared to the Plain Network
architecture. This research not only advances the field of computer graphics
but also holds utility for other purposes such as function interpolation and
physics-informed neural networks, which integrate multilayer perceptrons into
their algorithms.
|
[
{
"created": "Thu, 11 Jul 2024 02:15:21 GMT",
"version": "v1"
}
] |
2024-07-12
|
[
[
"Noorizadegan",
"A.",
""
],
[
"Hon",
"Y. C.",
""
],
[
"Young",
"D. L.",
""
],
[
"Chen",
"C. S.",
""
]
] |
Surface reconstruction from point clouds is a fundamental challenge in computer graphics and medical imaging. In this paper, we explore the application of advanced neural network architectures for the accurate and efficient reconstruction of surfaces from data points. We introduce a novel variant of the Highway network (Hw) called Square-Highway (SqrHw) within the context of multilayer perceptrons and investigate its performance alongside plain neural networks and a simplified Hw in various numerical examples. These examples include the reconstruction of simple and complex surfaces, such as spheres, human hands, and intricate models like the Stanford Bunny. We analyze the impact of factors such as the number of hidden layers, interior and exterior points, and data distribution on surface reconstruction quality. Our results show that the proposed SqrHw architecture outperforms other neural network configurations, achieving faster convergence and higher-quality surface reconstructions. Additionally, we demonstrate the SqrHw's ability to predict surfaces over missing data, a valuable feature for challenging applications like medical imaging. Furthermore, a more detailed analysis demonstrates that the proposed method based on highway networks yields more stable weight norms and backpropagation gradients compared to the Plain Network architecture. This research not only advances the field of computer graphics but also holds utility for other purposes such as function interpolation and physics-informed neural networks, which integrate multilayer perceptrons into their algorithms.
|
2309.00616
|
Zhening Huang
|
Zhening Huang, Xiaoyang Wu, Xi Chen, Hengshuang Zhao, Lei Zhu, Joan
Lasenby
|
OpenIns3D: Snap and Lookup for 3D Open-vocabulary Instance Segmentation
|
ECCV 2024. Project page: https://zheninghuang.github.io/OpenIns3D/
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this work, we introduce OpenIns3D, a new 3D-input-only framework for 3D
open-vocabulary scene understanding. The OpenIns3D framework employs a
"Mask-Snap-Lookup" scheme. The "Mask" module learns class-agnostic mask
proposals in 3D point clouds, the "Snap" module generates synthetic scene-level
images at multiple scales and leverages 2D vision-language models to extract
interesting objects, and the "Lookup" module searches through the outcomes of
"Snap" to assign category names to the proposed masks. This approach, though
simple, achieves state-of-the-art performance across a wide range of 3D
open-vocabulary tasks, including recognition, object detection, and instance
segmentation, on both indoor and outdoor datasets. Moreover, OpenIns3D
facilitates effortless switching between different 2D detectors without
requiring retraining. When integrated with powerful 2D open-world models, it
achieves excellent results in scene understanding tasks. Furthermore, when
combined with LLM-powered 2D models, OpenIns3D exhibits an impressive
capability to comprehend and process highly complex text queries that demand
intricate reasoning and real-world knowledge. Project page:
https://zheninghuang.github.io/OpenIns3D/
|
[
{
"created": "Fri, 1 Sep 2023 17:59:56 GMT",
"version": "v1"
},
{
"created": "Mon, 4 Sep 2023 17:59:54 GMT",
"version": "v2"
},
{
"created": "Thu, 5 Oct 2023 15:15:58 GMT",
"version": "v3"
},
{
"created": "Wed, 17 Jul 2024 15:05:38 GMT",
"version": "v4"
},
{
"created": "Mon, 12 Aug 2024 16:58:33 GMT",
"version": "v5"
}
] |
2024-08-13
|
[
[
"Huang",
"Zhening",
""
],
[
"Wu",
"Xiaoyang",
""
],
[
"Chen",
"Xi",
""
],
[
"Zhao",
"Hengshuang",
""
],
[
"Zhu",
"Lei",
""
],
[
"Lasenby",
"Joan",
""
]
] |
In this work, we introduce OpenIns3D, a new 3D-input-only framework for 3D open-vocabulary scene understanding. The OpenIns3D framework employs a "Mask-Snap-Lookup" scheme. The "Mask" module learns class-agnostic mask proposals in 3D point clouds, the "Snap" module generates synthetic scene-level images at multiple scales and leverages 2D vision-language models to extract interesting objects, and the "Lookup" module searches through the outcomes of "Snap" to assign category names to the proposed masks. This approach, though simple, achieves state-of-the-art performance across a wide range of 3D open-vocabulary tasks, including recognition, object detection, and instance segmentation, on both indoor and outdoor datasets. Moreover, OpenIns3D facilitates effortless switching between different 2D detectors without requiring retraining. When integrated with powerful 2D open-world models, it achieves excellent results in scene understanding tasks. Furthermore, when combined with LLM-powered 2D models, OpenIns3D exhibits an impressive capability to comprehend and process highly complex text queries that demand intricate reasoning and real-world knowledge. Project page: https://zheninghuang.github.io/OpenIns3D/
|
1904.07793
|
Xiaosen Wang
|
Xiaosen Wang, Kun He, Chuanbiao Song, Liwei Wang, John E. Hopcroft
|
AT-GAN: An Adversarial Generator Model for Non-constrained Adversarial
Examples
|
15 pages, 6 figures
| null | null | null |
cs.CV cs.CR cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Despite the rapid development of adversarial machine learning, most
adversarial attack and defense research mainly focuses on
perturbation-based adversarial examples, which are constrained by the input
images. In comparison with existing works, we propose non-constrained
adversarial examples, which are generated entirely from scratch without any
constraint on the input. Unlike perturbation-based attacks, or the so-called
unrestricted adversarial attack which is still constrained by the input noise,
we aim to learn the distribution of adversarial examples to generate
non-constrained but semantically meaningful adversarial examples. Following
this spirit, we propose a novel attack framework called AT-GAN (Adversarial
Transfer on Generative Adversarial Net). Specifically, we first develop a
normal GAN model to learn the distribution of benign data, and then transfer
the pre-trained GAN model to estimate the distribution of adversarial examples
for the target model. In this way, AT-GAN can learn the distribution of
adversarial examples that is very close to the distribution of real data. To
our knowledge, this is the first work of building an adversarial generator
model that could produce adversarial examples directly from any input noise.
Extensive experiments and visualizations show that the proposed AT-GAN can very
efficiently generate diverse adversarial examples that are more realistic to
human perception. In addition, AT-GAN yields higher attack success rates
against adversarially trained models under the white-box attack setting and
exhibits moderate transferability against black-box models.
|
[
{
"created": "Tue, 16 Apr 2019 16:26:19 GMT",
"version": "v1"
},
{
"created": "Wed, 17 Apr 2019 02:19:07 GMT",
"version": "v2"
},
{
"created": "Tue, 21 May 2019 15:26:32 GMT",
"version": "v3"
},
{
"created": "Fri, 7 Feb 2020 18:11:58 GMT",
"version": "v4"
}
] |
2020-02-10
|
[
[
"Wang",
"Xiaosen",
""
],
[
"He",
"Kun",
""
],
[
"Song",
"Chuanbiao",
""
],
[
"Wang",
"Liwei",
""
],
[
"Hopcroft",
"John E.",
""
]
] |
Despite the rapid development of adversarial machine learning, most adversarial attack and defense research mainly focuses on perturbation-based adversarial examples, which are constrained by the input images. In comparison with existing works, we propose non-constrained adversarial examples, which are generated entirely from scratch without any constraint on the input. Unlike perturbation-based attacks, or the so-called unrestricted adversarial attack which is still constrained by the input noise, we aim to learn the distribution of adversarial examples to generate non-constrained but semantically meaningful adversarial examples. Following this spirit, we propose a novel attack framework called AT-GAN (Adversarial Transfer on Generative Adversarial Net). Specifically, we first develop a normal GAN model to learn the distribution of benign data, and then transfer the pre-trained GAN model to estimate the distribution of adversarial examples for the target model. In this way, AT-GAN can learn the distribution of adversarial examples that is very close to the distribution of real data. To our knowledge, this is the first work of building an adversarial generator model that could produce adversarial examples directly from any input noise. Extensive experiments and visualizations show that the proposed AT-GAN can very efficiently generate diverse adversarial examples that are more realistic to human perception. In addition, AT-GAN yields higher attack success rates against adversarially trained models under the white-box attack setting and exhibits moderate transferability against black-box models.
|
2201.08378
|
Ruslan Nikolaev
|
Ruslan Nikolaev, Hassan Nadeem, Cathlyn Stone, Binoy Ravindran
|
Adelie: Continuous Address Space Layout Re-randomization for Linux
Drivers
|
27th ACM International Conference on Architectural Support for
Programming Languages and Operating Systems (ASPLOS '22), February 28 - March
4, 2022, Lausanne, Switzerland
| null |
10.1145/3503222.3507779
| null |
cs.OS cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
While address space layout randomization (ASLR) has been extensively studied
for user-space programs, the corresponding OS kernel's KASLR support remains
very limited, making the kernel vulnerable to just-in-time (JIT)
return-oriented programming (ROP) attacks. Furthermore, commodity OSs such as
Linux restrict their KASLR range to 32 bits due to architectural constraints
(e.g., x86-64 only supports 32-bit immediate operands for most instructions),
which makes them vulnerable to even unsophisticated brute-force ROP attacks due
to low entropy. Most in-kernel pointers remain static, exacerbating the problem
when pointers are leaked.
Adelie, our kernel defense mechanism, overcomes KASLR limitations, increases
KASLR entropy, and makes successful ROP attacks on the Linux kernel much harder
to achieve. First, Adelie enables the position-independent code (PIC) model so
that the kernel and its modules can be placed anywhere in the 64-bit virtual
address space, at any distance apart from each other. Second, Adelie implements
stack re-randomization and address encryption on modules. Finally, Adelie
enables efficient continuous KASLR for modules by using the PIC model to make
it (almost) impossible to inject ROP gadgets through these modules regardless
of the gadget's origin.
Since device drivers (typically compiled as modules) are often developed by
third parties and are typically less tested than core OS parts, they are also
often more vulnerable. By fully re-randomizing device drivers, the last two
contributions together prevent most JIT ROP attacks since vulnerable modules
are very likely to be a starting point of an attack. Furthermore, some OS
instances in virtualized environments are specifically designated to run device
drivers, where drivers are the primary target of JIT ROP attacks. Our
evaluation shows high efficiency of Adelie's approach.
[full abstract is in the paper]
|
[
{
"created": "Thu, 20 Jan 2022 18:58:44 GMT",
"version": "v1"
}
] |
2022-01-21
|
[
[
"Nikolaev",
"Ruslan",
""
],
[
"Nadeem",
"Hassan",
""
],
[
"Stone",
"Cathlyn",
""
],
[
"Ravindran",
"Binoy",
""
]
] |
While address space layout randomization (ASLR) has been extensively studied for user-space programs, the corresponding OS kernel's KASLR support remains very limited, making the kernel vulnerable to just-in-time (JIT) return-oriented programming (ROP) attacks. Furthermore, commodity OSs such as Linux restrict their KASLR range to 32 bits due to architectural constraints (e.g., x86-64 only supports 32-bit immediate operands for most instructions), which makes them vulnerable to even unsophisticated brute-force ROP attacks due to low entropy. Most in-kernel pointers remain static, exacerbating the problem when pointers are leaked. Adelie, our kernel defense mechanism, overcomes KASLR limitations, increases KASLR entropy, and makes successful ROP attacks on the Linux kernel much harder to achieve. First, Adelie enables the position-independent code (PIC) model so that the kernel and its modules can be placed anywhere in the 64-bit virtual address space, at any distance apart from each other. Second, Adelie implements stack re-randomization and address encryption on modules. Finally, Adelie enables efficient continuous KASLR for modules by using the PIC model to make it (almost) impossible to inject ROP gadgets through these modules regardless of the gadget's origin. Since device drivers (typically compiled as modules) are often developed by third parties and are typically less tested than core OS parts, they are also often more vulnerable. By fully re-randomizing device drivers, the last two contributions together prevent most JIT ROP attacks since vulnerable modules are very likely to be a starting point of an attack. Furthermore, some OS instances in virtualized environments are specifically designated to run device drivers, where drivers are the primary target of JIT ROP attacks. Our evaluation shows high efficiency of Adelie's approach. [full abstract is in the paper]
|
1901.02873
|
Omur Ozel
|
Peng Zou and Omur Ozel and Suresh Subramaniam
|
Waiting before Serving: A Companion to Packet Management in Status
Update Systems
| null | null | null | null |
cs.IT cs.NI math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we explore the potential of server waiting before packet
transmission in improving the Age of Information (AoI) in status update
systems. We consider a non-preemptive queue with Poisson arrivals and
independent general service distribution and we incorporate waiting before
serving in two packet management schemes: M/GI/1/1 and M/GI/1/$2^*$. In the
M/GI/1/1 scheme, the server waits for a deterministic time immediately after a
packet enters the server. In the M/GI/1/$2^*$ scheme, depending on the idle or
busy system state, the server waits for a deterministic time before starting
service of the packet. In both cases, if a newer arrival is captured, the
existing packet is discarded. Different from most existing works, we analyze AoI
evolution by indexing the incoming packets, which is enabled by an alternative
method of partitioning the area under the evolution of instantaneous AoI to
calculate its time average. We obtain expressions for average and average peak
AoI for both queueing disciplines with waiting. Our numerical results
demonstrate that waiting before service can bring significant improvement in
average age, particularly, for heavy-tailed service distributions. This
improvement comes at the expense of an increase in average peak AoI. We
highlight the trade-off between average and average peak AoI generated by
waiting before serving.
|
[
{
"created": "Wed, 9 Jan 2019 18:46:44 GMT",
"version": "v1"
},
{
"created": "Fri, 1 Feb 2019 14:54:46 GMT",
"version": "v2"
},
{
"created": "Tue, 5 Mar 2019 16:31:08 GMT",
"version": "v3"
},
{
"created": "Mon, 22 Apr 2019 14:32:30 GMT",
"version": "v4"
}
] |
2019-04-23
|
[
[
"Zou",
"Peng",
""
],
[
"Ozel",
"Omur",
""
],
[
"Subramaniam",
"Suresh",
""
]
] |
In this paper, we explore the potential of server waiting before packet transmission in improving the Age of Information (AoI) in status update systems. We consider a non-preemptive queue with Poisson arrivals and independent general service distribution and we incorporate waiting before serving in two packet management schemes: M/GI/1/1 and M/GI/1/$2^*$. In the M/GI/1/1 scheme, the server waits for a deterministic time immediately after a packet enters the server. In the M/GI/1/$2^*$ scheme, depending on the idle or busy system state, the server waits for a deterministic time before starting service of the packet. In both cases, if a newer arrival is captured, the existing packet is discarded. Different from most existing works, we analyze AoI evolution by indexing the incoming packets, which is enabled by an alternative method of partitioning the area under the evolution of instantaneous AoI to calculate its time average. We obtain expressions for average and average peak AoI for both queueing disciplines with waiting. Our numerical results demonstrate that waiting before service can bring significant improvement in average age, particularly, for heavy-tailed service distributions. This improvement comes at the expense of an increase in average peak AoI. We highlight the trade-off between average and average peak AoI generated by waiting before serving.
|
2207.10180
|
Feng Liu
|
Feng Liu, Minchul Kim, Anil Jain, and Xiaoming Liu
|
Controllable and Guided Face Synthesis for Unconstrained Face
Recognition
|
to be published in ECCV 2022
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Although significant advances have been made in face recognition (FR), FR in
unconstrained environments remains challenging due to the domain gap between
the semi-constrained training datasets and unconstrained testing scenarios. To
address this problem, we propose a controllable face synthesis model (CFSM)
that can mimic the distribution of target datasets in a style latent space.
CFSM learns a linear subspace with orthogonal bases in the style latent space
with precise control over the diversity and degree of synthesis. Furthermore,
the pre-trained synthesis model can be guided by the FR model, making the
resulting images more beneficial for FR model training. Besides, target dataset
distributions are characterized by the learned orthogonal bases, which can be
utilized to measure the distributional similarity among face datasets. Our
approach yields significant performance gains on unconstrained benchmarks, such
as IJB-B, IJB-C, TinyFace and IJB-S (+5.76% Rank1).
|
[
{
"created": "Wed, 20 Jul 2022 20:13:29 GMT",
"version": "v1"
}
] |
2022-07-22
|
[
[
"Liu",
"Feng",
""
],
[
"Kim",
"Minchul",
""
],
[
"Jain",
"Anil",
""
],
[
"Liu",
"Xiaoming",
""
]
] |
Although significant advances have been made in face recognition (FR), FR in unconstrained environments remains challenging due to the domain gap between the semi-constrained training datasets and unconstrained testing scenarios. To address this problem, we propose a controllable face synthesis model (CFSM) that can mimic the distribution of target datasets in a style latent space. CFSM learns a linear subspace with orthogonal bases in the style latent space with precise control over the diversity and degree of synthesis. Furthermore, the pre-trained synthesis model can be guided by the FR model, making the resulting images more beneficial for FR model training. Besides, target dataset distributions are characterized by the learned orthogonal bases, which can be utilized to measure the distributional similarity among face datasets. Our approach yields significant performance gains on unconstrained benchmarks, such as IJB-B, IJB-C, TinyFace and IJB-S (+5.76% Rank1).
|
1903.03408
|
Marc Maliar
|
Marc Maliar
|
How Machine (Deep) Learning Helps Us Understand Human Learning: the
Value of Big Ideas
|
17 pages
| null | null | null |
cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
I use simulation of two multilayer neural networks to gain intuition into the
determinants of human learning. The first network, the teacher, is trained to
achieve a high accuracy in handwritten digit recognition. The second network,
the student, learns to reproduce the output of the first network. I show that
learning from the teacher is more effective than learning from the data under
the appropriate degree of regularization. Regularization allows the teacher to
distinguish the trends and to deliver "big ideas" to the student. I also model
other learning situations such as expert and novice teachers, high- and
low-ability students and biased learning experience due to, e.g., poverty and
trauma. The results from computer simulation accord remarkably well with
findings of the modern psychological literature. The code is written in MATLAB
and will be publicly available from the author's web page.
|
[
{
"created": "Sat, 16 Feb 2019 16:06:42 GMT",
"version": "v1"
},
{
"created": "Wed, 20 Mar 2019 20:55:49 GMT",
"version": "v2"
}
] |
2019-03-22
|
[
[
"Maliar",
"Marc",
""
]
] |
I use simulation of two multilayer neural networks to gain intuition into the determinants of human learning. The first network, the teacher, is trained to achieve a high accuracy in handwritten digit recognition. The second network, the student, learns to reproduce the output of the first network. I show that learning from the teacher is more effective than learning from the data under the appropriate degree of regularization. Regularization allows the teacher to distinguish the trends and to deliver "big ideas" to the student. I also model other learning situations such as expert and novice teachers, high- and low-ability students and biased learning experience due to, e.g., poverty and trauma. The results from computer simulation accord remarkably well with findings of the modern psychological literature. The code is written in MATLAB and will be publicly available from the author's web page.
|
2201.03331
|
Christian Ponte-Fern\'andez
|
Christian Ponte-Fern\'andez (1), Jorge Gonz\'alez-Dom\'inguez (1) and
Mar\'ia J. Mart\'in (1) ((1) Universidade da Coru\~na, CITIC, Computer
Architecture Group, A Coru\~na, Spain)
|
Fiuncho: a program for any-order epistasis detection in CPU clusters
|
Submitted to The Journal of Supercomputing. Source code available at
https://github.com/UDC-GAC/fiuncho
| null | null | null |
cs.DC cs.CE
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Epistasis can be defined as the statistical interaction of genes during the
expression of a phenotype. It is believed that it plays a fundamental role in
gene expression, as individual genetic variants have been reported to confer
only a very small increase in disease risk in previous Genome-Wide Association
Studies. The most
successful approach to epistasis detection is the exhaustive method, although
its exponential time complexity requires a highly parallel implementation in
order to be used. This work presents Fiuncho, a program that exploits all
levels of parallelism present in \textit{x86\_64} CPU clusters in order to
mitigate the complexity of this approach. It supports epistasis interactions of
any order, and when compared with other exhaustive methods, it is on average
358, 7 and 3 times faster than MDR, MPI3SNP and BitEpi, respectively.
|
[
{
"created": "Mon, 10 Jan 2022 13:19:31 GMT",
"version": "v1"
},
{
"created": "Mon, 7 Mar 2022 16:22:12 GMT",
"version": "v2"
},
{
"created": "Tue, 8 Mar 2022 17:07:11 GMT",
"version": "v3"
}
] |
2022-03-09
|
[
[
"Ponte-Fernández",
"Christian",
""
],
[
"González-Domínguez",
"Jorge",
""
],
[
"Martín",
"María J.",
""
]
] |
Epistasis can be defined as the statistical interaction of genes during the expression of a phenotype. It is believed that it plays a fundamental role in gene expression, as individual genetic variants have been reported to confer only a very small increase in disease risk in previous Genome-Wide Association Studies. The most successful approach to epistasis detection is the exhaustive method, although its exponential time complexity requires a highly parallel implementation in order to be used. This work presents Fiuncho, a program that exploits all levels of parallelism present in \textit{x86\_64} CPU clusters in order to mitigate the complexity of this approach. It supports epistasis interactions of any order, and when compared with other exhaustive methods, it is on average 358, 7 and 3 times faster than MDR, MPI3SNP and BitEpi, respectively.
|
2105.11527
|
Qi Qian
|
Qi Qian, Yuanhong Xu, Juhua Hu, Hao Li, Rong Jin
|
Unsupervised Visual Representation Learning by Online Constrained
K-Means
|
accepted by CVPR'22
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Cluster discrimination is an effective pretext task for unsupervised
representation learning, which often consists of two phases: clustering and
discrimination. Clustering is to assign each instance a pseudo label that will
be used to learn representations in discrimination. The main challenge resides
in clustering since prevalent clustering methods (e.g., k-means) have to run in
a batch mode. Besides, there can be a trivial solution consisting of a
dominating cluster. To address these challenges, we first investigate the
objective of clustering-based representation learning. Based on this, we
propose a novel clustering-based pretext task with online \textbf{Co}nstrained
\textbf{K}-m\textbf{e}ans (\textbf{CoKe}). Compared with the balanced
clustering that each cluster has exactly the same size, we only constrain the
minimal size of each cluster to flexibly capture the inherent data structure.
More importantly, our online assignment method has a theoretical guarantee to
approach the global optimum. By decoupling clustering and discrimination, CoKe
can achieve competitive performance when optimizing with only a single view
from each instance. Extensive experiments on ImageNet and other benchmark data
sets verify both the efficacy and efficiency of our proposal. Code is available
at \url{https://github.com/idstcv/CoKe}.
|
[
{
"created": "Mon, 24 May 2021 20:38:32 GMT",
"version": "v1"
},
{
"created": "Mon, 27 Dec 2021 18:37:45 GMT",
"version": "v2"
},
{
"created": "Mon, 28 Mar 2022 20:15:05 GMT",
"version": "v3"
}
] |
2022-03-30
|
[
[
"Qian",
"Qi",
""
],
[
"Xu",
"Yuanhong",
""
],
[
"Hu",
"Juhua",
""
],
[
"Li",
"Hao",
""
],
[
"Jin",
"Rong",
""
]
] |
Cluster discrimination is an effective pretext task for unsupervised representation learning, which often consists of two phases: clustering and discrimination. Clustering is to assign each instance a pseudo label that will be used to learn representations in discrimination. The main challenge resides in clustering since prevalent clustering methods (e.g., k-means) have to run in a batch mode. Besides, there can be a trivial solution consisting of a dominating cluster. To address these challenges, we first investigate the objective of clustering-based representation learning. Based on this, we propose a novel clustering-based pretext task with online \textbf{Co}nstrained \textbf{K}-m\textbf{e}ans (\textbf{CoKe}). Compared with the balanced clustering that each cluster has exactly the same size, we only constrain the minimal size of each cluster to flexibly capture the inherent data structure. More importantly, our online assignment method has a theoretical guarantee to approach the global optimum. By decoupling clustering and discrimination, CoKe can achieve competitive performance when optimizing with only a single view from each instance. Extensive experiments on ImageNet and other benchmark data sets verify both the efficacy and efficiency of our proposal. Code is available at \url{https://github.com/idstcv/CoKe}.
|
1706.09297
|
Swapnil Dhamal
|
Swapnil Dhamal, Walid Ben-Ameur, Tijani Chahed, and Eitan Altman
|
Optimal Investment Strategies for Competing Camps in a Social Network: A
Broad Framework
|
The original version of this paper is accepted for publication in
IEEE Transactions on Network Science and Engineering. The copyright for this
article belongs to IEEE
| null |
10.1109/TNSE.2018.2864575
| null |
cs.SI physics.soc-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We study the problem of optimally investing in nodes of a social network in a
competitive setting, wherein two camps aim to drive the average opinion of the
population in their own favor. Using a well-established model of opinion
dynamics, we formulate the problem as a zero-sum game with its players being
the two camps. We derive optimal investment strategies for both camps, and show
that a random investment strategy is optimal when the underlying network
follows a popular class of weight distributions. We study a broad framework,
where we consider various well-motivated settings of the problem, namely, when
the influence of a camp on a node is a concave function of its investment on
that node, when a camp aims at maximizing competitor's investment or deviation
from its desired investment, and when one of the camps has uncertain
information about the values of the model parameters. We also study a
Stackelberg variant of this game under common coupled constraints on the
combined investments by the camps and derive their equilibrium strategies, and
hence quantify the first-mover advantage. For a quantitative and illustrative
study, we conduct simulations on real-world datasets and provide results and
insights.
|
[
{
"created": "Wed, 28 Jun 2017 14:02:41 GMT",
"version": "v1"
},
{
"created": "Fri, 5 Jan 2018 14:46:20 GMT",
"version": "v2"
},
{
"created": "Fri, 16 Feb 2018 18:19:59 GMT",
"version": "v3"
},
{
"created": "Sun, 24 Jun 2018 09:22:12 GMT",
"version": "v4"
},
{
"created": "Fri, 10 Aug 2018 01:49:10 GMT",
"version": "v5"
}
] |
2018-08-13
|
[
[
"Dhamal",
"Swapnil",
""
],
[
"Ben-Ameur",
"Walid",
""
],
[
"Chahed",
"Tijani",
""
],
[
"Altman",
"Eitan",
""
]
] |
We study the problem of optimally investing in nodes of a social network in a competitive setting, wherein two camps aim to drive the average opinion of the population in their own favor. Using a well-established model of opinion dynamics, we formulate the problem as a zero-sum game with its players being the two camps. We derive optimal investment strategies for both camps, and show that a random investment strategy is optimal when the underlying network follows a popular class of weight distributions. We study a broad framework, where we consider various well-motivated settings of the problem, namely, when the influence of a camp on a node is a concave function of its investment on that node, when a camp aims at maximizing competitor's investment or deviation from its desired investment, and when one of the camps has uncertain information about the values of the model parameters. We also study a Stackelberg variant of this game under common coupled constraints on the combined investments by the camps and derive their equilibrium strategies, and hence quantify the first-mover advantage. For a quantitative and illustrative study, we conduct simulations on real-world datasets and provide results and insights.
|
2403.14702
|
Achraf Hsain Him
|
Achraf Hsain and Hamza El Housni
|
Large language model-powered chatbots for internationalizing student
support in higher education
|
Key Words: Chatbot, Higher Education, Large Language model, Student
Support, Information retrieval. Presented in the conference: The
Internationalization of Higher Education and Digital Transformation:
Addressing Current and Future Possibilities in Oujda, Morocco
| null | null | null |
cs.CY cs.IR
|
http://creativecommons.org/licenses/by/4.0/
|
This research explores the integration of chatbot technology powered by
GPT-3.5 and GPT-4 Turbo into higher education to enhance internationalization
and leverage digital transformation. It delves into the design, implementation,
and application of Large Language Models (LLMs) for improving student
engagement, information access, and support. Utilizing technologies like Python
3, GPT API, LangChain, and Chroma Vector Store, the research emphasizes
creating a high-quality, timely, and relevant transcript dataset for chatbot
testing. Findings indicate the chatbot's efficacy in providing comprehensive
responses, its preference over traditional methods by users, and a low error
rate. Highlighting the chatbot's real-time engagement, memory capabilities, and
critical data access, the study demonstrates its potential to elevate
accessibility, efficiency, and satisfaction. Concluding, the research suggests
the chatbot significantly aids higher education internationalization, proposing
further investigation into digital technology's role in educational enhancement
and strategy development.
|
[
{
"created": "Sat, 16 Mar 2024 23:50:19 GMT",
"version": "v1"
}
] |
2024-03-25
|
[
[
"Hsain",
"Achraf",
""
],
[
"Housni",
"Hamza El",
""
]
] |
This research explores the integration of chatbot technology powered by GPT-3.5 and GPT-4 Turbo into higher education to enhance internationalization and leverage digital transformation. It delves into the design, implementation, and application of Large Language Models (LLMs) for improving student engagement, information access, and support. Utilizing technologies like Python 3, GPT API, LangChain, and Chroma Vector Store, the research emphasizes creating a high-quality, timely, and relevant transcript dataset for chatbot testing. Findings indicate the chatbot's efficacy in providing comprehensive responses, its preference over traditional methods by users, and a low error rate. Highlighting the chatbot's real-time engagement, memory capabilities, and critical data access, the study demonstrates its potential to elevate accessibility, efficiency, and satisfaction. Concluding, the research suggests the chatbot significantly aids higher education internationalization, proposing further investigation into digital technology's role in educational enhancement and strategy development.
|
2306.11964
|
Anay Mehrotra
|
Sruthi Gorantla, Anay Mehrotra, Amit Deshpande, Anand Louis
|
Sampling Individually-Fair Rankings that are Always Group Fair
|
Full version of a paper accepted for presentation in ACM AIES 2023
| null | null | null |
cs.CY cs.DS cs.IR cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Rankings on online platforms help their end-users find the relevant
information -- people, news, media, and products -- quickly. Fair ranking
tasks, which ask to rank a set of items to maximize utility subject to
satisfying group-fairness constraints, have gained significant interest in the
Algorithmic Fairness, Information Retrieval, and Machine Learning literature.
Recent works, however, identify uncertainty in the utilities of items as a
primary cause of unfairness and propose introducing randomness in the output.
This randomness is carefully chosen to guarantee an adequate representation of
each item (while accounting for the uncertainty). However, due to this
randomness, the output rankings may violate group fairness constraints. We give
an efficient algorithm that samples rankings from an individually-fair
distribution while ensuring that every output ranking is group fair. The
expected utility of the output ranking is at least $\alpha$ times the utility
of the optimal fair solution. Here, $\alpha$ depends on the utilities,
position-discounts, and constraints -- it approaches 1 as the range of
utilities or the position-discounts shrinks, or when utilities satisfy
distributional assumptions. Empirically, we observe that our algorithm achieves
individual and group fairness and that it Pareto dominates the state-of-the-art
baselines.
|
[
{
"created": "Wed, 21 Jun 2023 01:26:34 GMT",
"version": "v1"
}
] |
2023-06-22
|
[
[
"Gorantla",
"Sruthi",
""
],
[
"Mehrotra",
"Anay",
""
],
[
"Deshpande",
"Amit",
""
],
[
"Louis",
"Anand",
""
]
] |
Rankings on online platforms help their end-users find the relevant information -- people, news, media, and products -- quickly. Fair ranking tasks, which ask to rank a set of items to maximize utility subject to satisfying group-fairness constraints, have gained significant interest in the Algorithmic Fairness, Information Retrieval, and Machine Learning literature. Recent works, however, identify uncertainty in the utilities of items as a primary cause of unfairness and propose introducing randomness in the output. This randomness is carefully chosen to guarantee an adequate representation of each item (while accounting for the uncertainty). However, due to this randomness, the output rankings may violate group fairness constraints. We give an efficient algorithm that samples rankings from an individually-fair distribution while ensuring that every output ranking is group fair. The expected utility of the output ranking is at least $\alpha$ times the utility of the optimal fair solution. Here, $\alpha$ depends on the utilities, position-discounts, and constraints -- it approaches 1 as the range of utilities or the position-discounts shrinks, or when utilities satisfy distributional assumptions. Empirically, we observe that our algorithm achieves individual and group fairness and that it Pareto dominates the state-of-the-art baselines.
|
2211.07387
|
Steven Bilaj
|
Steven Bilaj, Sofien Dhouib, Setareh Maghsudi
|
Hypothesis Transfer in Bandits by Weighted Models
|
16 pages, 6 figures, published in the European Conference on Machine
Learning and Principles and Practice of Knowledge Discovery in Databases 2022
| null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We consider the problem of contextual multi-armed bandits in the setting of
hypothesis transfer learning. That is, we assume having access to a previously
learned model on an unobserved set of contexts, and we leverage it in order to
accelerate exploration on a new bandit problem. Our transfer strategy is based
on a re-weighting scheme for which we show a reduction in the regret over the
classic Linear UCB when transfer is desired, while recovering the classic
regret rate when the two tasks are unrelated. We further extend this method to
an arbitrary number of source models, where the algorithm decides which model
is preferred at each time step. Additionally we discuss an approach where a
dynamic convex combination of source models is given in terms of a biased
regularization term in the classic LinUCB algorithm. The algorithms and the
theoretical analysis of our proposed methods are substantiated by empirical
evaluations on simulated and real-world data.
|
[
{
"created": "Mon, 14 Nov 2022 14:13:02 GMT",
"version": "v1"
}
] |
2022-11-15
|
[
[
"Bilaj",
"Steven",
""
],
[
"Dhouib",
"Sofien",
""
],
[
"Maghsudi",
"Setareh",
""
]
] |
We consider the problem of contextual multi-armed bandits in the setting of hypothesis transfer learning. That is, we assume having access to a previously learned model on an unobserved set of contexts, and we leverage it in order to accelerate exploration on a new bandit problem. Our transfer strategy is based on a re-weighting scheme for which we show a reduction in the regret over the classic Linear UCB when transfer is desired, while recovering the classic regret rate when the two tasks are unrelated. We further extend this method to an arbitrary number of source models, where the algorithm decides which model is preferred at each time step. Additionally we discuss an approach where a dynamic convex combination of source models is given in terms of a biased regularization term in the classic LinUCB algorithm. The algorithms and the theoretical analysis of our proposed methods are substantiated by empirical evaluations on simulated and real-world data.
|
2205.09573
|
Suryadi -
|
Suryadi, Yew-Soon Ong, Lock Yue Chew
|
Jacobian Granger Causal Neural Networks for Analysis of Stationary and
Nonstationary Data
| null | null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Granger causality is a commonly used method for uncovering information flow
and dependencies in a time series. Here we introduce JGC (Jacobian Granger
Causality), a neural network-based approach to Granger causality using the
Jacobian as a measure of variable importance, and propose a thresholding
procedure for inferring Granger causal variables using this measure. The
resulting approach performs consistently well compared to other approaches in
identifying Granger causal variables, the associated time lags, as well as
interaction signs. Lastly, through the inclusion of a time variable, we show
that this approach is able to learn the temporal dependencies for nonstationary
systems whose Granger causal structures change in time.
|
[
{
"created": "Thu, 19 May 2022 14:07:54 GMT",
"version": "v1"
}
] |
2022-05-20
|
[
[
"Suryadi",
"",
""
],
[
"Ong",
"Yew-Soon",
""
],
[
"Chew",
"Lock Yue",
""
]
] |
Granger causality is a commonly used method for uncovering information flow and dependencies in a time series. Here we introduce JGC (Jacobian Granger Causality), a neural network-based approach to Granger causality using the Jacobian as a measure of variable importance, and propose a thresholding procedure for inferring Granger causal variables using this measure. The resulting approach performs consistently well compared to other approaches in identifying Granger causal variables, the associated time lags, as well as interaction signs. Lastly, through the inclusion of a time variable, we show that this approach is able to learn the temporal dependencies for nonstationary systems whose Granger causal structures change in time.
|
1509.08891
|
Hao Wu
|
Hao Wu
|
The Computational Principles of Learning Ability
| null | null | null | null |
cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
It has been quite a long time since AI researchers in the field of computer
science stopped talking about simulating human intelligence or trying to
explain how the brain works. Recently, represented by deep learning
techniques, the field of machine learning is experiencing unprecedented
prosperity, and some applications with near human-level performance give
researchers the confidence to imply that their approaches are promising
candidates for understanding the mechanism of the human brain. However, apart
from several ancient philological criteria and some imaginary black-box tests
(the Turing test, the Chinese room), there is no computational-level
explanation, definition, or criterion for intelligence or any of its
components. Based on the common sense that learning ability is one critical
component of intelligence, and inspecting it from the viewpoint of mapping
relations, this paper presents two laws which explain what the "learning
ability" we are familiar with is and under what conditions a mapping relation
can be acknowledged as a "Learning Model".
|
[
{
"created": "Wed, 23 Sep 2015 04:25:44 GMT",
"version": "v1"
}
] |
2015-09-30
|
[
[
"Wu",
"Hao",
""
]
] |
It has been quite a long time since AI researchers in the field of computer science stopped talking about simulating human intelligence or trying to explain how the brain works. Recently, represented by deep learning techniques, the field of machine learning is experiencing unprecedented prosperity, and some applications with near human-level performance give researchers the confidence to imply that their approaches are promising candidates for understanding the mechanism of the human brain. However, apart from several ancient philological criteria and some imaginary black-box tests (the Turing test, the Chinese room), there is no computational-level explanation, definition, or criterion for intelligence or any of its components. Based on the common sense that learning ability is one critical component of intelligence, and inspecting it from the viewpoint of mapping relations, this paper presents two laws which explain what the "learning ability" we are familiar with is and under what conditions a mapping relation can be acknowledged as a "Learning Model".
|
2204.02524
|
Alexander H. Liu
|
Alexander H. Liu, Cheng-I Jeff Lai, Wei-Ning Hsu, Michael Auli, Alexei
Baevski, James Glass
|
Simple and Effective Unsupervised Speech Synthesis
|
preprint, equal contribution from first two authors
| null | null | null |
cs.SD cs.CL eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce the first unsupervised speech synthesis system based on a
simple, yet effective recipe. The framework leverages recent work in
unsupervised speech recognition as well as existing neural-based speech
synthesis. Using only unlabeled speech audio and unlabeled text as well as a
lexicon, our method enables speech synthesis without the need for a
human-labeled corpus. Experiments demonstrate the unsupervised system can
synthesize speech similar to a supervised counterpart in terms of naturalness
and intelligibility measured by human evaluation.
|
[
{
"created": "Wed, 6 Apr 2022 00:19:13 GMT",
"version": "v1"
},
{
"created": "Thu, 7 Apr 2022 02:46:21 GMT",
"version": "v2"
},
{
"created": "Wed, 20 Apr 2022 17:45:35 GMT",
"version": "v3"
}
] |
2022-04-21
|
[
[
"Liu",
"Alexander H.",
""
],
[
"Lai",
"Cheng-I Jeff",
""
],
[
"Hsu",
"Wei-Ning",
""
],
[
"Auli",
"Michael",
""
],
[
"Baevski",
"Alexei",
""
],
[
"Glass",
"James",
""
]
] |
We introduce the first unsupervised speech synthesis system based on a simple, yet effective recipe. The framework leverages recent work in unsupervised speech recognition as well as existing neural-based speech synthesis. Using only unlabeled speech audio and unlabeled text as well as a lexicon, our method enables speech synthesis without the need for a human-labeled corpus. Experiments demonstrate the unsupervised system can synthesize speech similar to a supervised counterpart in terms of naturalness and intelligibility measured by human evaluation.
|
2103.15515
|
Cong-Thanh Do
|
Cong-Thanh Do, Rama Doddipatla, Thomas Hain
|
Multiple-hypothesis CTC-based semi-supervised adaptation of end-to-end
speech recognition
|
Accepted at ICASSP 2021
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper proposes an adaptation method for end-to-end speech recognition.
In this method, multiple automatic speech recognition (ASR) 1-best hypotheses
are integrated in the computation of the connectionist temporal classification
(CTC) loss function. The integration of multiple ASR hypotheses helps
alleviate the impact of errors in the ASR hypotheses on the computation of
the CTC loss when ASR hypotheses are used. When applied in
semi-supervised adaptation scenarios where part of the adaptation data do not
have labels, the CTC loss of the proposed method is computed from different ASR
1-best hypotheses obtained by decoding the unlabeled adaptation data.
Experiments are performed in clean and multi-condition training scenarios where
the CTC-based end-to-end ASR systems are trained on Wall Street Journal (WSJ)
clean training data and CHiME-4 multi-condition training data, respectively,
and tested on Aurora-4 test data. The proposed adaptation method yields 6.6%
and 5.8% relative word error rate (WER) reductions in clean and multi-condition
training scenarios, respectively, compared to a baseline system which is
adapted with part of the adaptation data having manual transcriptions using
back-propagation fine-tuning.
|
[
{
"created": "Mon, 29 Mar 2021 11:38:35 GMT",
"version": "v1"
},
{
"created": "Wed, 31 Mar 2021 09:30:35 GMT",
"version": "v2"
}
] |
2021-04-01
|
[
[
"Do",
"Cong-Thanh",
""
],
[
"Doddipatla",
"Rama",
""
],
[
"Hain",
"Thomas",
""
]
] |
This paper proposes an adaptation method for end-to-end speech recognition. In this method, multiple automatic speech recognition (ASR) 1-best hypotheses are integrated in the computation of the connectionist temporal classification (CTC) loss function. The integration of multiple ASR hypotheses helps alleviate the impact of errors in the ASR hypotheses on the computation of the CTC loss when ASR hypotheses are used. When applied in semi-supervised adaptation scenarios where part of the adaptation data do not have labels, the CTC loss of the proposed method is computed from different ASR 1-best hypotheses obtained by decoding the unlabeled adaptation data. Experiments are performed in clean and multi-condition training scenarios where the CTC-based end-to-end ASR systems are trained on Wall Street Journal (WSJ) clean training data and CHiME-4 multi-condition training data, respectively, and tested on Aurora-4 test data. The proposed adaptation method yields 6.6% and 5.8% relative word error rate (WER) reductions in clean and multi-condition training scenarios, respectively, compared to a baseline system which is adapted with part of the adaptation data having manual transcriptions using back-propagation fine-tuning.
|
1910.07968
|
Arvind Kiwelekar
|
Arvind W Kiwelekar
|
Role of Ontology Training to Software Engineering Students
|
Short Position Paper
| null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Students of software engineering struggle to develop a systems perspective
because most of the software engineering methodologies focus on developing a
particular aspect of a system. Lack of unified coverage to the topic of systems
modelling is identified as the root cause behind this problem. The paper
explains the role of ontology in building systems perspective. A case for the
necessity of ontology training as a means to overcome this problem is
presented. The course content for a typical course on ontology is also
described in the paper.
|
[
{
"created": "Thu, 17 Oct 2019 15:25:51 GMT",
"version": "v1"
}
] |
2019-10-18
|
[
[
"Kiwelekar",
"Arvind W",
""
]
] |
Students of software engineering struggle to develop a systems perspective because most of the software engineering methodologies focus on developing a particular aspect of a system. Lack of unified coverage of the topic of systems modelling is identified as the root cause behind this problem. The paper explains the role of ontology in building a systems perspective. A case for the necessity of ontology training as a means to overcome this problem is presented. The course content for a typical course on ontology is also described in the paper.
|
2012.00058
|
Trang Le
|
Joseph D. Romano, Trang T. Le, William La Cava, John T. Gregg, Daniel
J. Goldberg, Natasha L. Ray, Praneel Chakraborty, Daniel Himmelstein, Weixuan
Fu, and Jason H. Moore
|
PMLB v1.0: An open source dataset collection for benchmarking machine
learning methods
|
4 pages, 1 figure. *: These authors contributed equally
| null | null | null |
cs.LG cs.DB
|
http://creativecommons.org/licenses/by/4.0/
|
Motivation: Novel machine learning and statistical modeling studies rely on
standardized comparisons to existing methods using well-studied benchmark
datasets. Few tools exist that provide rapid access to many of these datasets
through a standardized, user-friendly interface that integrates well with
popular data science workflows.
Results: This release of PMLB provides the largest collection of diverse,
public benchmark datasets for evaluating new machine learning and data science
methods aggregated in one location. v1.0 introduces a number of critical
improvements developed following discussions with the open-source community.
Availability: PMLB is available at https://github.com/EpistasisLab/pmlb.
Python and R interfaces for PMLB can be installed through the Python Package
Index and Comprehensive R Archive Network, respectively.
|
[
{
"created": "Mon, 30 Nov 2020 19:21:44 GMT",
"version": "v1"
},
{
"created": "Sun, 4 Apr 2021 20:31:09 GMT",
"version": "v2"
},
{
"created": "Tue, 6 Apr 2021 12:37:35 GMT",
"version": "v3"
}
] |
2021-04-07
|
[
[
"Romano",
"Joseph D.",
""
],
[
"Le",
"Trang T.",
""
],
[
"La Cava",
"William",
""
],
[
"Gregg",
"John T.",
""
],
[
"Goldberg",
"Daniel J.",
""
],
[
"Ray",
"Natasha L.",
""
],
[
"Chakraborty",
"Praneel",
""
],
[
"Himmelstein",
"Daniel",
""
],
[
"Fu",
"Weixuan",
""
],
[
"Moore",
"Jason H.",
""
]
] |
Motivation: Novel machine learning and statistical modeling studies rely on standardized comparisons to existing methods using well-studied benchmark datasets. Few tools exist that provide rapid access to many of these datasets through a standardized, user-friendly interface that integrates well with popular data science workflows. Results: This release of PMLB provides the largest collection of diverse, public benchmark datasets for evaluating new machine learning and data science methods aggregated in one location. v1.0 introduces a number of critical improvements developed following discussions with the open-source community. Availability: PMLB is available at https://github.com/EpistasisLab/pmlb. Python and R interfaces for PMLB can be installed through the Python Package Index and Comprehensive R Archive Network, respectively.
|
2105.02351
|
Necati Cihan Camgoz Dr.
|
Necati Cihan Camgoz, Ben Saunders, Guillaume Rochette, Marco
Giovanelli, Giacomo Inches, Robin Nachtrab-Ribback, Richard Bowden
|
Content4All Open Research Sign Language Translation Datasets
| null | null | null | null |
cs.CV cs.CL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Computational sign language research lacks the large-scale datasets that
enable the creation of useful real-life applications. To date, most research
has been limited to prototype systems on small domains of discourse, e.g.
weather forecasts. To address this issue and to push the field forward, we
release six datasets comprised of 190 hours of footage on the larger domain of
news. From this, 20 hours of footage have been annotated by Deaf experts and
interpreters and are made publicly available for research purposes. In this
paper, we share the dataset collection process and tools developed to enable
the alignment of sign language video and subtitles, as well as baseline
translation results to underpin future research.
|
[
{
"created": "Wed, 5 May 2021 22:14:53 GMT",
"version": "v1"
}
] |
2021-05-07
|
[
[
"Camgoz",
"Necati Cihan",
""
],
[
"Saunders",
"Ben",
""
],
[
"Rochette",
"Guillaume",
""
],
[
"Giovanelli",
"Marco",
""
],
[
"Inches",
"Giacomo",
""
],
[
"Nachtrab-Ribback",
"Robin",
""
],
[
"Bowden",
"Richard",
""
]
] |
Computational sign language research lacks the large-scale datasets that enable the creation of useful real-life applications. To date, most research has been limited to prototype systems on small domains of discourse, e.g. weather forecasts. To address this issue and to push the field forward, we release six datasets comprised of 190 hours of footage on the larger domain of news. From this, 20 hours of footage have been annotated by Deaf experts and interpreters and are made publicly available for research purposes. In this paper, we share the dataset collection process and tools developed to enable the alignment of sign language video and subtitles, as well as baseline translation results to underpin future research.
|
2301.00418
|
Duc-Vu Nguyen
|
Duc-Vu Nguyen, Ngan Luu-Thuy Nguyen
|
Is word segmentation necessary for Vietnamese sentiment classification?
|
In Proceedings of the 16th International Conference on Computing and
Communication Technologies (RIVF 2022)
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
To the best of our knowledge, this paper made the first attempt to answer
whether word segmentation is necessary for Vietnamese sentiment classification.
To do this, we presented five pre-trained monolingual S4-based language models
for Vietnamese, including one model without word segmentation, and four models
using RDRsegmenter, uitnlp, pyvi, or underthesea toolkits in the pre-processing
data phase. According to comprehensive experimental results on two corpora,
including the VLSP2016-SA corpus of technical article reviews from the news and
social media and the UIT-VSFC corpus of the educational survey, we have two
suggestions. Firstly, using traditional classifiers like Naive Bayes or Support
Vector Machines, word segmentation may not be necessary for the Vietnamese
sentiment classification corpus, which comes from the social domain. Secondly,
word segmentation is necessary for Vietnamese sentiment classification when
word segmentation is used before using the BPE method and feeding into the deep
learning model. In this way, the RDRsegmenter is the stable toolkit for word
segmentation among the uitnlp, pyvi, and underthesea toolkits.
|
[
{
"created": "Sun, 1 Jan 2023 15:04:47 GMT",
"version": "v1"
}
] |
2023-01-03
|
[
[
"Nguyen",
"Duc-Vu",
""
],
[
"Nguyen",
"Ngan Luu-Thuy",
""
]
] |
To the best of our knowledge, this paper made the first attempt to answer whether word segmentation is necessary for Vietnamese sentiment classification. To do this, we presented five pre-trained monolingual S4-based language models for Vietnamese, including one model without word segmentation, and four models using RDRsegmenter, uitnlp, pyvi, or underthesea toolkits in the pre-processing data phase. According to comprehensive experimental results on two corpora, including the VLSP2016-SA corpus of technical article reviews from the news and social media and the UIT-VSFC corpus of the educational survey, we have two suggestions. Firstly, using traditional classifiers like Naive Bayes or Support Vector Machines, word segmentation may not be necessary for the Vietnamese sentiment classification corpus, which comes from the social domain. Secondly, word segmentation is necessary for Vietnamese sentiment classification when word segmentation is used before using the BPE method and feeding into the deep learning model. In this way, the RDRsegmenter is the stable toolkit for word segmentation among the uitnlp, pyvi, and underthesea toolkits.
|
2309.17339
|
Maximilian Schambach
|
Maximilian Schambach, Dominique Paul, Johannes S. Otterbach
|
Scaling Experiments in Self-Supervised Cross-Table Representation
Learning
| null | null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
To analyze the scaling potential of deep tabular representation learning
models, we introduce a novel Transformer-based architecture specifically
tailored to tabular data and cross-table representation learning by utilizing
table-specific tokenizers and a shared Transformer backbone. Our training
approach encompasses both single-table and cross-table models, trained via
missing value imputation through a self-supervised masked cell recovery
objective. To understand the scaling behavior of our method, we train models of
varying sizes, ranging from approximately $10^4$ to $10^7$ parameters. These
models are trained on a carefully curated pretraining dataset, consisting of
135M training tokens sourced from 76 diverse datasets. We assess the scaling of
our architecture in both single-table and cross-table pretraining setups by
evaluating the pretrained models using linear probing on a curated set of
benchmark datasets and comparing the results with conventional baselines.
|
[
{
"created": "Fri, 29 Sep 2023 15:48:38 GMT",
"version": "v1"
}
] |
2023-10-02
|
[
[
"Schambach",
"Maximilian",
""
],
[
"Paul",
"Dominique",
""
],
[
"Otterbach",
"Johannes S.",
""
]
] |
To analyze the scaling potential of deep tabular representation learning models, we introduce a novel Transformer-based architecture specifically tailored to tabular data and cross-table representation learning by utilizing table-specific tokenizers and a shared Transformer backbone. Our training approach encompasses both single-table and cross-table models, trained via missing value imputation through a self-supervised masked cell recovery objective. To understand the scaling behavior of our method, we train models of varying sizes, ranging from approximately $10^4$ to $10^7$ parameters. These models are trained on a carefully curated pretraining dataset, consisting of 135M training tokens sourced from 76 diverse datasets. We assess the scaling of our architecture in both single-table and cross-table pretraining setups by evaluating the pretrained models using linear probing on a curated set of benchmark datasets and comparing the results with conventional baselines.
|
2202.10587
|
Yin Fang
|
Yin Fang, Zhuo Chen, Xiaohui Fan and Ningyu Zhang
|
Knowledge-informed Molecular Learning: A Survey on Paradigm Transfer
|
8 pages, 3 figures
| null | null | null |
cs.LG cs.AI physics.chem-ph q-bio.QM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Machine learning, notably deep learning, has significantly propelled
molecular investigations within the biochemical sphere. Traditionally, modeling
for such research has centered around a handful of paradigms. For instance, the
prediction paradigm is frequently deployed for tasks such as molecular property
prediction. To enhance the generation and decipherability of purely data-driven
models, scholars have integrated biochemical domain knowledge into these
molecular study models. This integration has sparked a surge in paradigm
transfer, which is solving one molecular learning task by reformulating it as
another one. With the emergence of Large Language Models, these paradigms have
demonstrated an escalating trend towards harmonized unification. In this work,
we delineate a literature survey focused on knowledge-informed molecular
learning from the perspective of paradigm transfer. We classify the paradigms,
scrutinize their methodologies, and dissect the contribution of domain
knowledge. Moreover, we encapsulate prevailing trends and identify intriguing
avenues for future exploration in molecular learning.
|
[
{
"created": "Thu, 17 Feb 2022 06:18:02 GMT",
"version": "v1"
},
{
"created": "Tue, 5 Sep 2023 10:46:44 GMT",
"version": "v2"
}
] |
2023-09-06
|
[
[
"Fang",
"Yin",
""
],
[
"Chen",
"Zhuo",
""
],
[
"Fan",
"Xiaohui",
""
],
[
"Zhang",
"Ningyu",
""
]
] |
Machine learning, notably deep learning, has significantly propelled molecular investigations within the biochemical sphere. Traditionally, modeling for such research has centered around a handful of paradigms. For instance, the prediction paradigm is frequently deployed for tasks such as molecular property prediction. To enhance the generation and decipherability of purely data-driven models, scholars have integrated biochemical domain knowledge into these molecular study models. This integration has sparked a surge in paradigm transfer, which is solving one molecular learning task by reformulating it as another one. With the emergence of Large Language Models, these paradigms have demonstrated an escalating trend towards harmonized unification. In this work, we delineate a literature survey focused on knowledge-informed molecular learning from the perspective of paradigm transfer. We classify the paradigms, scrutinize their methodologies, and dissect the contribution of domain knowledge. Moreover, we encapsulate prevailing trends and identify intriguing avenues for future exploration in molecular learning.
|
2304.00483
|
Mathieu Ravaut
|
Iva Bojic, Josef Halim, Verena Suharman, Sreeja Tar, Qi Chwen Ong, Duy
Phung, Mathieu Ravaut, Shafiq Joty, Josip Car
|
A Data-centric Framework for Improving Domain-specific Machine Reading
Comprehension Datasets
| null |
2023.In The Fourth Workshop on Insights from Negative Results in
NLP, pages 19-32, Dubrovnik, Croatia. Association for Computational
Linguistics
|
10.18653/v1/2023.insights-1.3
| null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Low-quality data can cause downstream problems in high-stakes applications.
The data-centric approach emphasizes improving dataset quality to enhance model
performance. High-quality datasets are needed for general-purpose Large
Language Models (LLMs) training, as well as for domain-specific models, which
are usually small in size as it is costly to engage a large number of domain
experts for their creation. Thus, it is vital to ensure high-quality
domain-specific training data. In this paper, we propose a framework for
enhancing the data quality of original datasets. We applied the proposed
framework to four biomedical datasets and showed relative improvement of up to
33%/40% for fine-tuning of retrieval/reader models on the BioASQ dataset when
using back translation to enhance the original dataset quality.
|
[
{
"created": "Sun, 2 Apr 2023 08:26:38 GMT",
"version": "v1"
},
{
"created": "Fri, 26 May 2023 05:43:19 GMT",
"version": "v2"
}
] |
2023-10-13
|
[
[
"Bojic",
"Iva",
""
],
[
"Halim",
"Josef",
""
],
[
"Suharman",
"Verena",
""
],
[
"Tar",
"Sreeja",
""
],
[
"Ong",
"Qi Chwen",
""
],
[
"Phung",
"Duy",
""
],
[
"Ravaut",
"Mathieu",
""
],
[
"Joty",
"Shafiq",
""
],
[
"Car",
"Josip",
""
]
] |
Low-quality data can cause downstream problems in high-stakes applications. Data-centric approach emphasizes on improving dataset quality to enhance model performance. High-quality datasets are needed for general-purpose Large Language Models (LLMs) training, as well as for domain-specific models, which are usually small in size as it is costly to engage a large number of domain experts for their creation. Thus, it is vital to ensure high-quality domain-specific training data. In this paper, we propose a framework for enhancing the data quality of original datasets. We applied the proposed framework to four biomedical datasets and showed relative improvement of up to 33%/40% for fine-tuning of retrieval/reader models on the BioASQ dataset when using back translation to enhance the original dataset quality.
|
1902.09197
|
Manoel Horta Ribeiro
|
Manoel Horta Ribeiro, Kristina Gligori\'c, Robert West
|
Message Distortion in Information Cascades
|
Presented at TheWebConf 2019
| null |
10.1145/3308558.3313531
| null |
cs.SI
|
http://creativecommons.org/licenses/by/4.0/
|
Information diffusion is usually modeled as a process in which immutable
pieces of information propagate over a network. In reality, however, messages
are not immutable, but may be morphed with every step, potentially entailing
large cumulative distortions. This process may lead to misinformation even in
the absence of malevolent actors, and understanding it is crucial for modeling
and improving online information systems. Here, we perform a controlled,
crowdsourced experiment in which we simulate the propagation of information
from medical research papers. Starting from the original abstracts, crowd
workers iteratively shorten previously produced summaries to increasingly
smaller lengths. We also collect control summaries where the original abstract
is compressed directly to the final target length. Comparing cascades to
controls allows us to separate the effect of the length constraint from that of
accumulated distortion. Via careful manual coding, we annotate lexical and
semantic units in the medical abstracts and track them along cascades. We find
that iterative summarization has a negative impact due to the accumulation of
error, but that high-quality intermediate summaries result in less distorted
messages than in the control case. Different types of information behave
differently; in particular, the conclusion of a medical abstract (i.e., its key
message) is distorted most. Finally, we compare abstractive with extractive
summaries, finding that the latter are less prone to semantic distortion.
Overall, this work is a first step in studying information cascades without the
assumption that disseminated content is immutable, with implications on our
understanding of the role of word-of-mouth effects on the misreporting of
science.
|
[
{
"created": "Mon, 25 Feb 2019 11:20:23 GMT",
"version": "v1"
},
{
"created": "Fri, 7 Jun 2019 23:36:43 GMT",
"version": "v2"
}
] |
2019-06-11
|
[
[
"Ribeiro",
"Manoel Horta",
""
],
[
"Gligorić",
"Kristina",
""
],
[
"West",
"Robert",
""
]
] |
Information diffusion is usually modeled as a process in which immutable pieces of information propagate over a network. In reality, however, messages are not immutable, but may be morphed with every step, potentially entailing large cumulative distortions. This process may lead to misinformation even in the absence of malevolent actors, and understanding it is crucial for modeling and improving online information systems. Here, we perform a controlled, crowdsourced experiment in which we simulate the propagation of information from medical research papers. Starting from the original abstracts, crowd workers iteratively shorten previously produced summaries to increasingly smaller lengths. We also collect control summaries where the original abstract is compressed directly to the final target length. Comparing cascades to controls allows us to separate the effect of the length constraint from that of accumulated distortion. Via careful manual coding, we annotate lexical and semantic units in the medical abstracts and track them along cascades. We find that iterative summarization has a negative impact due to the accumulation of error, but that high-quality intermediate summaries result in less distorted messages than in the control case. Different types of information behave differently; in particular, the conclusion of a medical abstract (i.e., its key message) is distorted most. Finally, we compare abstractive with extractive summaries, finding that the latter are less prone to semantic distortion. Overall, this work is a first step in studying information cascades without the assumption that disseminated content is immutable, with implications on our understanding of the role of word-of-mouth effects on the misreporting of science.
|
2205.07611
|
Haochen Han
|
Haochen Han, Qinghua Zheng, Minnan Luo, Kaiyao Miao, Feng Tian and Yan
Chen
|
Noise-Tolerant Learning for Audio-Visual Action Recognition
|
This work has been submitted to the IEEE for possible publication.
Copyright may be transferred without notice, after which this version may no
longer be accessible
| null | null | null |
cs.CV cs.MM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recently, video recognition is emerging with the help of multi-modal
learning, which focuses on integrating distinct modalities to improve the
performance or robustness of the model. Although various multi-modal learning
methods have been proposed and offer remarkable recognition results, almost all
of these methods rely on high-quality manual annotations and assume that
modalities among multi-modal data provide semantically relevant information.
Unfortunately, the widely used video datasets are usually coarse-annotated or
collected from the Internet. Thus, it inevitably contains a portion of noisy
labels and noisy correspondence. To address this challenge, we use the
audio-visual action recognition task as a proxy and propose a noise-tolerant
learning framework to find anti-interference model parameters against both
noisy labels and noisy correspondence. Specifically, our method consists of two
phases that aim to rectify noise by the inherent correlation between
modalities. First, a noise-tolerant contrastive training phase is performed to
make the model immune to the possible noisy-labeled data. To alleviate the
influence of noisy correspondence, we propose a cross-modal noise estimation
component to adjust the consistency between different modalities. As the noisy
correspondence existed at the instance level, we further propose a
category-level contrastive loss to reduce its interference. Second, in the
hybrid-supervised training phase, we calculate the distance metric among
features to obtain corrected labels, which are used as complementary
supervision to guide the training. Extensive experiments on a wide range of
noisy levels demonstrate that our method significantly improves the robustness
of the action recognition model and surpasses the baselines by a clear margin.
|
[
{
"created": "Mon, 16 May 2022 12:14:03 GMT",
"version": "v1"
},
{
"created": "Fri, 20 May 2022 10:10:55 GMT",
"version": "v2"
},
{
"created": "Mon, 11 Sep 2023 04:23:25 GMT",
"version": "v3"
}
] |
2023-09-12
|
[
[
"Han",
"Haochen",
""
],
[
"Zheng",
"Qinghua",
""
],
[
"Luo",
"Minnan",
""
],
[
"Miao",
"Kaiyao",
""
],
[
"Tian",
"Feng",
""
],
[
"Chen",
"Yan",
""
]
] |
Recently, video recognition is emerging with the help of multi-modal learning, which focuses on integrating distinct modalities to improve the performance or robustness of the model. Although various multi-modal learning methods have been proposed and offer remarkable recognition results, almost all of these methods rely on high-quality manual annotations and assume that modalities among multi-modal data provide semantically relevant information. Unfortunately, the widely used video datasets are usually coarse-annotated or collected from the Internet. Thus, it inevitably contains a portion of noisy labels and noisy correspondence. To address this challenge, we use the audio-visual action recognition task as a proxy and propose a noise-tolerant learning framework to find anti-interference model parameters against both noisy labels and noisy correspondence. Specifically, our method consists of two phases that aim to rectify noise by the inherent correlation between modalities. First, a noise-tolerant contrastive training phase is performed to make the model immune to the possible noisy-labeled data. To alleviate the influence of noisy correspondence, we propose a cross-modal noise estimation component to adjust the consistency between different modalities. As the noisy correspondence existed at the instance level, we further propose a category-level contrastive loss to reduce its interference. Second, in the hybrid-supervised training phase, we calculate the distance metric among features to obtain corrected labels, which are used as complementary supervision to guide the training. Extensive experiments on a wide range of noisy levels demonstrate that our method significantly improves the robustness of the action recognition model and surpasses the baselines by a clear margin.
|
1908.02284
|
Zachary Ren
|
Zongze Ren, Guofu Yang, Shugong Xu
|
Two-stage Training for Chinese Dialect Recognition
|
Accepted to Interspeech 2019
| null | null | null |
cs.CL cs.LG eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we present a two-stage language identification (LID) system
based on a shallow ResNet14 followed by a simple 2-layer recurrent neural
network (RNN) architecture, which was used for Xunfei (iFlyTek) Chinese Dialect
Recognition Challenge and won the first place among 110 teams. The system
trains an acoustic model (AM) firstly with connectionist temporal
classification (CTC) to recognize the given phonetic sequence annotation and
then train another RNN to classify dialect category by utilizing the
intermediate features as inputs from the AM. Compared with a three-stage system
we further explore, our results show that the two-stage system can achieve high
accuracy for Chinese dialects recognition under both short utterance and long
utterance conditions with less training time.
|
[
{
"created": "Tue, 6 Aug 2019 04:28:56 GMT",
"version": "v1"
},
{
"created": "Sat, 10 Aug 2019 09:28:00 GMT",
"version": "v2"
}
] |
2019-08-13
|
[
[
"Ren",
"Zongze",
""
],
[
"Yang",
"Guofu",
""
],
[
"Xu",
"Shugong",
""
]
] |
In this paper, we present a two-stage language identification (LID) system based on a shallow ResNet14 followed by a simple 2-layer recurrent neural network (RNN) architecture, which was used for Xunfei (iFlyTek) Chinese Dialect Recognition Challenge and won the first place among 110 teams. The system trains an acoustic model (AM) firstly with connectionist temporal classification (CTC) to recognize the given phonetic sequence annotation and then train another RNN to classify dialect category by utilizing the intermediate features as inputs from the AM. Compared with a three-stage system we further explore, our results show that the two-stage system can achieve high accuracy for Chinese dialects recognition under both short utterance and long utterance conditions with less training time.
|
2309.16158
|
Jindong Li
|
Jindong Li, Guobin Shen, Dongcheng Zhao, Qian Zhang, Yi Zeng
|
FireFly v2: Advancing Hardware Support for High-Performance Spiking
Neural Network with a Spatiotemporal FPGA Accelerator
| null | null | null | null |
cs.NE cs.AR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Spiking Neural Networks (SNNs) are expected to be a promising alternative to
Artificial Neural Networks (ANNs) due to their strong biological
interpretability and high energy efficiency. Specialized SNN hardware offers
clear advantages over general-purpose devices in terms of power and
performance. However, there's still room to advance hardware support for
state-of-the-art (SOTA) SNN algorithms and improve computation and memory
efficiency. As a further step in supporting high-performance SNNs on
specialized hardware, we introduce FireFly v2, an FPGA SNN accelerator that can
address the issue of non-spike operation in current SOTA SNN algorithms, which
presents an obstacle in the end-to-end deployment onto existing SNN hardware.
To more effectively align with the SNN characteristics, we design a
spatiotemporal dataflow that allows four dimensions of parallelism and
eliminates the need for membrane potential storage, enabling on-the-fly spike
processing and spike generation. To further improve hardware acceleration
performance, we develop a high-performance spike computing engine as a backend
based on a systolic array operating at 500-600MHz. To the best of our
knowledge, FireFly v2 achieves the highest clock frequency among all FPGA-based
implementations. Furthermore, it stands as the first SNN accelerator capable of
supporting non-spike operations, which are commonly used in advanced SNN
algorithms. FireFly v2 has doubled the throughput and DSP efficiency when
compared to our previous version of FireFly and it exhibits 1.33 times the DSP
efficiency and 1.42 times the power efficiency compared to the current most
advanced FPGA accelerators.
|
[
{
"created": "Thu, 28 Sep 2023 04:17:02 GMT",
"version": "v1"
}
] |
2023-09-29
|
[
[
"Li",
"Jindong",
""
],
[
"Shen",
"Guobin",
""
],
[
"Zhao",
"Dongcheng",
""
],
[
"Zhang",
"Qian",
""
],
[
"Zeng",
"Yi",
""
]
] |
Spiking Neural Networks (SNNs) are expected to be a promising alternative to Artificial Neural Networks (ANNs) due to their strong biological interpretability and high energy efficiency. Specialized SNN hardware offers clear advantages over general-purpose devices in terms of power and performance. However, there's still room to advance hardware support for state-of-the-art (SOTA) SNN algorithms and improve computation and memory efficiency. As a further step in supporting high-performance SNNs on specialized hardware, we introduce FireFly v2, an FPGA SNN accelerator that can address the issue of non-spike operation in current SOTA SNN algorithms, which presents an obstacle in the end-to-end deployment onto existing SNN hardware. To more effectively align with the SNN characteristics, we design a spatiotemporal dataflow that allows four dimensions of parallelism and eliminates the need for membrane potential storage, enabling on-the-fly spike processing and spike generation. To further improve hardware acceleration performance, we develop a high-performance spike computing engine as a backend based on a systolic array operating at 500-600MHz. To the best of our knowledge, FireFly v2 achieves the highest clock frequency among all FPGA-based implementations. Furthermore, it stands as the first SNN accelerator capable of supporting non-spike operations, which are commonly used in advanced SNN algorithms. FireFly v2 has doubled the throughput and DSP efficiency when compared to our previous version of FireFly and it exhibits 1.33 times the DSP efficiency and 1.42 times the power efficiency compared to the current most advanced FPGA accelerators.
|
2305.20055
|
Songning Lai
|
Haoxuan Xu, Songning Lai, Xianyang Li, Yang Yang
|
Cross-Domain Car Detection Model with Integrated Convolutional Block
Attention Mechanism
|
It needs to be returned for major modifications
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Car detection, particularly through camera vision, has become a major focus
in the field of computer vision and has gained widespread adoption. While
current car detection systems are capable of good detection, reliable detection
can still be challenging due to factors such as proximity between the car,
light intensity, and environmental visibility. To address these issues, we
propose cross-domain Car Detection Model with integrated convolutional block
Attention mechanism(CDMA) that we apply to car recognition for autonomous
driving and other areas. CDMA includes several novelties: 1)Building a complete
cross-domain target detection framework. 2)Developing an unpaired target domain
picture generation module with an integrated convolutional attention mechanism
which specifically emphasizes the car headlights feature. 3)Adopting
Generalized Intersection over Union (GIOU) as the loss function of the target
detection framework. 4)Designing an object detection model integrated with
two-headed Convolutional Block Attention Module(CBAM). 5)Utilizing an effective
data enhancement method. To evaluate the model's effectiveness, we performed a
reduced will resolution process on the data in the SSLAD dataset and used it as
the benchmark dataset for our task. Experimental results show that the
performance of the cross-domain car target detection model improves by 40% over
the model without our framework, and our improvements have a significant impact
on cross-domain car recognition.
|
[
{
"created": "Wed, 31 May 2023 17:28:13 GMT",
"version": "v1"
},
{
"created": "Sun, 11 Jun 2023 12:10:08 GMT",
"version": "v2"
},
{
"created": "Wed, 21 Jun 2023 11:34:33 GMT",
"version": "v3"
},
{
"created": "Thu, 29 Jun 2023 18:08:22 GMT",
"version": "v4"
}
] |
2023-07-03
|
[
[
"Xu",
"Haoxuan",
""
],
[
"Lai",
"Songning",
""
],
[
"Li",
"Xianyang",
""
],
[
"Yang",
"Yang",
""
]
] |
Car detection, particularly through camera vision, has become a major focus in the field of computer vision and has gained widespread adoption. While current car detection systems are capable of good detection, reliable detection can still be challenging due to factors such as proximity between the car, light intensity, and environmental visibility. To address these issues, we propose cross-domain Car Detection Model with integrated convolutional block Attention mechanism(CDMA) that we apply to car recognition for autonomous driving and other areas. CDMA includes several novelties: 1)Building a complete cross-domain target detection framework. 2)Developing an unpaired target domain picture generation module with an integrated convolutional attention mechanism which specifically emphasizes the car headlights feature. 3)Adopting Generalized Intersection over Union (GIOU) as the loss function of the target detection framework. 4)Designing an object detection model integrated with two-headed Convolutional Block Attention Module(CBAM). 5)Utilizing an effective data enhancement method. To evaluate the model's effectiveness, we performed a reduced will resolution process on the data in the SSLAD dataset and used it as the benchmark dataset for our task. Experimental results show that the performance of the cross-domain car target detection model improves by 40% over the model without our framework, and our improvements have a significant impact on cross-domain car recognition.
|
2111.14998
|
Jack Ziegler
|
Jack Ziegler and Ryan M. Mcgranaghan
|
Harnessing expressive capacity of Machine Learning modeling to represent
complex coupling of Earth's auroral space weather regimes
|
Lower resolution
| null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
We develop multiple Deep Learning (DL) models that advance the
state-of-the-art predictions of the global auroral particle precipitation. We
use observations from low Earth orbiting spacecraft of the electron energy flux
to develop a model that improves global nowcasts (predictions at the time of
observation) of the accelerated particles. Multiple Machine Learning (ML)
modeling approaches are compared, including a novel multi-task model, models
with tail- and distribution-based loss functions, and a spatio-temporally
sparse 2D-convolutional model. We detail the data preparation process as well
as the model development that will be illustrative for many similar time series
global regression problems in space weather and across domains. Our ML
improvements are three-fold: 1) loss function engineering; 2) multi-task
learning; and 3) transforming the task from time series prediction to
spatio-temporal prediction. Notably, the ML models improve prediction of the
extreme events, historically obstinate to accurate specification and indicate
that increased expressive capacity provided by ML innovation can address grand
challenges in the science of space weather.
|
[
{
"created": "Mon, 29 Nov 2021 22:35:09 GMT",
"version": "v1"
}
] |
2021-12-01
|
[
[
"Ziegler",
"Jack",
""
],
[
"Mcgranaghan",
"Ryan M.",
""
]
] |
We develop multiple Deep Learning (DL) models that advance the state-of-the-art predictions of the global auroral particle precipitation. We use observations from low Earth orbiting spacecraft of the electron energy flux to develop a model that improves global nowcasts (predictions at the time of observation) of the accelerated particles. Multiple Machine Learning (ML) modeling approaches are compared, including a novel multi-task model, models with tail- and distribution-based loss functions, and a spatio-temporally sparse 2D-convolutional model. We detail the data preparation process as well as the model development that will be illustrative for many similar time series global regression problems in space weather and across domains. Our ML improvements are three-fold: 1) loss function engineering; 2) multi-task learning; and 3) transforming the task from time series prediction to spatio-temporal prediction. Notably, the ML models improve prediction of the extreme events, historically obstinate to accurate specification and indicate that increased expressive capacity provided by ML innovation can address grand challenges in the science of space weather.
|
2208.01548
|
Sunjay Cauligi
|
Sunjay Cauligi, Marco Guarnieri, Daniel Moghimi, Deian Stefan, Marco
Vassena
|
A Turning Point for Verified Spectre Sandboxing
| null | null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Spectre attacks enable an attacker to access restricted data in an
application's memory. Both the academic community and industry veterans have
developed several mitigations to block Spectre attacks, but to date, very few
have been formally vetted; most are "best effort" strategies. Formal guarantees
are particularly crucial for protecting isolated environments like sandboxing
against Spectre attacks. In such environments, a subtle flaw in the mitigation
would allow untrusted code to break out of the sandbox and access trusted
memory regions.
In our work, we develop principled foundations to build isolated environments
resistant against Spectre attacks. We propose a formal framework for reasoning
about sandbox execution and Spectre attacks. We formalize properties that sound
mitigation strategies must fulfill and we show how various existing mitigations
satisfy (or fail to satisfy!) these properties.
|
[
{
"created": "Tue, 2 Aug 2022 15:56:17 GMT",
"version": "v1"
}
] |
2022-08-03
|
[
[
"Cauligi",
"Sunjay",
""
],
[
"Guarnieri",
"Marco",
""
],
[
"Moghimi",
"Daniel",
""
],
[
"Stefan",
"Deian",
""
],
[
"Vassena",
"Marco",
""
]
] |
Spectre attacks enable an attacker to access restricted data in an application's memory. Both the academic community and industry veterans have developed several mitigations to block Spectre attacks, but to date, very few have been formally vetted; most are "best effort" strategies. Formal guarantees are particularly crucial for protecting isolated environments like sandboxing against Spectre attacks. In such environments, a subtle flaw in the mitigation would allow untrusted code to break out of the sandbox and access trusted memory regions. In our work, we develop principled foundations to build isolated environments resistant against Spectre attacks. We propose a formal framework for reasoning about sandbox execution and Spectre attacks. We formalize properties that sound mitigation strategies must fulfill and we show how various existing mitigations satisfy (or fail to satisfy!) these properties.
|
1912.00778
|
Itay Lieder
|
Itay Lieder, Meirav Segal, Eran Avidan, Asaf Cohen, Tom Hope
|
Learning a faceted customer segmentation for discovering new business
opportunities at Intel
|
3 pages, 4 figures, Published in proceedings of IEEE BigData 2019
| null | null | null |
cs.IR cs.CL cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
For sales and marketing organizations within large enterprises, identifying
and understanding new markets, customers and partners is a key challenge.
Intel's Sales and Marketing Group (SMG) faces similar challenges while growing
in new markets and domains and evolving its existing business. In today's
complex technological and commercial landscape, there is need for intelligent
automation supporting a fine-grained understanding of businesses in order to
help SMG sift through millions of companies across many geographies and
languages and identify relevant directions. We present a system developed in
our company that mines millions of public business web pages, and extracts a
faceted customer representation. We focus on two key customer aspects that are
essential for finding relevant opportunities: industry segments (ranging from
broad verticals such as healthcare, to more specific fields such as 'video
analytics') and functional roles (e.g., 'manufacturer' or 'retail'). To address
the challenge of labeled data collection, we enrich our data with external
information gleaned from Wikipedia, and develop a semi-supervised multi-label,
multi-lingual deep learning model that parses customer website texts and
classifies them into their respective facets. Our system scans and indexes
companies as part of a large-scale knowledge graph that currently holds tens of
millions of connected entities with thousands being fetched, enriched and
connected to the graph by the hour in real time, and also supports knowledge
and insight discovery. In experiments conducted in our company, we are able to
significantly boost the performance of sales personnel in the task of
discovering new customers and commercial partnership opportunities.
|
[
{
"created": "Wed, 27 Nov 2019 15:48:26 GMT",
"version": "v1"
}
] |
2019-12-03
|
[
[
"Lieder",
"Itay",
""
],
[
"Segal",
"Meirav",
""
],
[
"Avidan",
"Eran",
""
],
[
"Cohen",
"Asaf",
""
],
[
"Hope",
"Tom",
""
]
] |
For sales and marketing organizations within large enterprises, identifying and understanding new markets, customers and partners is a key challenge. Intel's Sales and Marketing Group (SMG) faces similar challenges while growing in new markets and domains and evolving its existing business. In today's complex technological and commercial landscape, there is need for intelligent automation supporting a fine-grained understanding of businesses in order to help SMG sift through millions of companies across many geographies and languages and identify relevant directions. We present a system developed in our company that mines millions of public business web pages, and extracts a faceted customer representation. We focus on two key customer aspects that are essential for finding relevant opportunities: industry segments (ranging from broad verticals such as healthcare, to more specific fields such as 'video analytics') and functional roles (e.g., 'manufacturer' or 'retail'). To address the challenge of labeled data collection, we enrich our data with external information gleaned from Wikipedia, and develop a semi-supervised multi-label, multi-lingual deep learning model that parses customer website texts and classifies them into their respective facets. Our system scans and indexes companies as part of a large-scale knowledge graph that currently holds tens of millions of connected entities with thousands being fetched, enriched and connected to the graph by the hour in real time, and also supports knowledge and insight discovery. In experiments conducted in our company, we are able to significantly boost the performance of sales personnel in the task of discovering new customers and commercial partnership opportunities.
|
2309.16686
|
Luisa Schuhmacher
|
Luisa Schuhmacher, Sofie Pollin, Hazem Sallouha
|
ecoBLE: A Low-Computation Energy Consumption Prediction Framework for
Bluetooth Low Energy
|
To be published in proceedings of the 2023 International Conference
on Embedded Wireless Systems and Networks (EWSN)
| null | null | null |
cs.NI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Bluetooth Low Energy (BLE) is a de-facto technology for Internet of Things
(IoT) applications, promising very low energy consumption. However, this low
energy consumption accounts only for the radio part, and it overlooks the
energy consumption of other hardware and software components. Monitoring and
predicting the energy consumption of IoT nodes after deployment can
substantially aid in ensuring low energy consumption, calculating the remaining
battery lifetime, predicting needed energy for energy-harvesting nodes, and
detecting anomalies. In this paper, we introduce a Long Short-Term Memory
Projection (LSTMP)-based BLE energy consumption prediction framework together
with a dataset for a healthcare application scenario where BLE is widely
adopted. Unlike radio-focused theoretical energy models, our framework provides
a comprehensive energy consumption prediction, considering all components of
the IoT node, including the radio, sensor as well as microcontroller unit
(MCU). Our measurement-based results show that the proposed framework predicts
the energy consumption of different BLE nodes with a Mean Absolute Percentage
Error (MAPE) of up to 12%, giving comparable accuracy to state-of-the-art
energy consumption prediction with a five times smaller prediction model size.
|
[
{
"created": "Wed, 2 Aug 2023 13:04:23 GMT",
"version": "v1"
}
] |
2023-10-02
|
[
[
"Schuhmacher",
"Luisa",
""
],
[
"Pollin",
"Sofie",
""
],
[
"Sallouha",
"Hazem",
""
]
] |
Bluetooth Low Energy (BLE) is a de-facto technology for Internet of Things (IoT) applications, promising very low energy consumption. However, this low energy consumption accounts only for the radio part, and it overlooks the energy consumption of other hardware and software components. Monitoring and predicting the energy consumption of IoT nodes after deployment can substantially aid in ensuring low energy consumption, calculating the remaining battery lifetime, predicting needed energy for energy-harvesting nodes, and detecting anomalies. In this paper, we introduce a Long Short-Term Memory Projection (LSTMP)-based BLE energy consumption prediction framework together with a dataset for a healthcare application scenario where BLE is widely adopted. Unlike radio-focused theoretical energy models, our framework provides a comprehensive energy consumption prediction, considering all components of the IoT node, including the radio, sensor as well as microcontroller unit (MCU). Our measurement-based results show that the proposed framework predicts the energy consumption of different BLE nodes with a Mean Absolute Percentage Error (MAPE) of up to 12%, giving comparable accuracy to state-of-the-art energy consumption prediction with a five times smaller prediction model size.
|
2103.06624
|
Huan Zhang
|
Shiqi Wang, Huan Zhang, Kaidi Xu, Xue Lin, Suman Jana, Cho-Jui Hsieh,
J. Zico Kolter
|
Beta-CROWN: Efficient Bound Propagation with Per-neuron Split
Constraints for Complete and Incomplete Neural Network Robustness
Verification
|
Shiqi Wang, Huan Zhang and Kaidi Xu contributed equally. Accepted by
NeurIPS 2021
| null | null | null |
cs.LG cs.AI cs.CR stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Bound propagation based incomplete neural network verifiers such as CROWN are
very efficient and can significantly accelerate branch-and-bound (BaB) based
complete verification of neural networks. However, bound propagation cannot
fully handle the neuron split constraints introduced by BaB commonly handled by
expensive linear programming (LP) solvers, leading to loose bounds and hurting
verification efficiency. In this work, we develop $\beta$-CROWN, a new bound
propagation based method that can fully encode neuron splits via optimizable
parameters $\beta$ constructed from either primal or dual space. When jointly
optimized in intermediate layers, $\beta$-CROWN generally produces better
bounds than typical LP verifiers with neuron split constraints, while being as
efficient and parallelizable as CROWN on GPUs. Applied to complete robustness
verification benchmarks, $\beta$-CROWN with BaB is up to three orders of
magnitude faster than LP-based BaB methods, and is notably faster than all
existing approaches while producing lower timeout rates. By terminating BaB
early, our method can also be used for efficient incomplete verification. We
consistently achieve higher verified accuracy in many settings compared to
powerful incomplete verifiers, including those based on convex barrier breaking
techniques. Compared to the typically tightest but very costly semidefinite
programming (SDP) based incomplete verifiers, we obtain higher verified
accuracy with three orders of magnitude less verification time. Our algorithm
empowered the $\alpha,\!\beta$-CROWN (alpha-beta-CROWN) verifier, the winning
tool in VNN-COMP 2021. Our code is available at http://PaperCode.cc/BetaCROWN
|
[
{
"created": "Thu, 11 Mar 2021 11:56:54 GMT",
"version": "v1"
},
{
"created": "Sun, 31 Oct 2021 22:51:18 GMT",
"version": "v2"
}
] |
2021-11-02
|
[
[
"Wang",
"Shiqi",
""
],
[
"Zhang",
"Huan",
""
],
[
"Xu",
"Kaidi",
""
],
[
"Lin",
"Xue",
""
],
[
"Jana",
"Suman",
""
],
[
"Hsieh",
"Cho-Jui",
""
],
[
"Kolter",
"J. Zico",
""
]
] |
Bound propagation based incomplete neural network verifiers such as CROWN are very efficient and can significantly accelerate branch-and-bound (BaB) based complete verification of neural networks. However, bound propagation cannot fully handle the neuron split constraints introduced by BaB commonly handled by expensive linear programming (LP) solvers, leading to loose bounds and hurting verification efficiency. In this work, we develop $\beta$-CROWN, a new bound propagation based method that can fully encode neuron splits via optimizable parameters $\beta$ constructed from either primal or dual space. When jointly optimized in intermediate layers, $\beta$-CROWN generally produces better bounds than typical LP verifiers with neuron split constraints, while being as efficient and parallelizable as CROWN on GPUs. Applied to complete robustness verification benchmarks, $\beta$-CROWN with BaB is up to three orders of magnitude faster than LP-based BaB methods, and is notably faster than all existing approaches while producing lower timeout rates. By terminating BaB early, our method can also be used for efficient incomplete verification. We consistently achieve higher verified accuracy in many settings compared to powerful incomplete verifiers, including those based on convex barrier breaking techniques. Compared to the typically tightest but very costly semidefinite programming (SDP) based incomplete verifiers, we obtain higher verified accuracy with three orders of magnitude less verification time. Our algorithm empowered the $\alpha,\!\beta$-CROWN (alpha-beta-CROWN) verifier, the winning tool in VNN-COMP 2021. Our code is available at http://PaperCode.cc/BetaCROWN
|
2407.20608
|
Otso Haavisto
|
Otso Haavisto and Robin Welsch
|
Questionnaires for Everyone: Streamlining Cross-Cultural Questionnaire
Adaptation with GPT-Based Translation Quality Evaluation
|
19 pages, 13 figures
| null | null | null |
cs.HC cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Adapting questionnaires to new languages is a resource-intensive process
often requiring the hiring of multiple independent translators, which limits
the ability of researchers to conduct cross-cultural research and effectively
creates inequalities in research and society. This work presents a prototype
tool that can expedite the questionnaire translation process. The tool
incorporates forward-backward translation using DeepL alongside GPT-4-generated
translation quality evaluations and improvement suggestions. We conducted two
online studies in which participants translated questionnaires from English to
either German (Study 1; n=10) or Portuguese (Study 2; n=20) using our
prototype. To evaluate the quality of the translations created using the tool,
evaluation scores between conventionally translated and tool-supported versions
were compared. Our results indicate that integrating LLM-generated translation
quality evaluations and suggestions for improvement can help users
independently attain results similar to those provided by conventional,
non-NLP-supported translation methods. This is the first step towards more
equitable questionnaire-based research, powered by AI.
|
[
{
"created": "Tue, 30 Jul 2024 07:34:40 GMT",
"version": "v1"
}
] |
2024-07-31
|
[
[
"Haavisto",
"Otso",
""
],
[
"Welsch",
"Robin",
""
]
] |
Adapting questionnaires to new languages is a resource-intensive process often requiring the hiring of multiple independent translators, which limits the ability of researchers to conduct cross-cultural research and effectively creates inequalities in research and society. This work presents a prototype tool that can expedite the questionnaire translation process. The tool incorporates forward-backward translation using DeepL alongside GPT-4-generated translation quality evaluations and improvement suggestions. We conducted two online studies in which participants translated questionnaires from English to either German (Study 1; n=10) or Portuguese (Study 2; n=20) using our prototype. To evaluate the quality of the translations created using the tool, evaluation scores between conventionally translated and tool-supported versions were compared. Our results indicate that integrating LLM-generated translation quality evaluations and suggestions for improvement can help users independently attain results similar to those provided by conventional, non-NLP-supported translation methods. This is the first step towards more equitable questionnaire-based research, powered by AI.
|
2001.04129
|
Jorge Calvo-Zaragoza
|
Antonio-Javier Gallego, Jorge Calvo-Zaragoza, Robert B. Fisher
|
Incremental Unsupervised Domain-Adversarial Training of Neural Networks
|
26 pages, 7 figures
|
IEEE Trans. Neural Networks Learn. Syst. 32(11): 4864-4878 (2021)
|
10.1109/TNNLS.2020.3025954
| null |
cs.LG cs.CV stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In the context of supervised statistical learning, it is typically assumed
that the training set comes from the same distribution that draws the test
samples. When this is not the case, the behavior of the learned model is
unpredictable and becomes dependent upon the degree of similarity between the
distribution of the training set and the distribution of the test set. One of
the research topics that investigates this scenario is referred to as domain
adaptation. Deep neural networks brought dramatic advances in pattern
recognition and that is why there have been many attempts to provide good
domain adaptation algorithms for these models. Here we take a different avenue
and approach the problem from an incremental point of view, where the model is
adapted to the new domain iteratively. We make use of an existing unsupervised
domain-adaptation algorithm to identify the target samples on which there is
greater confidence about their true label. The output of the model is analyzed
in different ways to determine the candidate samples. The selected set is then
added to the source training set by considering the labels provided by the
network as ground truth, and the process is repeated until all target samples
are labelled. Our results report a clear improvement with respect to the
non-incremental case in several datasets, also outperforming other
state-of-the-art domain adaptation algorithms.
|
[
{
"created": "Mon, 13 Jan 2020 09:54:35 GMT",
"version": "v1"
}
] |
2022-05-12
|
[
[
"Gallego",
"Antonio-Javier",
""
],
[
"Calvo-Zaragoza",
"Jorge",
""
],
[
"Fisher",
"Robert B.",
""
]
] |
In the context of supervised statistical learning, it is typically assumed that the training set comes from the same distribution that draws the test samples. When this is not the case, the behavior of the learned model is unpredictable and becomes dependent upon the degree of similarity between the distribution of the training set and the distribution of the test set. One of the research topics that investigates this scenario is referred to as domain adaptation. Deep neural networks brought dramatic advances in pattern recognition and that is why there have been many attempts to provide good domain adaptation algorithms for these models. Here we take a different avenue and approach the problem from an incremental point of view, where the model is adapted to the new domain iteratively. We make use of an existing unsupervised domain-adaptation algorithm to identify the target samples on which there is greater confidence about their true label. The output of the model is analyzed in different ways to determine the candidate samples. The selected set is then added to the source training set by considering the labels provided by the network as ground truth, and the process is repeated until all target samples are labelled. Our results report a clear improvement with respect to the non-incremental case in several datasets, also outperforming other state-of-the-art domain adaptation algorithms.
|
1903.03214
|
Stewart Jamieson
|
Yogesh Girdhar, Levi Cai, Stewart Jamieson, Nathan McGuire, Genevieve
Flaspohler, Stefano Suman, Brian Claus
|
Streaming Scene Maps for Co-Robotic Exploration in Bandwidth Limited
Environments
|
8 pages, 6 figures, accepted for presentation in IEEE Int. Conf. on
Robotics and Automation, ICRA '19, Montreal, Canada, May 2019
|
2019 IEEE International Conference on Robotics and Automation
(ICRA), Montreal, QC, Canada, 2019, pp. 7940-7946
|
10.1109/ICRA.2019.8794132
| null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper proposes a bandwidth tunable technique for real-time probabilistic
scene modeling and mapping to enable co-robotic exploration in communication
constrained environments such as the deep sea. The parameters of the system
enable the user to characterize the scene complexity represented by the map,
which in turn determines the bandwidth requirements. The approach is
demonstrated using an underwater robot that learns an unsupervised scene model
of the environment and then uses this scene model to communicate the spatial
distribution of various high-level semantic scene constructs to a human
operator. Preliminary experiments in an artificially constructed tank
environment as well as simulated missions over a 10m$\times$10m coral reef
using real data show the tunability of the maps to different bandwidth
constraints and science interests. To our knowledge this is the first paper to
quantify how the free parameters of the unsupervised scene model impact both
the scientific utility of and bandwidth required to communicate the resulting
scene model.
|
[
{
"created": "Thu, 7 Mar 2019 23:05:23 GMT",
"version": "v1"
}
] |
2020-03-09
|
[
[
"Girdhar",
"Yogesh",
""
],
[
"Cai",
"Levi",
""
],
[
"Jamieson",
"Stewart",
""
],
[
"McGuire",
"Nathan",
""
],
[
"Flaspohler",
"Genevieve",
""
],
[
"Suman",
"Stefano",
""
],
[
"Claus",
"Brian",
""
]
] |
This paper proposes a bandwidth tunable technique for real-time probabilistic scene modeling and mapping to enable co-robotic exploration in communication constrained environments such as the deep sea. The parameters of the system enable the user to characterize the scene complexity represented by the map, which in turn determines the bandwidth requirements. The approach is demonstrated using an underwater robot that learns an unsupervised scene model of the environment and then uses this scene model to communicate the spatial distribution of various high-level semantic scene constructs to a human operator. Preliminary experiments in an artificially constructed tank environment as well as simulated missions over a 10m$\times$10m coral reef using real data show the tunability of the maps to different bandwidth constraints and science interests. To our knowledge this is the first paper to quantify how the free parameters of the unsupervised scene model impact both the scientific utility of and bandwidth required to communicate the resulting scene model.
|
2104.08742
|
Rik Koncel-Kedziorski
|
Rik Koncel-Kedziorski and Noah A. Smith
|
Go Forth and Prosper: Language Modeling with Ancient Textual History
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce a technique for improving document-level language models (LM) by
leveraging "ancient history": text that is outside the LM's current context
window. We learn an auxiliary function to select spans from the ancient history
which can help the LM to predict future text. The selected text spans are then
copied directly into the LM's context window, replacing less predictive spans.
This method can improve perplexity of pretrained LMs with no updates to the
LM's own parameters. We further observe that an auxiliary function trained in a
specific textual domain like Wikipedia will also work in a substantially
different domain such as scientific publications. With this technique we see a
7 percent perplexity reduction on Wikipedia articles, and a 12 percent
perplexity reduction on scientific texts.
|
[
{
"created": "Sun, 18 Apr 2021 06:57:30 GMT",
"version": "v1"
}
] |
2021-04-20
|
[
[
"Koncel-Kedziorski",
"Rik",
""
],
[
"Smith",
"Noah A.",
""
]
] |
We introduce a technique for improving document-level language models (LM) by leveraging "ancient history": text that is outside the LM's current context window. We learn an auxiliary function to select spans from the ancient history which can help the LM to predict future text. The selected text spans are then copied directly into the LM's context window, replacing less predictive spans. This method can improve perplexity of pretrained LMs with no updates to the LM's own parameters. We further observe that an auxiliary function trained in a specific textual domain like Wikipedia will also work in a substantially different domain such as scientific publications. With this technique we see a 7 percent perplexity reduction on Wikipedia articles, and a 12 percent perplexity reduction on scientific texts.
|
1810.07088
|
Shaofeng Yuan
|
Yunan Wu, Feng Yang, Ying Liu, Xuefan Zha, Shaofeng Yuan
|
A Comparison of 1-D and 2-D Deep Convolutional Neural Networks in ECG
Classification
|
4 pages, 5 figures, 3 tables
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Effective detection of arrhythmia is an important task in the remote
monitoring of electrocardiogram (ECG). Traditional ECG recognition depends
on clinicians' experience and judgment, but the results are prone to human
error due to fatigue. To solve this problem, an ECG
signal classification method based on the images is presented to classify ECG
signals into normal and abnormal beats by using two-dimensional convolutional
neural networks (2D-CNNs). First, we compare the accuracy and robustness
between one-dimensional ECG signal input method and two-dimensional image input
method in AlexNet network. Then, in order to alleviate the overfitting problem
in two-dimensional network, we initialize AlexNet-like network with weights
trained on ImageNet, to fit the training ECG images and fine-tune the model,
and to further improve the accuracy and robustness of ECG classification. The
performance evaluated on the MIT-BIH arrhythmia database demonstrates that the
proposed method can achieve the accuracy of 98% and maintain high accuracy
within an SNR range from 20 dB to 35 dB. The experiments show that 2D-CNNs
initialized with AlexNet weights perform better than the one-dimensional
signal method without a large-scale dataset.
|
[
{
"created": "Tue, 16 Oct 2018 15:40:33 GMT",
"version": "v1"
}
] |
2018-10-17
|
[
[
"Wu",
"Yunan",
""
],
[
"Yang",
"Feng",
""
],
[
"Liu",
"Ying",
""
],
[
"Zha",
"Xuefan",
""
],
[
"Yuan",
"Shaofeng",
""
]
] |
Effective detection of arrhythmia is an important task in the remote monitoring of electrocardiogram (ECG). Traditional ECG recognition depends on clinicians' experience and judgment, but the results are prone to human error due to fatigue. To solve this problem, an ECG signal classification method based on the images is presented to classify ECG signals into normal and abnormal beats by using two-dimensional convolutional neural networks (2D-CNNs). First, we compare the accuracy and robustness between one-dimensional ECG signal input method and two-dimensional image input method in AlexNet network. Then, in order to alleviate the overfitting problem in two-dimensional network, we initialize AlexNet-like network with weights trained on ImageNet, to fit the training ECG images and fine-tune the model, and to further improve the accuracy and robustness of ECG classification. The performance evaluated on the MIT-BIH arrhythmia database demonstrates that the proposed method can achieve the accuracy of 98% and maintain high accuracy within an SNR range from 20 dB to 35 dB. The experiments show that 2D-CNNs initialized with AlexNet weights perform better than the one-dimensional signal method without a large-scale dataset.
|
2209.03826
|
Felix Engelmann
|
Pascal Oser, Felix Engelmann, Stefan L\"uders, Frank Kargl
|
Evaluating the Future Device Security Risk Indicator for Hundreds of IoT
Devices
|
accepted at ESORICS STM22 workshop
| null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
IoT devices are present in many, especially corporate and sensitive, networks
and regularly introduce security risks due to slow vendor responses to
vulnerabilities and high difficulty of patching. In this paper, we want to
evaluate to what extent the development of future risk of IoT devices due to
new and unpatched vulnerabilities can be predicted based on historic
information. For this analysis, we build on existing prediction algorithms
available in the SAFER framework (prophet and ARIMA) which we evaluate by means
of a large data-set of vulnerabilities and patches from 793 IoT devices. Our
analysis shows that the SAFER framework can predict a correct future risk for
91% of the devices, demonstrating its applicability. We conclude that this
approach is a reliable means for network operators to efficiently detect and
act on risks emanating from IoT devices in their networks.
|
[
{
"created": "Thu, 8 Sep 2022 14:00:48 GMT",
"version": "v1"
},
{
"created": "Fri, 16 Sep 2022 12:19:16 GMT",
"version": "v2"
}
] |
2022-09-19
|
[
[
"Oser",
"Pascal",
""
],
[
"Engelmann",
"Felix",
""
],
[
"Lüders",
"Stefan",
""
],
[
"Kargl",
"Frank",
""
]
] |
IoT devices are present in many, especially corporate and sensitive, networks and regularly introduce security risks due to slow vendor responses to vulnerabilities and high difficulty of patching. In this paper, we want to evaluate to what extent the development of future risk of IoT devices due to new and unpatched vulnerabilities can be predicted based on historic information. For this analysis, we build on existing prediction algorithms available in the SAFER framework (prophet and ARIMA) which we evaluate by means of a large data-set of vulnerabilities and patches from 793 IoT devices. Our analysis shows that the SAFER framework can predict a correct future risk for 91% of the devices, demonstrating its applicability. We conclude that this approach is a reliable means for network operators to efficiently detect and act on risks emanating from IoT devices in their networks.
|
2304.10348
|
Bata Vasic Dr
|
Bata Vasic, Nithin Raveendran and Bane Vasic
|
Neuro-OSVETA: A Robust Watermarking of 3D Meshes
|
10 pages, 5 figures
|
Proceedings of the International Telemetering Conference (ITC
2019), ISSN 1546-2188, vol. 55, pp. 387 - 396, Las Vegas, NV, USA, October 21
- 24, 2019
| null | null |
cs.MM
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Practical watermarking schemes for copyright protection of 3D meshes are
required to be blind and robust to attacks and errors. In this paper, we
present the latest developments in 3D blind watermarking, with special
emphasis on our Ordered Statistics Vertex Extraction and Tracing Algorithm
(OSVETA) and its improvements. OSVETA combines quantization index modulation
(QIM) and error correction coding with novel ways of judiciously selecting
mesh vertices that are stable under mesh simplification. The technique we
propose in this paper offers a systematic, neural-network-based method for
vertex selection that replaces the heuristic approach of OSVETA. The
resulting Neuro-OSVETA enables more precise mesh geometry estimation and
better curvature and topological feature estimation. These enhancements
yield a more accurate identification of stable vertices and a significant
reduction in deletion probability.
|
[
{
"created": "Thu, 20 Apr 2023 14:39:24 GMT",
"version": "v1"
}
] |
2023-04-21
|
[
[
"Vasc",
"Bata",
""
],
[
"Raveendran",
"Nithin",
""
],
[
"Vasic",
"Bane",
""
]
] |
Practical watermarking schemes for copyright protection of 3D meshes are required to be blind and robust to attacks and errors. In this paper, we present the latest developments in 3D blind watermarking, with special emphasis on our Ordered Statistics Vertex Extraction and Tracing Algorithm (OSVETA) and its improvements. OSVETA combines quantization index modulation (QIM) and error correction coding with novel ways of judiciously selecting mesh vertices that are stable under mesh simplification. The technique we propose in this paper offers a systematic, neural-network-based method for vertex selection that replaces the heuristic approach of OSVETA. The resulting Neuro-OSVETA enables more precise mesh geometry estimation and better curvature and topological feature estimation. These enhancements yield a more accurate identification of stable vertices and a significant reduction in deletion probability.
|
1811.10396
|
Arash Ardakani
|
Arash Ardakani, Zhengyun Ji, Warren J. Gross
|
Learning to Skip Ineffectual Recurrent Computations in LSTMs
|
Accepted as a conference paper for presentation at DATE 2019
| null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Long Short-Term Memory (LSTM) is a special class of recurrent neural network,
which has shown remarkable successes in processing sequential data. The typical
architecture of an LSTM involves a set of states and gates: the states retain
information over arbitrary time intervals and the gates regulate the flow of
information. Due to the recursive nature of LSTMs, they are computationally
intensive to deploy on edge devices with limited hardware resources. To reduce
the computational complexity of LSTMs, we first introduce a method that learns
to retain only the important information in the states by pruning redundant
information. We then show that our method can prune over 90% of information in
the states without incurring any accuracy degradation over a set of temporal
tasks. This observation suggests that a large fraction of the recurrent
computations are ineffectual and can be avoided to speed up the process during
the inference as they involve noncontributory multiplications/accumulations
with zero-valued states. Finally, we introduce a custom hardware accelerator
that can perform the recurrent computations using both sparse and dense states.
Experimental measurements show that performing the computations using the
sparse states speeds up the process and improves energy efficiency by up to
5.2x when compared to implementation results of the accelerator performing the
computations using dense states.
|
[
{
"created": "Fri, 9 Nov 2018 15:51:40 GMT",
"version": "v1"
},
{
"created": "Thu, 29 Nov 2018 22:44:01 GMT",
"version": "v2"
}
] |
2018-12-03
|
[
[
"Ardakani",
"Arash",
""
],
[
"Ji",
"Zhengyun",
""
],
[
"Gross",
"Warren J.",
""
]
] |
Long Short-Term Memory (LSTM) is a special class of recurrent neural network, which has shown remarkable successes in processing sequential data. The typical architecture of an LSTM involves a set of states and gates: the states retain information over arbitrary time intervals and the gates regulate the flow of information. Due to the recursive nature of LSTMs, they are computationally intensive to deploy on edge devices with limited hardware resources. To reduce the computational complexity of LSTMs, we first introduce a method that learns to retain only the important information in the states by pruning redundant information. We then show that our method can prune over 90% of information in the states without incurring any accuracy degradation over a set of temporal tasks. This observation suggests that a large fraction of the recurrent computations are ineffectual and can be avoided to speed up the process during the inference as they involve noncontributory multiplications/accumulations with zero-valued states. Finally, we introduce a custom hardware accelerator that can perform the recurrent computations using both sparse and dense states. Experimental measurements show that performing the computations using the sparse states speeds up the process and improves energy efficiency by up to 5.2x when compared to implementation results of the accelerator performing the computations using dense states.
|
2003.02518
|
V\'aclav Rozho\v{n}
|
V\'aclav Rozho\v{n}
|
Simple and sharp analysis of k-means||
| null | null | null | null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a simple analysis of k-means|| (Bahmani et al., PVLDB 2012) -- a
distributed variant of the k-means++ algorithm (Arthur and Vassilvitskii, SODA
2007). Moreover, the bound on the number of rounds is improved from $O(\log n)$
to $O(\log n / \log\log n)$, which we show to be tight.
|
[
{
"created": "Thu, 5 Mar 2020 10:18:48 GMT",
"version": "v1"
},
{
"created": "Thu, 2 Jul 2020 13:34:45 GMT",
"version": "v2"
}
] |
2020-07-03
|
[
[
"Rozhoň",
"Václav",
""
]
] |
We present a simple analysis of k-means|| (Bahmani et al., PVLDB 2012) -- a distributed variant of the k-means++ algorithm (Arthur and Vassilvitskii, SODA 2007). Moreover, the bound on the number of rounds is improved from $O(\log n)$ to $O(\log n / \log\log n)$, which we show to be tight.
|
1405.4041
|
Ethan Jackson
|
Ethan K. Jackson
|
A Module System for Domain-Specific Languages
|
Appearing in International Conference on Logic Programming (ICLP)
2014
|
Theory and Practice of Logic Programming 14 (2014) 771-785
|
10.1017/S1471068414000337
| null |
cs.PL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Domain-specific languages (DSLs) are routinely created to simplify difficult
or specialized programming tasks. They expose useful abstractions and design
patterns in the form of language constructs, provide static semantics to
eagerly detect misuse of these constructs, and dynamic semantics to completely
define how language constructs interact. However, implementing and composing
DSLs is a non-trivial task, and there is a lack of tools and techniques.
We address this problem by presenting a complete module system over LP for
DSL construction, reuse, and composition. LP is already useful for DSL design,
because it supports executable language specifications using notations familiar
to language designers. We extend LP with a module system that is simple (with a
few concepts), succinct (for key DSL specification scenarios), and composable
(on the level of languages, compilers, and programs). These design choices
reflect our use of LP for industrial DSL design. Our module system has been
implemented in the FORMULA language, and was used to build key Windows 8 device
drivers via DSLs. Though we present our module system as it actually appears in
our FORMULA language, our emphasis is on concepts adaptable to other LP
languages.
|
[
{
"created": "Fri, 16 May 2014 00:47:10 GMT",
"version": "v1"
}
] |
2020-02-19
|
[
[
"Jackson",
"Ethan K.",
""
]
] |
Domain-specific languages (DSLs) are routinely created to simplify difficult or specialized programming tasks. They expose useful abstractions and design patterns in the form of language constructs, provide static semantics to eagerly detect misuse of these constructs, and dynamic semantics to completely define how language constructs interact. However, implementing and composing DSLs is a non-trivial task, and there is a lack of tools and techniques. We address this problem by presenting a complete module system over LP for DSL construction, reuse, and composition. LP is already useful for DSL design, because it supports executable language specifications using notations familiar to language designers. We extend LP with a module system that is simple (with a few concepts), succinct (for key DSL specification scenarios), and composable (on the level of languages, compilers, and programs). These design choices reflect our use of LP for industrial DSL design. Our module system has been implemented in the FORMULA language, and was used to build key Windows 8 device drivers via DSLs. Though we present our module system as it actually appears in our FORMULA language, our emphasis is on concepts adaptable to other LP languages.
|